IAES International Journal of Artificial Intelligence (IJ-AI)
Vol. 12, No. 1, March 2023, pp. 220~231
ISSN: 2252-8938, DOI: 10.11591/ijai.v12.i1.pp220-231
Journal homepage: http://ijai.iaescore.com
Intelligent system for Islamic prayer (salat) posture monitoring

Md Mozasser Rahman¹, Rayan Abbas Ahmed Alharazi², Muhammad Khairul Imran b Zainal Badri²

¹Department of Mechanical Engineering Technology, Faculty of Engineering Technology, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
²Department of Mechanical Engineering, Kulliyyah of Engineering, International Islamic University Malaysia, Selangor, Malaysia
Article Info

Article history:
Received Sep 1, 2021
Revised Aug 9, 2022
Accepted Sep 7, 2022

Keywords:
Image processing
Intelligent
Monitoring
Salat (Islamic prayer)
Salat posture

ABSTRACT

This paper introduces an intelligent salat monitoring and training system based on machine vision and image processing. Prayer (salat) is the second pillar of Islam and the most important and fundamental act of worship, which believers must perform five times a day. From a gesture perspective, salat consists of predefined human postures that must be performed in a precise manner. Plenty of training and correction material is available on the internet and social media, yet some people still perform these postures incorrectly, either because they are new to salat or because they learned the prayer incorrectly. Furthermore, the time spent in each posture has to be balanced. To address these issues, we propose an assistive intelligence framework that guides worshippers in evaluating the correctness of their prayer postures. Image comparison and pattern matching are used to study the system’s effectiveness, combining several algorithms, such as Euclidean distance, template matching, and grey-level correlation, to compare the user’s images against a database. The experimental results, for both correct and incorrect salat performance, are shown as pictures and graphs for each salat posture.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Md Mozasser Rahman
Department of Mechanical Engineering Technology, Faculty of Engineering Technology, Universiti Tun Hussein Onn Malaysia
Km 1, Jalan Panchor, 84600 Pagoh, Muar, Johor Darul Ta’zim, Malaysia
Email: mozasser@uthm.edu.my
1. INTRODUCTION
Salat (prayer) is one of the main pillars of Islam and one of the most important aspects of the faith. The Prophet Muhammad (peace be upon him) received the commandment of salat during Isra’ and Mi’raj (the night journey), giving humanity renewed guidance on how to worship the one true God, The Almighty Allah (Glorious is He and He is Exalted), after that light had been lost. At that time, Muslims learned how to perform salat by following the words and actions of Prophet Muhammad (peace be upon him) directly, using the senses of sight and hearing. In other words, salat was then learned solely through the human senses, by observing the correct movements and listening to the correct words. Although the Muslims of that era could only learn salat through the Prophet’s words and actions, the teaching and learning were highly effective, as Muslims today still perform salat in the same way as the Prophet.
As time passes, some Muslims tend to forget the proper way to perform salat, being preoccupied with worldly affairs. Today, technology is improving at a very fast pace, and a great deal of research and development has been conducted to improve our lives. This places a responsibility on Muslim researchers to develop technology that benefits both this world and the hereafter.
With this goal, developing the “Intelligent Salat Monitoring and Training System” as an educational tool could help many Muslims learn and recognize the proper way of performing salat. A few studies have addressed the activity monitoring of salat. Alobaid and Rasheed [1] and Al-Ghannam and Al-Dossari [2] used smartphone technology to recognize salat activities, while Rabbi et al. [3] and Ibrahim and Ahmad [4] assessed salat activities using electromyographic (EMG) signals. The key technologies to learn and master in order to build a salat inspection and training system for the Muslim community are therefore machine vision and image processing.
Computer vision is used to inspect and track human movement in various fields, such as sport, health care, and even games. As Muslims, we usually learn how to perform salat by following others: we observe their postures and movements with our eyes and then judge whether our own salat is correct or not. Combining this technology with the religious aspect brings many advantages. Using the system, the user can learn the proper way of salat by looking at the correct postures stored in the database, and the system provides proper feedback in return. The feedback can be in the form of words and numbers indicating the percentage of error in the salat movement.
Many researchers have proposed algorithms to detect human parts, such as the face and hands, as well as movements and postures; some of these algorithms can detect the posture of the human body [5]. Different algorithms impose different requirements on the system, so careful consideration is needed to obtain a fully functional system. The approaches and algorithms used for the salat inspection and training system must be chosen so that the images to be measured and compared retain the information the system needs. Elements such as viewing angle, size, colour, and texture of the image need to be measured using multiple algorithms to obtain accurate results, so that the system does not give wrong feedback to the user. To address these problems, MATLAB is used to implement and test the proposed methods.
Muslim communities, as well as new converts to Islam across the world, are in dire need of basic knowledge of salat. By developing the salat inspection and training system, this knowledge can be taught and shared with ease. The system addresses several problems: i) it helps Muslims across the world learn the correct way of performing salat anytime and anywhere; ii) it reduces the time, cost, and space needed for learning salat; iii) it helps learners avoid being misled by unqualified preachers; iv) it supports Muslims who feel embarrassed to learn salat from others; and v) it helps newly converted Muslims learn salat with ease.
Several movements in salat are considered essential [6]. These movements and postures must be performed correctly for the salat to be accepted by Allah. Notably, salat comprises a sequence of movements that forms one complete cycle, known as a raka’ah. The sequence of one complete cycle is shown in Figure 1.
Figure 1. Sequences for one complete cycle in salat [6]
2. RELATED WORKS
2.1. Human body modelling in machine vision
Human motion and pose recognition methods can be categorized into two types: model-based and appearance-based. Model-based object tracking algorithms rely on simple CAD (computer-aided design) wire models of objects, as shown in Figure 2.
Using this kind of model, we can project the start and end points of the lines correctly onto the image plane, enabling real-time tracking of objects at a small computational cost. Appearance-based methods use no a priori knowledge of the data present in the image. Instead, they analyse the data using the statistics of the available dataset in the database to extract the modes, grouping the data in the best possible way. According to Azad et al., the appearance-based method uses various algorithms to describe the object [7]. In other words, appearance-based approaches are more reliable in many situations because they do not require a specific object model.
Figure 2. Illustration of the object using wire model [7]
2.2. Representation of human figure
2.2.1. Bounding box
One of the simplest representations of the human body is the bounding box. Although its use is limited, this model is convenient when the human body occupies only a small part of the picture, because it uses only a few pixels. This reduces the complexity of image processing, at the cost of accuracy. Figure 3 shows the bounding box as a human representation in human body modelling [8].
Figure 3. Bounding box representation of human body [8]
2.2.2. Stick representation
The stick or bone figure representation is a typical way of representing the human body in machine vision and image processing. The sticks act as bones and form the pose or movement of the human. Figure 4 shows the stick representation of the human body. Its disadvantage is that some movements, such as sitting, are difficult to capture because of occlusion [9].
2.2.3. Multi-dimensional representation
Hand gestures are an important way for humans to convey information and express intuitive intention, offering a high degree of differentiation, strong flexibility, and efficient information transmission.
This makes hand gesture recognition (HGR) one of the research hotspots in the field of human-machine interfaces (HMI) [10]. The two-dimensional (2D) contour representation projects the human body from three-dimensional (3D) space onto the two-dimensional image plane [11]–[13], approximating the body with deformable contours, ribbons, or cardboard models [14]. Figure 5 shows 2D images of the hand. Three-dimensional (3D) representations describe the parts of the human body in 3D space using a combination of cylinders, as shown in Figure 6 [15]. Other shapes, such as cones or spheres, can also be used to represent the human body in 3D.
Figure 4. Stick figure representation of human body [9]
Figure 5. Pictures of 2D hand [12]
Figure 6. 3D human motion representation [15]
2.3. Algorithm for pattern and image matching
2.3.1. Histogram of oriented gradients
A histogram of oriented gradients (HOG) can be used for image-matching purposes such as face recognition [16]–[18]. HOG measures how gradient orientations are distributed within local regions of the image. It is very useful for matching and tracking textured objects with inorganic shapes. In computer vision applications involving human hands, HOG has been reported to perform better than other feature extraction models [19]. HOG can be applied to base feature images to generate feature descriptors [20].
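As a hedged illustration, a HOG descriptor of the kind described above can be extracted with scikit-image in Python; the paper's own feature extraction is done in MATLAB, and the cell and block sizes below are assumptions rather than the authors' settings.

```python
import numpy as np
from skimage.feature import hog

frame = np.random.rand(128, 64)             # stand-in for a grayscale posture image

descriptor = hog(frame,
                 orientations=9,            # gradient-orientation bins per cell
                 pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2))    # result: one long feature vector

print(descriptor.shape)
# Two postures can then be compared by the distance between their descriptors.
```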
2.3.2. Hidden Markov model
The hidden Markov model (HMM) is well known in speech recognition and is a statistical model for extracting features from sequential data. According to Wang et al., HMMs are reliable for analysing time-varying data under varying spatio-temporal conditions [21]. In the matching procedure, the probability that the HMM generates the test symbol sequence corresponding to the features of the input image is computed. HMMs are considered among the best algorithms for matching human motion patterns because they can handle uncertainty within their stochastic framework [22]. However, a significant disadvantage of this method is that the HMM is inefficient at handling three or more independent processes [23].
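For illustration, the probability that an HMM generates a test symbol sequence can be computed with the forward algorithm. The following NumPy sketch uses a toy two-state model; it is an assumed example, not the matching procedure of any cited work.

```python
import numpy as np

def sequence_likelihood(pi, A, B, obs):
    """Forward algorithm. pi: initial state probabilities (N,), A: transition
    matrix (N, N), B: emission matrix (N, M), obs: observed symbol indices."""
    alpha = pi * B[:, obs[0]]                 # initialise with the first symbol
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]    # propagate and absorb next symbol
    return alpha.sum()                        # total probability of the sequence

# Toy example: two hidden states, three observable symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
print(sequence_likelihood(pi, A, B, [0, 1, 2]))
```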
2.3.3. Euclidean distance
Euclidean distance can define an image metric efficiently. It measures the straight-line distance between two points in Euclidean space. According to Wang et al., this method amounts to a summation of pixel-wise intensity differences [24]. They noted that with the traditional Euclidean distance, even a small deformation may produce a large distance. To solve this problem, they proposed an alternative image distance measure whose keys are simplicity of computation, relative insensitivity to small deformations, and ease of embedding into most powerful image recognition methods.
2.3.4. Temporal template
Bobick and Davis used temporal templates to recognize human movements by constructing a vector image and matching it against images of known movements stored in a database [25]. Two types of features were used, namely the motion-energy image and the motion-history image. These methods have many advantages: they support direct recognition of motion, perform temporal segmentation instantly, are invariant to linear changes in speed, and run in real time on a standard platform. Some limitations were also noted, such as the inability to handle incidental motion and occasional occlusion at certain points.
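As a sketch of the motion-history idea, the update below sets moving pixels to a maximum value and lets the rest decay over time; the decay constant and names are illustrative assumptions, not the parameters of [25].

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """motion_mask: boolean array marking pixels that moved in the current frame.
    Moving pixels are set to tau; all others decay by one, floored at zero."""
    return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))

# Example: accumulate an MHI over a sequence of frame-difference masks.
h, w = 120, 160
mhi = np.zeros((h, w))
for frame_diff in np.random.rand(10, h, w) > 0.95:   # stand-in for real masks
    mhi = update_mhi(mhi, frame_diff)
```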
3. METHOD
3.1. Actual picture and mechanical design
To build the salat inspection and training system, polyvinyl chloride (PVC) pipe was used as the base of the design. Portability was prioritized, since a fixed structure would require a large space to place or store. Using PVC pipe, the frame can be assembled and disassembled easily in less than five minutes. The base of the system is built from a combination of plain-ended pipes, equal elbows, end caps, and equal tees. Figure 7(a) shows the actual picture of the system, and the isometric view is shown in Figure 7(b). In the actual picture, two black lines are drawn on the base carpet. The middle black line marks the initial position for at-tawarrok; the user sits there until the system finishes the inspection. The black line located near the back camera marks the initial position for takbiratul ihram, ruku’ and sujud. The user performs each posture of the salat at its respective initial position, marked by the black lines. Two cameras are installed in the system, as shown in Figure 7(b). One camera, mounted higher, is installed at the front to inspect the front of the body, such as the hands and head. The second camera, mounted lower, is installed at the back to inspect the back of the body, such as the legs. The front camera inspects the postures of takbiratul ihram, ruku’ and sujud, while the back camera inspects the posture during at-tawarrok.
Two servo motors are installed below the cameras. Their function is to change the camera angle while recording the video of the user’s salat. The system can be carried and deployed anywhere because of these features. A force-sensing resistor is installed in the base carpet to monitor the user during sujud; it checks whether the parts of the body that should touch the ground, such as the forehead and nose, actually do.
Figure 7. Salat monitoring and training system (a) actual picture and (b) isometric view
3.2. Experimental method
In this study, we adopted a template matching approach because of its simplicity for real-time application. To detect the human body, the RGB (red, green, and blue) colour space is chosen, and the luminance and chrominance are then de-correlated. A given RGB image is converted to a grayscale image using the RGB-to-grayscale conversion equation, in which a weighted combination of the three components produces the grayscale value:

$\mathrm{Gray} = \left(0.2126 \times \mathrm{Red}^{2.2} + 0.7152 \times \mathrm{Green}^{2.2} + 0.0722 \times \mathrm{Blue}^{2.2}\right)^{1/2.2}$ (1)
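As an illustration of (1), a minimal NumPy sketch converts an RGB frame to grayscale as follows; the paper's own implementation is in MATLAB, so this is only an assumed equivalent.

```python
import numpy as np

def to_gray(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1], per Eq. (1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.2126 * r**2.2 + 0.7152 * g**2.2 + 0.0722 * b**2.2) ** (1 / 2.2)

# Example with a random image standing in for a camera frame.
frame = np.random.rand(240, 320, 3)
gray = to_gray(frame)           # shape (240, 320), values in [0, 1]
```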
The input image is converted to grayscale because it must be matched against the database images, which serve as templates. A matching approach must then be chosen. The matching process moves the template to every possible position in the larger source image and computes a numerical index indicating how well the template matches the image at that position. One well-known matching measure is the Euclidean distance. Let $I$ be a grey-level image and $g$ a grey-value template of size $n \times m$:

$d(I, g, r, c) = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{m} \left(I(r+i, c+j) - g(i, j)\right)^{2}}$ (2)

where $(r, c)$ denotes the top-left corner of the template $g$.
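A direct, if slow, NumPy rendering of (2) over all template positions might look as follows; this is an illustrative sketch, not the MATLAB implementation used by the authors.

```python
import numpy as np

def euclidean_distance_map(image, template):
    """Return d(I, g, r, c) of Eq. (2) for every valid top-left corner (r, c)."""
    n, m = template.shape
    H, W = image.shape
    dist = np.empty((H - n + 1, W - m + 1))
    for r in range(H - n + 1):
        for c in range(W - m + 1):
            patch = image[r:r + n, c:c + m]
            dist[r, c] = np.sqrt(np.sum((patch - template) ** 2))
    return dist

# The best match is the position with the smallest distance.
image = np.random.rand(120, 160)
template = image[40:72, 60:92]          # a patch cut from the image itself
d = euclidean_distance_map(image, template)
r_best, c_best = np.unravel_index(np.argmin(d), d.shape)   # -> (40, 60)
```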
A second matching measure, which offers better accuracy and processing time than the Euclidean distance, is the grey-level correlation:

$cor = \dfrac{\sum_{i=0}^{N-1} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=0}^{N-1} (x_i - \bar{x})^{2} \cdot \sum_{i=0}^{N-1} (y_i - \bar{y})^{2}}}$ (3)

where $x$ is the template grey-level image, $\bar{x}$ is the average grey level in the template image, $y$ is the source image section, $\bar{y}$ is the average grey level in the source image section, and $N$ is the number of pixels in the section. The value of $cor$ lies between -1 and +1, with larger values representing a stronger relationship between the two images.
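Equation (3) can be sketched in a few lines of NumPy; again this is an illustrative equivalent of the MATLAB implementation, with assumed variable names.

```python
import numpy as np

def grey_level_correlation(template, section):
    """Normalized cross-correlation of Eq. (3) between a template and an
    equally sized section of the source image."""
    x = template.astype(float).ravel()
    y = section.astype(float).ravel()
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

# cor approaches +1 when the section matches the template.
template = np.random.rand(32, 32)
print(grey_level_correlation(template, template))          # ~1.0
print(grey_level_correlation(template, 1.0 - template))    # ~-1.0
```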
As is well known, the correlation matching result never shows a 100% match, because the images always differ in small details. A threshold should therefore be applied to the correlation result; it can be set at the highest match value obtained. For the feature extraction of this system, HOG descriptors were considered, as shown in Figure 8, to demonstrate how the system performs. Figure 8(a) shows the histogram computed from an image, and the HOG features obtained from the image are shown in Figure 8(b). However, many other descriptors can be used or combined. Other descriptors apply a similar operation, letting image pixels vote according to their intensity values in the range 0–255 (0 for black and 255 for white). Dividing the feature region into log-polar bins, instead of square cells, is another commonly used approach.
To identify an image, the computer relies on such descriptors: the more image descriptors the system is trained with, the lower its error. Moreover, combining multiple types of descriptors, such as the scale-invariant feature transform (SIFT), gradient location and orientation histogram (GLOH), and speeded up robust features (SURF), helps to enhance the performance of the system.
Figure 8. Process of calculating HOG (a) creating histogram from image and (b) visualization of HOG features
of the image
A sample HOG representation of both the correct and the wrong position during takbeerat alehram is shown in Figure 9. The correct position, with the hands raised above the shoulders, and its descriptor are shown in Figure 9(a), whereas the wrong position, with the hands below the shoulders, is shown in Figure 9(b). The descriptor on the right side of each image shows the intensity variation in the image.
Figure 9. Comparison of right and wrong position of takbeerat alehram using HOG representation of
the image (a) correct position and (b) wrong position
Both HGR and HOG descriptors were used to match the two positions, as shown in Figure 10. The error of the result can be recognized from the difference between the two sets of extreme points: the difference in strong corners between the two overlaid images represents the amount of unmatched features, i.e. the error between the two images. The larger this difference, the more incorrect the salat position performed by the worshipper. We therefore increase the number of extracted features by increasing the number of corners, and we threshold the matching result so that the system flags the position as wrong if the difference between the two images exceeds 30%.
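The 30% rule can be summarized by a small decision function; the feature counts are assumed inputs from a hypothetical matcher, shown here only to illustrate the thresholding logic.

```python
def classify_posture(n_matched, n_total, max_unmatched_ratio=0.30):
    """Flag the posture as wrong if more than 30% of features are unmatched."""
    unmatched_ratio = 1.0 - n_matched / n_total
    return "correct" if unmatched_ratio <= max_unmatched_ratio else "wrong"

print(classify_posture(n_matched=85, n_total=100))   # -> correct
print(classify_posture(n_matched=60, n_total=100))   # -> wrong
```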
Figure 10. The result of the matching process using HGR and HOG
4. RESULTS AND DISCUSSION
In this stage, the user operates the system through a graphical user interface installed on a laptop. Figure 11 shows the interface provided by the system. The user can preview the correct posture of salat by clicking the preview button for each posture, and then click the blue start button so that the system begins the inspection. Upon clicking the start button, the video of the person praying with the salat inspection and training system is recorded by the video cameras attached to the system.
Two video cameras are needed to inspect the user. One is located in front of the user to inspect the front of the body during salat, such as the hands during takbiratul ihram. The other camera is located behind the user to inspect the back of the body, such as the legs during at-tawarrok. The video is recorded until a beep sound is heard, indicating that the system has finished capturing the user.
Figure 11. Graphical user interface (GUI) of Intelligent Salat Monitoring System
Once the system finishes recording the video, it filters the region of interest and performs colour conversion from RGB to grayscale. The image is then matched against the database using template matching with the Euclidean distance; grey-level correlation is used in addition to improve the system’s accuracy.
If the salat is performed correctly within the programmed threshold, shown in the graph, a message saying “GOOD PERFORMANCE” pops up in green text. Otherwise, the system suggests the correct posture in red text, indicating that the performance of the salat is poor. This stage applies to takbiratul ihram, ruku’ and at-tawarrok only. For sujud, a dedicated graph of the force readings is shown to the user together with the performance. In this way, the system trains the user until the correct posture is performed.
The front camera records the video during takbiratul ihram, ruku’ and sujud, and the system gives feedback to the user by showing pictures and a graph. Figure 12 depicts a good takbeerat alehram: the left side is a screenshot of the recorded video, the middle graph shows the matching percentage, and the notification image is shown on the right. Figure 13 depicts a bad performance during takbeerat alehram, presented in the same layout.
The performance of a correct ruku’ is shown in Figure 14, again with the screenshot of the recorded video on the left, the matching percentage in the middle, and the notification image on the right. Figure 15 shows an incorrect ruku’ performance: the left image is the template for the incorrect posture stored in the database, followed by the bad ruku’ performance and matching percentages that do not reach the threshold.
If the system finds a frame with a 98% match, a green rectangle appears on the region of interest. If the match lasts for more than three seconds, the posture is considered correct and a notification pops up. If the match does not last more than three seconds, the system notifies the student via a pop-up image with a comment. This is the feedback mechanism of the salat inspection and training system.
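This timing rule can be sketched as follows, assuming an illustrative frame rate of 30 frames per second; the function name and inputs are assumptions, not the paper's MATLAB code.

```python
def posture_is_correct(match_scores, fps=30, threshold=0.98, hold_seconds=3.0):
    """match_scores: per-frame matching percentages in [0, 1]. The posture
    counts as correct only if the score stays at or above the threshold for
    more than hold_seconds."""
    needed = int(hold_seconds * fps)
    run = 0
    for score in match_scores:
        run = run + 1 if score >= threshold else 0
        if run > needed:
            return True
    return False

scores = [0.97] * 30 + [0.99] * 120          # ~1 s below, then ~4 s above
print(posture_is_correct(scores))             # -> True
```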
Figure 12. Takbeerat alehram: from the left, the recorded video, the matching percentage, and the notification image
Figure 13. Takbeerat alehram (front camera), wrong posture performance
Figure 14. Performance of correct ruku’
Figure 15. Performance of wrong ruku’
All the results follow the same feedback concept, except for sujud, where additional information is provided. A graph shows the force-sensing resistor readings while the user performs sujud. Whenever the user’s nose and forehead touch the sensor located on the base carpet, the force triggers the sensor and the reading is captured in MATLAB. A count of 12 force readings during sujud is set as the threshold indicating a good sujud. Figure 16 shows the force sensor readings used to verify the performance of sujud: Figure 16(a) shows the feedback given to the user for a good sujud, while Figure 16(b) indicates a bad sujud.
Figure 16. Force-sensing resistor reading during the sujud showing the performance of (a) good sujud and (b)
bad sujud
5. CONCLUSION
In conclusion, the first objective, learning the correct postures of salat, was met by analysing books of hadith and the works of Muslim scholars in the literature review. The second objective, developing an image-processing algorithm in MATLAB, was also achieved, and the results show quite good performance. The third objective, testing the salat performance and providing feedback to the user, was achieved as well: the matching result pops up a message about the salat performance and trains the user by giving the correct instructions for the current posture. The results are quite accurate, as the proposed method is able to identify and match patterns with a recognition rate of up to 90% and to inform users about their salat performance. The readings in the graph are more accurate when the user performs salat with the system itself, because the camera angle is fixed. However, some results show errors under poor lighting even when the posture is correct, because the pattern matching in MATLAB is confounded when illumination is insufficient; for example, the salat posture may be correct, yet the system keeps giving bad-performance feedback to the user. This issue can be solved by using the system under sufficient light, which increases the overall accuracy of the system. A bright room is recommended to ensure that clear images are captured. The camera angle also needs to be fixed and consistent between the database images and the captured correct and wrong images, so that the system can detect the pattern without error.
ACKNOWLEDGEMENTS
Communication of this research is made possible through monetary assistance by Universiti Tun
Hussein Onn Malaysia and the UTHM Publisher’s Office via Publication Fund E15216.
REFERENCES
[1] O. Alobaid and K. Rasheed, “Prayer activity recognition using an accelerometer sensor,” in 2018 World Congress in Computer
Science, Computer Engineering and Applied Computing, CSCE 2018 - Proceedings of the 2018 International Conference on
Artificial Intelligence, ICAI 2018, 2018, pp. 271–277.
[2] R. Al-Ghannam and H. Al-Dossari, “Prayer activity monitoring and recognition using acceleration features with mobile phone,”
Arabian Journal for Science and Engineering, vol. 41, no. 12, pp. 4967–4979, May 2016, doi: 10.1007/s13369-016-2158-7.
[3] M. F. Rabbi, N. Wahidah Arshad, K. H. Ghazali, R. Abdul Karim, M. Z. Ibrahim, and T. Sikandar, “EMG activity of leg muscles
with knee pain during islamic prayer (salat),” in Proceedings - 2019 IEEE 15th International Colloquium on Signal Processing
and its Applications, CSPA 2019, Mar. 2019, pp. 213–216, doi: 10.1109/CSPA.2019.8696025.
[4] F. Ibrahim and S. A. Ahmad, “Assessment of upper body muscle activity during salat and stretching exercise: A pilot study,” in
Proceedings - IEEE-EMBS International Conference on Biomedical and Health Informatics: Global Grand Challenge of Health
Informatics, BHI 2012, Jan. 2012, pp. 412–415, doi: 10.1109/BHI.2012.6211603.
[5] R. R. Porle, A. Chekima, F. Wong, M. Mamat, N. Parimon, and Y. F. A. Gaus, “Two-dimensional human pose estimation for the
upper body using histogram and template matching techniques,” in 2013 1st International Conference on Artificial Intelligence,
Modelling & Simulation (AIMS), 2013, pp. 249–254, doi: 10.1109/AIMS.2013.46.
[6] N. A. Jaafar, N. A. Ismail, and Y. A. Yusoff, “An investigation of motion tracking for solat movement with dual sensor
approach,” ARPN Journal of Engineering and Applied Sciences, vol. 10, no. 23, pp. 17981–17986, 2015.
[7] P. Azad, T. Asfour, and R. Dillmann, “Combining appearance-based and model-based methods for real-time object recognition
and 6D localization,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2006, pp. 5339–5344,
doi: 10.1109/IROS.2006.282094.
[8] Y. Yang and D. Ramanan, “Articulated pose estimation with flexible mixtures-of-parts,” in Proceedings of the IEEE Computer
Society Conference on Computer Vision and Pattern Recognition, 2011, pp. 1385–1392, doi: 10.1109/CVPR.2011.5995741.
[9] M. Lora, S. Ghidoni, M. Munaro, and E. Menegatti, “A geometric approach to multiple viewpoint human body pose estimation,”
in 2015 European Conference on Mobile Robots, ECMR 2015 - Proceedings, 2015, pp. 1–6, doi: 10.1109/ECMR.2015.7324195.
[10] J. L. Raheja, K. Das, and A. Chaudhary, “Fingertip detection: a fast method with natural hand,” International Journal of
Embedded Systems and Computer Engineering, vol. 3, no. 2, pp. 85–89, 2012, doi: 10.48550/arXiv.1212.0134.
[11] L. Guo, Z. Lu, and L. Yao, “Human-machine interaction sensing technology based on hand gesture recognition: a review,” IEEE
Transactions on Human-Machine Systems, vol. 51, no. 4, pp. 300–309, Aug. 2021, doi: 10.1109/THMS.2021.3086003.
[12] P. Parvathy, K. Subramaniam, G. K. D. Prasanna Venkatesan, P. Karthikaikumar, J. Varghese, and T. Jayasankar, “Development
of hand gesture recognition system using machine learning,” Journal of Ambient Intelligence and Humanized Computing, vol. 12,
no. 6, pp. 6793–6800, 2021, doi: 10.1007/s12652-020-02314-2.
[13] M. M. Islam, S. Siddiqua, and J. Afnan, “Real time hand gesture recognition using different algorithms based on American sign
language,” 2017, doi: 10.1109/ICIVPR.2017.7890854.
[14] J. K. Aggarwal and S. Park, “Human motion: modeling and recognition of actions and interactions,” in Proceedings - 2nd
International Symposium on 3D Data Processing, Visualization, and Transmission. 3DPVT 2004, 2004, pp. 640–647, doi:
10.1109/TDPVT.2004.1335299.
[15] M. W. Lee and R. Nevatia, “Human pose tracking in monocular sequence using multilevel structured models,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 27–38, 2009, doi: 10.1109/TPAMI.2008.35.
[16] P. N. Maraskolhe and A. S. Bhalchandra, “Analysis of facial expression recognition using histogram of oriented gradient (HOG),”
in Proceedings of the 3rd International Conference on Electronics and Communication and Aerospace Technology, ICECA 2019,
2019, pp. 1007–1011, doi: 10.1109/ICECA.2019.8821814.
[17] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings - 2005 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, CVPR 2005, 2005, vol. 1, pp. 886–893, doi: 10.1109/CVPR.2005.177.
[18] S. Zhang and X. Wang, “Human detection and object tracking based on histograms of oriented gradients,” in Proceedings -
International Conference on Natural Computation, 2013, pp. 1349–1353, doi: 10.1109/ICNC.2013.6818189.
[19] K. V. V. Kumar and P. V. V. Kishore, “Indian classical dance mudra classification using HOG features and SVM classifier,”
International Journal of Electrical and Computer Engineering (IJECE), vol. 7, no. 5, pp. 2537–2546, Oct. 2017, doi:
10.11591/ijece.v7i1.pp2537-2546.
[20] C. Wattanapanich, H. Wei, and W. Petchkit, “Investigation of robust gait recognition for different appearances and camera view
angles,” International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 5, pp. 3977–3987, Oct. 2021, doi:
10.11591/ijece.v11i5.pp3977-3987.
[21] L. Wang, W. Hu, and T. Tan, “Recent developments in human motion analysis,” Pattern Recognition, vol. 36, no. 3, pp. 585–601,
2003, doi: 10.1016/S0031-3203(02)00100-0.
[22] O. C. Ibe, Markov processes for stochastic modeling, 2nd edition. Waltham, MA 02451, USA: Elsevier, 2009.
[23] M. Rocha and P. G. Ferreira, “Hidden markov models,” in Bioinformatics Algorithms, vol. 54, Elsevier, 2018, pp. 255–273.
[24] L. Wang, Y. Zhang, and J. Feng, “On the Euclidean distance of images,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 27, no. 8, pp. 1334–1339, Aug. 2005, doi: 10.1109/TPAMI.2005.165.
[25] A. F. Bobick and J. W. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257–267, Mar. 2001, doi: 10.1109/34.910878.
BIOGRAPHIES OF AUTHORS
Md Mozasser Rahman is currently an Associate Professor in the Department of Mechanical Engineering Technology, Universiti Tun Hussein Onn Malaysia (UTHM). He received a B.Sc. Eng. degree in Mechanical Engineering from Bangladesh Institute of Technology (BIT) Khulna in 1988, after which he worked at the same institute as a lecturer. He was later conferred M. Eng. and Ph.D. degrees by Mie University, Japan, in 2000 and 2003 respectively. He has expertise in robotics and industrial automation, and his research covers human-robot cooperation, movement characteristics of the human arm, artificial human organs, and related topics. He serves as a consultant on industrial automation and robotic systems to universities and industry. He has received one academic award from the JSME (Japan Society of Mechanical Engineers) and four innovation awards, and he is a Chartered Engineer registered with the Engineering Council, UK. He can be contacted at email: mozasser@uthm.edu.my.
Eng. Rayan Abbas Ahmed Alharazi is currently working as a Project Sales Engineer for ALABDULKAREEM Holding Co., a leading engineering company. He received his B.Sc. degree in Mechatronics Engineering from the International Islamic University Malaysia (IIUM) in 2015 and has worked as a Sales Engineer in the Saudi market (western region) for more than five years. After graduation he worked with DASH Control System Company in the industrial sector, providing power-saving solutions using power factor correction (PFC), variable frequency drive (VFD) applications, and power monitoring systems. After three years he joined LG Air Conditioning as a commercial air conditioning sales engineer for a year. He can be contacted at email: rayan.hrazi@gmail.com.
Muhammad Khairul Imran b Zainal Badri is currently working as a Senior System Engineer at IAM–Wonderware Sdn Bhd. He received a B. Eng. degree in Mechatronics Engineering from the International Islamic University Malaysia (IIUM) in 2018, after which he joined IAM–Wonderware Sdn Bhd as a System Engineer. He has practical knowledge and experience in programmable logic controllers (PLC) and supervisory control and data acquisition (SCADA). He was registered as a Graduate Engineer with BEM in 2018 and as a Graduate Technologist with MBOT in 2021. He has expertise in SCADA and industrial automation, and has served as a site supervisor, project engineer, and programmer for multinational F&B and semiconductor companies, specializing in control systems and system deployment. He can be contacted at email: imran371994@gmail.com.
Deep neural network for lateral control of self-driving cars in urban environ...
 
Attention mechanism-based model for cardiomegaly recognition in chest X-Ray i...
Attention mechanism-based model for cardiomegaly recognition in chest X-Ray i...Attention mechanism-based model for cardiomegaly recognition in chest X-Ray i...
Attention mechanism-based model for cardiomegaly recognition in chest X-Ray i...
 
Efficient commodity price forecasting using long short-term memory model
Efficient commodity price forecasting using long short-term memory modelEfficient commodity price forecasting using long short-term memory model
Efficient commodity price forecasting using long short-term memory model
 
1-dimensional convolutional neural networks for predicting sudden cardiac
1-dimensional convolutional neural networks for predicting sudden cardiac1-dimensional convolutional neural networks for predicting sudden cardiac
1-dimensional convolutional neural networks for predicting sudden cardiac
 
A deep learning-based approach for early detection of disease in sugarcane pl...
A deep learning-based approach for early detection of disease in sugarcane pl...A deep learning-based approach for early detection of disease in sugarcane pl...
A deep learning-based approach for early detection of disease in sugarcane pl...
 

Recently uploaded

Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesBoston Institute of Analytics
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 

Recently uploaded (20)

Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 

Intelligent system for Islamic prayer (salat) posture monitoring

are bound to the world.
Today, technology is improving at a very fast pace, and a great deal of research and development has been conducted to improve our lives. This places a significant responsibility on Muslim researchers to develop technologies that benefit both this world and the hereafter. With this goal, developing the
“Intelligent Salat Monitoring and Training System” as an educational tool could help many Muslims learn and recognize the proper way of performing salat.
A few studies have been conducted on monitoring salat activity. Alobaid and Rasheed [1] and Al-Ghannam and Al-Dossari [2] used smartphone technology to recognize salat activities, while Rabbi et al. [3] and Ibrahim and Ahmad [4] assessed salat activities using electromyographic (EMG) signals. Machine vision and image processing are therefore the key technologies to learn and master in order to build a salat inspection and training system for the Muslim community. Computer vision is already used to inspect and track human movement in various fields, such as sport, health care and even games. As Muslims, we usually learn how to perform salat by following others: we observe their postures and movements with our eyes, and then analyze whether the performance is correct or wrong. Combining this technology with the religious aspect brings many advantages. Using the system, a user can learn the proper way of salat by looking at the correct postures stored in the database, and the system gives the appropriate feedback. The feedback can be in the form of words and numbers, indicating the percentage of error in the salat movement.
Many researchers have proposed algorithms to detect human parts, such as the face, hands, movements, and postures, and some of these algorithms can detect the posture of the whole human body [5]. Different algorithms impose different requirements on the system, so careful consideration is needed to obtain a fully functional system. The approaches and algorithms chosen for the salat inspection and training system must ensure that the images to be measured and compared do not lack the information the system needs. Elements such as the viewing angle, size, color, and texture of the image need to be measured using multiple algorithms to obtain an accurate result, so that the system does not give wrong feedback to the user. To address these problems, the proposed methods are implemented and tested in MATLAB.
Muslim communities, as well as people who convert to Islam across the world, are in dire need of basic knowledge of salat. A salat inspection and training system can teach and share this knowledge with ease. The system addresses several problems: i) it helps Muslims across the world learn the correct way of salat anytime and anywhere; ii) it reduces the time, cost, and space needed for learning salat; iii) it avoids reliance on fake preachers when learning salat; iv) it serves Muslims who feel embarrassed to learn salat from others; and v) it helps newly converted Muslims learn salat with ease.
Several movements in salat are considered essential [6]. These movements and postures must be performed correctly so that Allah will accept the salat. Notably, salat consists of a sequence of movements that form one complete cycle, known as a raka’ah. The sequence of one complete cycle is shown in Figure 1.
Figure 1. Sequences for one complete cycle in salat [6]
2. RELATED WORKS
2.1. Human body modelling in machine vision
Human motion and pose recognition methods can be categorized into two types: model-based and appearance-based methods. Model-based object tracking algorithms rely on simple CAD (computer-aided design) wire models of objects, as shown in Figure 2. Using this kind of model, we can
project the start and end points of the lines correctly onto the image plane, granting real-time tracking of objects at the cost of a small computational effort. Appearance-based methods use no a priori knowledge about the data present in the image. Instead, they analyze the data using the statistics of the available dataset in the database to extract its modes, grouping the data in the best possible way. According to Azad et al., appearance-based methods use various algorithms to illustrate the object [7]. In other words, appearance-based approaches are more reliable in many situations because they do not require a specific object model.
Figure 2. Illustration of the object using wire model [7]
2.2. Representation of human figure
2.2.1. Bounding box
One of the simplest representations of the human body is the bounding box. Although its function is limited, this model is useful when the human body occupies only a very small part of the picture, because it uses only a few pixels. This reduces the complexity of image processing, but at the cost of accuracy. Figure 3 shows the bounding box as a human representation in human body modelling [8].
Figure 3. Bounding box representation of human body [8]
2.2.2. Stick representation
The stick, or bone, figure is a typical representation of the human body in machine vision and image processing. The sticks act as bones and form the pose or movement of the human. Figure 4 shows the stick representation of the human body. The disadvantage of this representation is that some movements, such as sitting, are difficult to capture because of occlusion [9].
2.2.3. Multi-dimensional representation
Hand gesture, as one of the important ways for humans to convey information and express intuitive intention, has the advantages of a high degree of differentiation, strong flexibility and high efficiency of
information transmission, which makes hand gesture recognition (HGR) one of the research hotspots in the field of human-machine interfaces (HMI) [10]. The two-dimensional (2D) contour representation takes the human body and projects it from three-dimensional (3D) space onto the two-dimensional image plane [11]–[13]. It approximates the human body using deformable contours, ribbons, or cardboards [14]. Figure 5 shows 2D images of the hand. The three-dimensional (3D) representation describes the parts of the human body in 3D space using a combination of cylinders, as shown in Figure 6 [15]. The 3D representation can also use other shapes, such as cones or spheres, to represent the human body.
Figure 4. Stick figure representation of human body [9]
Figure 5. Pictures of 2D hand [12]
Figure 6. 3D human motion representation [15]
2.3. Algorithms for pattern and image matching
2.3.1. Histogram of oriented gradients
The histogram of oriented gradients (HOG) can be used for image-matching purposes such as face recognition [16]–[18]. HOG measures how gradient orientations are distributed within a local region of the image. It is very useful for matching and tracking textured objects with inorganic shapes. For computer vision applications on human hands, HOG has been reported to outperform other feature extraction models [19]. HOG has also been applied to base feature images to generate feature descriptors [20].
2.3.2. Hidden Markov model
The hidden Markov model (HMM) is well known in speech recognition and is a model that uses statistics to extract features. According to Wang et al., HMMs are more reliable for analyzing time-varying data with variations in space-time conditions [21]. In the matching procedure, the probability that the HMM generates the test symbol sequence corresponding to the features of the input image is computed. The HMM is considered one of the best algorithms for matching human motion patterns because it can handle uncertainty and unknowns within its stochastic framework [22]. However, a significant disadvantage of this method is that HMMs are inefficient at handling three or more independent processes [23].
2.3.3. Euclidean distance
Euclidean distance can define an image metric efficiently. It uses the Euclidean metric to measure the straight-line distance between two points in Euclidean space. According to Wang et al., this method consists of the summation of pixel-wise intensity differences [24]. They stated that with the traditional Euclidean distance, a small deformation may result in a large distance. To solve this problem, they proposed a method that works with any reasonable metric. The keys to their method are computational simplicity, relative insensitivity to small deformations, and the ease with which it can be embedded in most powerful image recognition techniques.
2.3.4. Temporal template
Bobick and Davis used temporal templates to recognize human movements by constructing a vector image and matching it against images whose movements are known and stored in a database [25]. Two types of features were used, namely the motion-energy image and the motion-history image. These methods have many advantages: they support direct recognition of motion, perform temporal segmentation instantly, are invariant to linear changes in speed, and can run in real time on a standard platform. Some limitations were noted, such as the inability to handle incidental motion, and occlusion may sometimes occur at certain points.
3. METHOD
3.1. Actual picture and mechanical design
To build the salat inspection and training system, polyvinyl chloride (PVC) pipe is used as the base of the design. Portability was prioritized, since the system would otherwise require a large space for placement or storage. Using PVC pipe, the system can be assembled and disassembled easily in less than five minutes. The base of the system is built from a combination of plain-ended pipes, equal elbows, end caps, and equal tees. Figure 7(a) shows the actual picture of the system, and the isometric view is shown in Figure 7(b). In the actual picture, two black lines are drawn on the base carpet.
The middle black line marks the initial position for at-tawarrok; the user sits there until the system has finished the inspection. The black line near the back camera marks the initial position for takbiratul ihram, ruku’ and sujud. The user performs all these salat postures at their respective initial positions, marked with the black lines. Two cameras are installed in the system, as shown in Figure 7(b). One camera is installed at the front, placed higher than the second one, to inspect the front part of the body, such as the hands and head. The second camera is installed at the back, placed lower than the first one, to inspect the back part of the body, such as the legs. The front camera is used to inspect the postures of takbiratul ihram, ruku’ and sujud, while the back camera is used to inspect the posture during at-tawarrok. Two servo motors are installed below the cameras; their function is to change the camera angle when recording the video of the user’s salat. Because of these features, the system can be carried and deployed anywhere. A force-sensing resistor is installed in the base carpet to monitor the user during sujud: it checks whether parts of the body, such as the forehead and nose, are touching the ground.
Figure 7. Salat monitoring and training system (a) actual picture and (b) isometric view
3.2. Experimental method
In this study, we adopted a template matching approach because of its simplicity in real-time applications. To detect the human body, the RGB (red, green, and blue) color space is chosen, and the luminance and chrominance are then de-correlated. A given RGB image is converted to a grayscale image using the RGB-to-grayscale conversion equation, in which a weighted combination of the three components generates the grayscale image:

$Gray = (0.2126 \times Red^{2.2} + 0.7152 \times Green^{2.2} + 0.0722 \times Blue^{2.2})^{1/2.2}$ (1)

The input image is converted to grayscale because it must be matched against the database images, which serve as templates for the input image. The matching process moves the template image to every possible position in the larger source image and computes a numerical index that indicates how well the template matches the image at that position. One well-known matching measure is the Euclidean distance. Let I be a gray-level image and g a gray-value template of size (n×m):

$d(I, g, r, c) = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{m} \left( I(r+i, c+j) - g(i, j) \right)^{2}}$ (2)

where (r, c) denotes the top-left corner of template g. A second matching measure, which has an advantage over the Euclidean distance in both accuracy and processing time, is the grey-level correlation:

$cor = \frac{\sum_{i=0}^{N-1} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=0}^{N-1} (x_i - \bar{x})^{2} \cdot \sum_{i=0}^{N-1} (y_i - \bar{y})^{2}}}$ (3)

where $x$ is the template gray-level image, $\bar{x}$ is the average gray level in the template image, $y$ is the source image section, $\bar{y}$ is the average gray level in the source image section, and $N$ is the number of pixels in the section image. The value of cor lies between -1 and +1, with larger values representing a stronger relationship between the two images. The correlation result never reaches a 100% match, because the images always differ in small details. Therefore, a threshold is applied to the correlation result; the threshold can be set to the highest matching value that occurred.
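The paper implements these steps in MATLAB; purely as an illustration, the following is a minimal NumPy sketch of the grayscale conversion in (1), the Euclidean-distance template matching in (2), and the grey-level correlation in (3). The function names and the exhaustive search loop are our own assumptions, not the authors' code.

```python
import numpy as np

def rgb_to_gray(rgb):
    # Weighted, gamma-corrected combination of the R, G, B channels as in (1)
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    return (0.2126 * r**2.2 + 0.7152 * g**2.2 + 0.0722 * b**2.2) ** (1 / 2.2)

def euclidean_distance(image, template, r, c):
    # Distance between the template and the image patch whose top-left corner is (r, c), as in (2)
    n, m = template.shape
    patch = image[r:r + n, c:c + m]
    return np.sqrt(np.sum((patch - template) ** 2))

def gray_level_correlation(patch, template):
    # Normalized grey-level correlation between a patch and the template, as in (3)
    x = template.astype(float) - template.mean()
    y = patch.astype(float) - patch.mean()
    # Small epsilon avoids division by zero for perfectly flat patches
    return np.sum(x * y) / (np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)) + 1e-12)

def best_match(image, template):
    # Slide the template over every possible position and keep the highest correlation
    n, m = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for r in range(image.shape[0] - n + 1):
        for c in range(image.shape[1] - m + 1):
            score = gray_level_correlation(image[r:r + n, c:c + m], template)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_score, best_pos
```

In practice the highest correlation score found this way is compared against a threshold, since natural images never match a template perfectly.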
For feature extraction, we considered HOG descriptors as one example to show how the system performs, as illustrated in Figure 8. Figure 8(a) shows the histogram of an image, and the obtained HOG features of the image are shown in Figure 8(b). However, many descriptors can be used or combined. Other descriptors apply the same principle of converting the image pixels into votes over gray levels in the range 0–255, and dividing the feature space into log-polar bins instead of square cells is a commonly used approach. To describe an image to the computer we need descriptors, and the more image descriptors the system is trained with, the lower its error. Moreover, combining multiple types of descriptors, such as the scale-invariant feature transform (SIFT), the gradient location orientation histogram (GLOH) and speeded up robust features (SURF), helps to enhance the performance of the system.
Figure 8. Process of calculating HOG (a) creating histogram from image and (b) visualization of HOG features of the image
A sample HOG representation of both the correct and the wrong position during takbeerat alehram is shown in Figure 9. The correct position, with the hands raised above the shoulders, and its descriptor are shown in Figure 9(a), whereas the wrong position, with the hands below the shoulders, is shown in Figure 9(b). The descriptor on the right side of each image shows the intensity variation in the images.
Figure 9. Comparison of right and wrong position of takbeerat alehram using HOG representation of the image (a) correct position and (b) wrong position
Both HGR and HOG descriptors were used to match the two positions, as shown in Figure 10. The error of the result can be recognized from the difference between the two sets of extreme points: the difference in strong corners between the two overlaid images represents the amount of unmatched features, or error, between the two images. The larger this difference, the more incorrect the salat position performed by the worshipper. Therefore, we need to increase the number of extracted features by increasing the number of corners, and also to threshold the matching result so that the system flags the position as wrong if the difference between the two images exceeds 30%.
Figure 10. The result of the matching process using HGR and HOG
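This descriptor-level check can be sketched as follows. This is not the authors' MATLAB implementation but an assumed equivalent using scikit-image's HOG, in which a posture is flagged as wrong when the relative difference between the test and reference descriptors exceeds the 30% threshold mentioned above; the distance measure and function names are assumptions, and both images must share the same resolution so that the descriptors align.

```python
import numpy as np
from skimage.feature import hog

def hog_features(gray_image):
    # Standard HOG descriptor: gradient-orientation histograms over small cells
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def posture_is_correct(test_gray, reference_gray, threshold=0.30):
    # Hypothetical check: a relative feature difference above 30% flags the posture as wrong
    f_test = hog_features(test_gray)
    f_ref = hog_features(reference_gray)
    diff = np.linalg.norm(f_test - f_ref) / (np.linalg.norm(f_ref) + 1e-9)
    return diff <= threshold
```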
4. RESULTS AND DISCUSSION
In this part, the user interacts with the system’s graphical user interface installed on a laptop. Figure 11 shows the interface provided by the system. The user can preview the correct posture of salat by clicking the preview button for each posture, and then click the blue start button for the system to begin the inspection. Upon clicking the start button, a video of the person praying with the salat inspection and training system is recorded using the cameras attached to the system. Two video cameras are needed to inspect the user: one located in front of the user to inspect the front parts of the body during salat, such as the hands during takbiratul ihram, and another located behind the user to inspect the back parts of the body, such as the legs during at-tawarrok. The video is recorded until a beep sound is heard, indicating that the system has finished capturing the user’s images.
Figure 11. Graphical user interface (GUI) of Intelligent Salat Monitoring System
Once the system finishes recording the video, it filters the region of interest and converts the color from RGB to grayscale. The image is then matched against the database using template matching with the Euclidean distance, and grey-level correlation is used to further improve the system’s accuracy. If the salat is performed correctly within the programmed threshold shown in the graph, a message saying “GOOD PERFORMANCE” pops up in green text. Otherwise, the system suggests the correct posture in red text, indicating that the salat performance is poor. This stage applies to takbiratul ihram, ruku’ and at-tawarrok only; for sujud, a special graph showing the number of force readings is presented to the user together with the performance. In this way, the system trains the user until the correct posture is performed.
The front camera is used to record the video during takbiratul ihram, ruku’ and sujud. The system gives feedback to the user by showing pictures and a graph. Figure 12 depicts a good takbeerat alehram: the left side is a screenshot of the recorded video, the middle graph shows the match percentage, and the notification image is shown on the right. Figure 13 depicts a bad performance during takbeerat alehram, presented in the same layout. The performance of a correct ruku’ is shown in Figure 14, again with the recorded video on the left, the match percentage in the middle, and the notification image on the right. Figure 15 presents the information for an incorrect ruku’ performance: on the left is the template for the incorrect posture stored in the database, followed by the bad ruku’ performance and the matching percentages, which do not reach the threshold. If the system finds a frame with a 98% match, a green rectangle appears on the region of interest; if the match lasts for more than three seconds, the posture is considered correct and a notification pops up. However, if the match does not last more than three seconds, the system notifies the student via a pop-up image and a comment on the image.
This is simply the feedback mechanism of the salat inspection and training system.
Figure 12. Takbeerat alehram: from the left, the recorded video, the match percentage and the notification image
Figure 13. Takbeerat alehram, front camera, wrong posture performance
Figure 14. Performance of correct ruku’
Figure 15. Performance of wrong ruku’
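The 98% match level and the three-second persistence rule described above amount to a simple temporal filter over per-frame match scores. The following is a minimal sketch of that decision logic only; the frame rate and the function name are assumptions rather than details of the published system.

```python
def judge_posture(match_scores, fps=30, match_level=0.98, hold_seconds=3.0):
    """Return True if the per-frame match score stays at or above the match
    level for at least `hold_seconds` of consecutive frames."""
    required_frames = int(hold_seconds * fps)
    consecutive = 0
    for score in match_scores:
        if score >= match_level:
            consecutive += 1
            if consecutive >= required_frames:
                return True   # posture held long enough: "GOOD PERFORMANCE"
        else:
            consecutive = 0   # streak broken, start counting again
    return False              # never held for three seconds: notify the user

# Example: the match holds only briefly, so the posture is not accepted
scores = [0.99] * 45 + [0.60] * 30 + [0.99] * 45
print(judge_posture(scores))  # False: the 98% match never lasts 3 s at 30 fps
```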
All the results apply the same feedback concept except for sujud, where additional information is added. A graph of the force-sensing resistor readings indicates that the user is performing the sujud. Whenever the user’s nose and forehead touch the sensor located on the base carpet, the force triggers the sensor and produces a reading in MATLAB. A total of 12 force readings during sujud is set as the threshold to indicate good performance of the sujud. Figure 16 shows the force sensor readings used to verify the performance of sujud: Figure 16(a) shows how feedback on a good sujud performance is presented to the user, while Figure 16(b) indicates a bad sujud performance.
Figure 16. Force-sensing resistor reading during the sujud showing the performance of (a) good sujud and (b) bad sujud
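As an illustration of this sujud rule only (the system reads the force-sensing resistor directly in MATLAB), the sketch below counts samples in which the sensor registers contact and accepts the sujud once 12 such readings are seen; the contact level and variable names are assumptions.

```python
def evaluate_sujud(force_samples, contact_level=0.5, required_readings=12):
    # Count samples where the force-sensing resistor registers contact
    # (forehead and nose pressing on the base carpet).
    contact_readings = sum(1 for f in force_samples if f >= contact_level)
    return contact_readings >= required_readings  # True: good sujud performance

# Example: a recording with 15 contact samples passes the 12-reading threshold
samples = [0.0] * 10 + [0.8] * 15 + [0.0] * 5
print(evaluate_sujud(samples))  # True
```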
5. CONCLUSION
In conclusion, the first objective was to learn the correct postures of salat, which we addressed by analyzing books of hadith and the works of Muslim scholars in the literature review section. The second objective was to develop an image-processing algorithm in MATLAB, which was also achieved, as the results show quite good performance. The third objective of this study was to test the salat performance and provide feedback to the user. This was also achieved, as the output of the image matching pops up a message about the salat performance and trains the user by giving the correct instructions for the current posture of the salat. The results are quite accurate, as the proposed method is able to identify and match the pattern with up to 90% recognition and inform the user about their salat performance. The readings in the graph are more accurate when the user performs salat using the system itself, because the camera angle is fixed. However, even when the posture is correct, some results show errors when the lighting is poor, because the pattern matching in MATLAB is confused when the lighting is insufficient. This affects the results of the pattern matching; for example, the salat posture may be correct, but the system keeps giving bad performance feedback to the user. This issue can be solved by using the system under sufficient light, hence increasing the accuracy of the overall system. A bright room is recommended to ensure clear images are captured. The camera angle also needs to be fixed and kept constant between the database images and the correct and wrong test images so that the system detects the pattern without error.
ACKNOWLEDGEMENTS
Communication of this research is made possible through monetary assistance by Universiti Tun Hussein Onn Malaysia and the UTHM Publisher’s Office via Publication Fund E15216.
REFERENCES
[1] O. Alobaid and K. Rasheed, “Prayer activity recognition using an accelerometer sensor,” in 2018 World Congress in Computer Science, Computer Engineering and Applied Computing, CSCE 2018 - Proceedings of the 2018 International Conference on Artificial Intelligence, ICAI 2018, 2018, pp. 271–277.
[2] R. Al-Ghannam and H. Al-Dossari, “Prayer activity monitoring and recognition using acceleration features with mobile phone,” Arabian Journal for Science and Engineering, vol. 41, no. 12, pp. 4967–4979, May 2016, doi: 10.1007/s13369-016-2158-7.
[3] M. F. Rabbi, N. Wahidah Arshad, K. H. Ghazali, R. Abdul Karim, M. Z. Ibrahim, and T. Sikandar, “EMG activity of leg muscles with knee pain during islamic prayer (salat),” in Proceedings - 2019 IEEE 15th International Colloquium on Signal Processing and its Applications, CSPA 2019, Mar. 2019, pp. 213–216, doi: 10.1109/CSPA.2019.8696025.
[4] F. Ibrahim and S. A. Ahmad, “Assessment of upper body muscle activity during salat and stretching exercise: A pilot study,” in Proceedings - IEEE-EMBS International Conference on Biomedical and Health Informatics: Global Grand Challenge of Health Informatics, BHI 2012, Jan. 2012, pp. 412–415, doi: 10.1109/BHI.2012.6211603.
[5] R. R. Porle, A. Chekima, F. Wong, M. Mamat, N. Parimon, and Y. F. A. Gaus, “Two-dimensional human pose estimation for the upper body using histogram and template matching techniques,” in 2013 1st International Conference on Artificial Intelligence, Modelling & Simulation (AIMS), 2013, pp. 249–254, doi: 10.1109/AIMS.2013.46.
[6] N. A. Jaafar, N. A. Ismail, and Y. A. Yusoff, “An investigation of motion tracking for solat movement with dual sensor approach,” ARPN Journal of Engineering and Applied Sciences, vol. 10, no. 23, pp. 17981–17986, 2015.
[7] P. Azad, T. Asfour, and R. Dillmann, “Combining appearance-based and model-based methods for real-time object recognition and 6D localization,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2006, pp. 5339–5344, doi: 10.1109/IROS.2006.282094.
[8] Y. Yang and D. Ramanan, “Articulated pose estimation with flexible mixtures-of-parts,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2011, pp. 1385–1392, doi: 10.1109/CVPR.2011.5995741.
[9] M. Lora, S. Ghidoni, M. Munaro, and E. Menegatti, “A geometric approach to multiple viewpoint human body pose estimation,” in 2015 European Conference on Mobile Robots, ECMR 2015 - Proceedings, 2015, pp. 1–6, doi: 10.1109/ECMR.2015.7324195.
[10] J. L. Raheja, K. Das, and A. Chaudhary, “Fingertip detection: a fast method with natural hand,” International Journal of Embedded Systems and Computer Engineering, vol. 3, no. 2, pp. 85–89, 2012, doi: 10.48550/arXiv.1212.0134.
[11] L. Guo, Z. Lu, and L. Yao, “Human-machine interaction sensing technology based on hand gesture recognition: a review,” IEEE Transactions on Human-Machine Systems, vol. 51, no. 4, pp. 300–309, Aug. 2021, doi: 10.1109/THMS.2021.3086003.
[12] P. Parvathy, K. Subramaniam, G. K. D. Prasanna Venkatesan, P. Karthikaikumar, J. Varghese, and T. Jayasankar, “Development of hand gesture recognition system using machine learning,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, no. 6, pp. 6793–6800, 2021, doi: 10.1007/s12652-020-02314-2.
[13] M. M. Islam, S. Siddiqua, and J. Afnan, “Real time hand gesture recognition using different algorithms based on American sign language,” 2017, doi: 10.1109/ICIVPR.2017.7890854.
[14] J. K. Aggarwal and S. Park, “Human motion: modeling and recognition of actions and interactions,” in Proceedings - 2nd International Symposium on 3D Data Processing, Visualization, and Transmission, 3DPVT 2004, 2004, pp. 640–647, doi: 10.1109/TDPVT.2004.1335299.
[15] M. W. Lee and R. Nevatia, “Human pose tracking in monocular sequence using multilevel structured models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 27–38, 2009, doi: 10.1109/TPAMI.2008.35.
[16] P. N. Maraskolhe and A. S. Bhalchandra, “Analysis of facial expression recognition using histogram of oriented gradient (HOG),” in Proceedings of the 3rd International Conference on Electronics and Communication and Aerospace Technology, ICECA 2019, 2019, pp. 1007–1011, doi: 10.1109/ICECA.2019.8821814.
[17] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, 2005, vol. 1, pp. 886–893, doi: 10.1109/CVPR.2005.177.
[18] S. Zhang and X. Wang, “Human detection and object tracking based on histograms of oriented gradients,” in Proceedings - International Conference on Natural Computation, 2013, pp. 1349–1353, doi: 10.1109/ICNC.2013.6818189.
[19] K. V. V. Kumar and P. V. V. Kishore, “Indian classical dance mudra classification using HOG features and SVM classifier,” International Journal of Electrical and Computer Engineering (IJECE), vol. 7, no. 5, pp. 2537–2546, Oct. 2017, doi: 10.11591/ijece.v7i1.pp2537-2546.
[20] C. Wattanapanich, H. Wei, and W. Petchkit, “Investigation of robust gait recognition for different appearances and camera view angles,” International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 5, pp. 3977–3987, Oct. 2021, doi: 10.11591/ijece.v11i5.pp3977-3987.
[21] L. Wang, W. Hu, and T. Tan, “Recent developments in human motion analysis,” Pattern Recognition, vol. 36, no. 3, pp. 585–601, 2003, doi: 10.1016/S0031-3203(02)00100-0.
[22] O. C. Ibe, Markov Processes for Stochastic Modeling, 2nd ed. Waltham, MA, USA: Elsevier, 2009.
[23] M. Rocha and P. G. Ferreira, “Hidden Markov models,” in Bioinformatics Algorithms, vol. 54, Elsevier, 2018, pp. 255–273.
[24] L. Wang, Y. Zhang, and J. Feng, “On the Euclidean distance of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1334–1339, Aug. 2005, doi: 10.1109/TPAMI.2005.165.
[25] A. F. Bobick and J. W. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257–267, Mar. 2001, doi: 10.1109/34.910878.
BIOGRAPHIES OF AUTHORS
Md Mozasser Rahman is currently an Associate Professor in the Department of Mechanical Engineering Technology, Universiti Tun Hussein Onn Malaysia (UTHM). He received a B.Sc. Eng. degree in Mechanical Engineering from Bangladesh Institute of Technology (BIT) Khulna in 1988 and worked for the same institute as a lecturer after graduation. He was later conferred an M.Eng. degree and a Ph.D. by Mie University, Japan, in 2000 and 2003 respectively. He has expertise in robotics and industrial automation, and his research covers human-robot cooperation, movement characteristics of the human arm, artificial human organs, and related topics. He serves as a consultant on industrial automation and robotic systems to universities and industries. He has received one academic award from the JSME (Japan Society of Mechanical Engineers) and four innovation awards, and he is a Chartered Engineer registered with the Engineering Council, UK. He can be contacted at email: mozasser@uthm.edu.my.
Eng. Rayan Abbas Ahmed Alharazi is currently working as a Project Sales Engineer for ALABDULKAREEM Holding Co., one of the dominant engineering companies. He received his B.Sc. degree in Mechatronics Engineering from the International Islamic University Malaysia (IIUM) in 2015 and has worked as a sales engineer in the Saudi market (western region) for more than five years. After graduation he joined DASH Control System Company, exploring the industrial sector and providing power-saving solutions using power factor correction (PFC), VFD applications and solutions, and power monitoring systems. After three years he worked for LG air conditioning as a commercial air conditioning sales engineer for a year. He can be contacted at email: rayan.hrazi@gmail.com.
Muhammad Khairul Imran b Zainal Badri is currently working as a Senior System Engineer at IAM–Wonderware Sdn Bhd. Mr. Imran received a B.Eng. degree in Mechatronics Engineering from the International Islamic University Malaysia (IIUM) in 2018 and joined IAM–Wonderware Sdn Bhd as a System Engineer after graduation. He has gained practical knowledge and experience in programmable logic controllers (PLC) and supervisory control and data acquisition (SCADA). He was later registered as a Graduate Engineer with BEM in 2018 and as a Graduate Technologist with MBOT in 2021. He has expertise in SCADA and industrial automation, and serves as a site supervisor, project engineer and programmer for multinational F&B and semiconductor companies, specializing in control systems and system deployment. He can be contacted at email: imran371994@gmail.com.