Deaf people face difficulty in social communication, especially those who lost their hearing before acquiring spoken language and before learning to read and write. To put mobile devices at the service of these people, their teachers, and everyone in contact with them, this research aims to design an application for social communication and learning that translates Iraqi sign language into Arabic text and vice versa. Iraqi sign language was chosen because of the lack of applications in this field; to the best of our knowledge, the current research is the first of its kind in Iraq. The application is open source; words that are not found in the application database are handled by translating them letter by letter, alphabetically. The importance of the application lies in its being a means of communication and e-learning through Iraqi sign language and through reading and writing in Arabic, and likewise a means of social communication between deaf people and those with normal hearing. The application is written in Java and was tested with several deaf students at Al-Amal Institute for Special Needs Care in Mosul, Iraq, where it was well understood and accepted.
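The letter-by-letter fallback described above can be sketched as a simple dictionary lookup: a word found in the sign database maps to its stored sign, otherwise it is spelled out alphabetically. This is a minimal illustration; the sign clip names and the tiny vocabulary are hypothetical, not the app's real data.

```python
# Sketch of dictionary lookup with a fingerspelling fallback (illustrative data only).

SIGN_DB = {"سلام": "sign_salam.gif"}                          # word -> sign clip
LETTER_SIGNS = {ch: f"letter_{ch}.gif" for ch in "سلامدر"}    # one sign per letter

def translate_word(word: str) -> list[str]:
    """Return the sign clip(s) for a word; fall back to letter-by-letter signs."""
    if word in SIGN_DB:
        return [SIGN_DB[word]]
    # Word not in the database: spell it out alphabetically.
    return [LETTER_SIGNS[ch] for ch in word if ch in LETTER_SIGNS]
```

A word outside the vocabulary still produces a displayable sequence, which is what keeps the open-ended translation usable.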
Speech Automated Examination for Visually Impaired Students (vivatechijri)
We know that education is not a luxury but a necessity in today's life. It is a human right, and every individual, including blind and visually impaired people, should have access to it. Yet it is very difficult for them to manage their own education and learning. According to a recent study, there are over 200 million blind and visually impaired people worldwide, and the number is growing. We have therefore developed a learning application in which a blind or visually impaired student can study on their own, without involving a third party. The student can operate the application independently: they can study various subjects of their choice, take their own notes, and take tests on each subject. The prime goal of this application is to solve the problems blind students face in their education and study routine. The student can move the cursor anywhere on the desktop, and the text under it is dictated in a synthesized human-like voice using text-to-speech (TTS) conversion. They can make notes by simply dictating their points to the system, which converts them to text and can also play them back as audio. Once the learning part is done and the student wants to revise a particular topic, he or she can take a test on that topic using hand gestures. The system thus helps visually impaired and blind students learn and operate a computer independently, at whatever time suits them. The system is developed to free students from depending on a third party in their studies and to increase their interest in studying. Not only blind people but also other disabled people will be able to use this application.
An Android Communication Platform between Hearing Impaired and General People (Afif Bin Kamrul)
This document proposes developing an Android application to facilitate communication between hearing impaired and general people in Bangladesh through Bangla speech-to-sign language and sign language-to-text translation. It aims to recognize Bangla speech and convert it to animated sign language displays and develop a sign language keyboard to type Bangla text. The methodology involves using speech recognition APIs to convert speech to text, tagging parts of speech, looking up signs from an animated database, and displaying them sequentially via a virtual agent. It will also design a keyboard with buttons for Bangla characters and their corresponding signs.
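The pipeline summarized above — recognized text is tokenized, each token is looked up in an animated-sign database, and matching clips are queued for the virtual agent in order — can be sketched as follows. The romanized tokens and clip names are assumptions for illustration, not the thesis's actual data.

```python
# Minimal sketch of the speech-to-sign lookup step (illustrative vocabulary).

SIGN_CLIPS = {"ami": "ami.anim", "bhalo": "bhalo.anim", "achi": "achi.anim"}

def text_to_sign_sequence(recognized_text: str) -> list[str]:
    """Map recognized words to animated sign clips, in spoken order."""
    sequence = []
    for token in recognized_text.lower().split():
        clip = SIGN_CLIPS.get(token)
        if clip is not None:
            sequence.append(clip)   # known word: play its animated sign
    return sequence
```

In the full system described above, a speech recognition API and a part-of-speech tagger would run before this lookup, and the virtual agent would play the returned clips sequentially.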
An Android Communication Platform between Hearing Impaired and General People (Afif Bin Kamrul)
The document describes a thesis submitted by Afif Bin Kamrul for the degree of Bachelor of Science in Computer Science and Engineering on developing an Android application for communication between hearing impaired and general people, which recognizes Bangla speech and converts it to sign language as well as provides a sign language keyboard for typing in Bangla. The application was tested with students at a school for the deaf and received satisfactory results based on subjective evaluation and black box testing.
An Android Communication Platform between Hearing Impaired and General People (Afif Bin Kamrul)
This document proposes developing an Android application to help hearing impaired people in Bangladesh communicate via text and animation in Bangla sign language. It aims to recognize Bangla speech and convert it to sign language displayed by a virtual agent, and to include a sign language keyboard that converts signs to Bangla text. The application could bridge communication between the deaf and hearing communities, reduce the language barrier, and help deaf persons learn sign language.
Eye(I) Still Know! – An App for the Blind Built using Web and AI (Dr. Amarjeet Singh)
This paper proposes eye(I) still know!, a voice-control solution for visually impaired people. The main idea is that even though the blind cannot see, they can still know where to go and what to do. Nearly 60% of the world's blind population lives in India. In a time when no one likes to rely on anyone else, this is a small effort to make blind people independent individuals. This is achieved using wireless communication, voice recognition, and image scanning: using object identification, the application informs the user in advance about barriers in the path. The software uses the device's camera to scan obstacles and their distances from the user, then issues audio instructions through the device's audio output, efficiently directing the user along his or her way.
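The obstacle-report step described above can be sketched as turning detected objects with distances into ordered spoken messages, nearest obstacle first. The labels, units, and warning threshold here are assumptions for illustration.

```python
# Sketch: (label, distance) detections -> ordered audio announcements.

def obstacle_announcements(detections: list[tuple[str, float]],
                           warn_within_m: float = 5.0) -> list[str]:
    """Turn (label, distance-in-metres) detections into ordered audio messages."""
    nearby = [d for d in detections if d[1] <= warn_within_m]
    nearby.sort(key=lambda d: d[1])           # warn about the closest obstacle first
    return [f"{label} ahead at {dist:.1f} metres" for label, dist in nearby]
```

Each returned string would then be spoken through the device's audio output by a text-to-speech engine.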
This curriculum vitae is for Karam Mustafa Ignaim. It summarizes their work experience as a teaching assistant at Al-Balqa' Applied University since 2005, where they have taught various computer science and software engineering courses. It also lists their education: an M.Sc. in Computer Science and a B.Sc. in Information Technology, both from Al-Balqa' Applied University. Relevant skills are described, such as programming languages, operating systems, and good communication and organizational abilities.
This document discusses the human-computer interface of an information system. It defines HCI as the intersection of computer science, behavioral sciences, and design fields that studies the interaction between humans and computers. The document then provides an example of using facial recognition technology as an HCI for biometric attendance tracking in organizations. It describes how such a system would work and the requirements needed for implementation, including a database, ICT infrastructure, and IS team. The advantages of automation and time savings are outlined, along with potential disadvantages such as system reliance on power and vulnerability to makeup.
Industrial Applications of Automatic Speech Recognition Systems (IJERA Editor)
Current trends in developing technologies form important bridges to the future, fortified by the early and productive use of technology for enriching human life. Speech signal processing, which includes automatic speech recognition, speech synthesis, and natural language processing, is beginning to have a significant impact on business, industry, and the ease of operating personal computers. Beyond this, it deepens our understanding of the complex mechanisms of the human brain. Advances in speech recognition technology over the past five decades have enabled a wide range of industrial applications, yet today's applications offer only a small preview of a rich future for speech and voice interface technology, which may eventually replace keyboards with microphones as the human-machine interface providing easy access to increasingly intelligent machines. The paper also shows how the capabilities of speech recognition systems in industrial applications are evolving to usher in the next generation of voice-enabled services. This paper aims to present an effective survey of the speech recognition technology described in the available literature and to integrate the insights gained from studying individual research and development efforts. Current real-world and industrial applications of speech recognition are also outlined, with special reference to medicine, industrial robotics, forensics, defence, and aviation.
This document describes a mobile application called "Back to School" that aims to improve adult literacy in India. It divides users into three categories - Start, Moderate, and Expert - based on a survey of their English and language skills. The Start level teaches basics like alphabets and numbers, Moderate covers sentence formation and tenses, and Expert includes English grammar and computer basics. The application was created to provide educational opportunities to adults who missed formal schooling. A database stores user data, survey questions, course content, and test results to track progress. The goal is to help more Indians gain basic literacy skills through a convenient mobile learning system.
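The survey-based placement described above — bucketing a learner into Start, Moderate, or Expert from a literacy-survey score — can be sketched as below. The cut-off ratios and maximum score are hypothetical; the document does not specify them.

```python
# Sketch of survey-score placement into the three course levels (assumed cut-offs).

def placement_level(score: int, max_score: int = 20) -> str:
    """Map a literacy-survey score to one of the three course levels."""
    ratio = score / max_score
    if ratio < 0.4:
        return "Start"      # alphabets and numbers
    if ratio < 0.75:
        return "Moderate"   # sentence formation and tenses
    return "Expert"         # grammar and computer basics
```

The resulting level would be stored in the user database alongside test results so that progress can be tracked against the assigned track.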
SMARCOS Abstract Paper submitted to ICCHP 2012 (Smarcos Eu)
This study is part of the European project "Smarcos" (http://www.smarcos-project.eu/) that includes among its goals the development of services which are specifically designed and accessible for blind users.
In this paper we present a prototype application designed to make the main phone features accessible to a blind user. The prototype was developed primarily to evaluate interaction modalities based on gestures, audio, and vibro-tactile feedback.
This document discusses human-computer interfaces, including an introduction to interfaces and how people interact with computers through information transfer. It outlines different types of interfaces such as command line, menu driven, graphical user interface, and natural language. Existing technologies are described as varying in complexity across physical, cognitive, and affective levels and involving human senses of vision, audio, and touch. Applications mentioned include intelligent homes/offices, games, and e-commerce.
IRJET - A Robust Sign Language and Hand Gesture Recognition System using Conv... (IRJET Journal)
This document presents a robust sign language and hand gesture recognition system using convolutional neural networks. The system captures image frames and processes them through various neural network layers to classify hand gestures into letters, numbers, or other symbols. It segments the hand from images using color thresholds and edge detection. The images then undergo preprocessing like resizing before being classified by the CNN. The CNN is trained on a large dataset to accurately recognize gestures in sign language and provide text output to bridge communication between deaf and non-signing individuals. The system achieved good results classifying several alphabet letters but could be expanded to recognize word combinations.
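The colour-threshold segmentation step described above can be sketched as a binary mask: pixels whose RGB values fall inside an assumed skin-tone range are kept as "hand", the rest discarded. The thresholds here are illustrative, not taken from the paper; the real system would follow this with edge detection, resizing, and CNN classification.

```python
# Sketch of colour-threshold hand segmentation (illustrative RGB bounds).

SKIN_MIN = (90, 40, 20)     # assumed lower RGB bound for skin tones
SKIN_MAX = (255, 180, 150)  # assumed upper RGB bound

def hand_mask(image: list[list[tuple[int, int, int]]]) -> list[list[int]]:
    """Binary mask: 1 where a pixel is inside the skin-colour range, else 0."""
    return [[int(all(lo <= c <= hi
                     for c, lo, hi in zip(px, SKIN_MIN, SKIN_MAX)))
             for px in row]
            for row in image]
```

In practice this per-pixel test would be done with vectorized array operations on the captured frame rather than nested Python lists.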
IRJET - Speech Recognition using Android (IRJET Journal)
This document describes a speech recognition Android application that allows users to convert speech to text and text to speech. The application provides a fast, user-friendly interface for voice recognition and conversion. It uses modern algorithms to efficiently process speech signals and generate text. The application is useful for people with physical disabilities as it allows input and output without typing. The application development is discussed along with the technologies used, including the Android speech API and text-to-speech functionality.
Human computer interaction with Nalaemton (Nalaemton S)
Human Computer Interaction is one of the emerging concepts in the field of Computer Science and Engineering. It is a technology that is going to change the way we interact with computers. Enjoy viewing the magic of Computer Science and Engineering.
Technology in your hands....
_S.Nalaemton
This document discusses Human Computer Interfaces (HCI). It defines HCI as the interaction between humans and computers. It describes different types of interfaces like command line, menu driven, and graphical user interfaces. It also discusses existing technologies in HCI like physical devices that rely on vision, audio, and touch. The document outlines advances in areas like wearable devices, wireless technologies, and virtual realities. It describes architectures like uni-modal and multi-modal systems. Applications discussed include intelligent homes and helping disabled people.
Toward a New Algorithm for Hands Free Browsing (CSCJournals)
The role of the Internet in everyone's life is becoming more and more important. Most people who use the Internet have full use of their bodies; unfortunately, members of society who are physically handicapped often cannot access or use this medium. The Integrated Browser is a computer application that browses the Internet and displays information on screen. It targets handicapped users who have difficulty using a standard keyboard and/or a standard mouse: unlike conventional browsers, it responds to voice commands. It is an economical and humanitarian approach because it is handicap-friendly and especially helpful to those who cannot use their hands. The Integrated Browser is a state-of-the-art technology that provides a better and faster Internet experience, and a program that will facilitate and enhance learning for slow and/or handicapped learners.
IRJET - Voice based E-Mail for Visually Challenged (IRJET Journal)
This document describes a voice-based email system designed for visually impaired users. The system allows blind users to access email functions like sending, receiving, and reading emails using only voice commands without needing a keyboard, mouse, or screen reader. Key features include text-to-speech and speech-to-text functions to convert between voice and text. The proposed system aims to make email more accessible for visually impaired people by eliminating the need to remember keyboard shortcuts or rely on others to access emails. It could help about 250 million people with visual impairments globally who currently have difficulty using the internet and email.
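The hands-free interaction implied above — a recognized utterance routed to a mail action without keyboard shortcuts or a screen reader — can be sketched as keyword dispatch. The keywords and action names are assumptions for illustration, not the system's actual command set.

```python
# Sketch of voice-command routing for the mail actions (illustrative keywords).

ACTIONS = {"compose": "COMPOSE_MAIL", "send": "COMPOSE_MAIL",
           "inbox": "OPEN_INBOX", "read": "READ_ALOUD"}

def route_command(utterance: str) -> str:
    """Pick the first mail action whose keyword appears in the spoken command."""
    for word in utterance.lower().split():
        if word in ACTIONS:
            return ACTIONS[word]
    return "UNKNOWN"        # fall back to re-prompting the user by voice
```

A speech-to-text engine would supply the utterance, and a text-to-speech engine would confirm the chosen action or re-prompt on "UNKNOWN".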
This document provides an overview of human-computer interfaces (HCI). It defines HCI as the process of information transfer between users and machines. The document outlines different types of interfaces including command line, menu driven, and graphical user interfaces. It discusses advances in HCI like wearable, wireless, and virtual devices. The document also covers HCI architecture, including unimodal and multimodal systems. It provides examples of applications of HCI and discusses advantages and disadvantages of various interface types.
Speech Based Search Engine System Control and User Interaction (IOSR Journals)
The document discusses a proposed speech-based search engine system to help visually impaired users interact with computers without keyboards or mice. The system would allow users to control the computer and search for information using only voice commands through speech recognition and synthesis technologies. It aims to make computers more accessible for visually impaired people and others who have difficulty using keyboards and mice. The proposed system could provide educational benefits and allow independent computer use through spoken interactions. A feasibility analysis found the system would be economically and technically feasible to implement using existing hardware, software and open-source technologies.
Assistive technology refers to any device or program that helps individuals with physical, cognitive, or sensory impairments. This includes simple devices like pencil grips as well as more advanced technologies like touchscreen computers. Federal laws like the Technology-Related Assistance Act and the Individuals with Disabilities Education Act require schools to consider and provide assistive technologies to support students with disabilities. Examples described in the document include hearing aids, screen readers, dictation software, and sip-and-puff systems that allow individuals to control computers through alternative means.
IRJET - Voice based E-Mail for Visually Impaired (IRJET Journal)
The document describes a voice-based email system for visually impaired people. It aims to allow visually impaired users to send and receive emails independently through voice commands without needing a screen reader or keyboard. The system uses speech recognition to convert voice inputs to text for composing emails and text-to-speech to read composed emails and responses aloud to users. It includes modules for user registration, login, accessing the inbox and sending emails entirely through voice while eliminating the need for keyboard shortcuts or screen readers. The system aims to improve accessibility and communication through email for visually impaired users.
The document discusses various types of human-computer interfaces. It describes interfaces such as command line interfaces, menu driven interfaces, and graphical user interfaces. It outlines the advantages and disadvantages of each type. The document also discusses other interfaces including natural language interfaces, virtual reality interfaces, and interfaces that can help disabled users interact with computers.
This document discusses assistive technology and its uses for people with disabilities. It defines assistive technology as tools that help people with everyday activities by working around specific deficits. Assistive technology includes both high-tech and low-tech devices that can help people of all ages with learning disabilities reach their full potential. The document also outlines several laws related to assistive technology and provides examples of specific assistive technologies such as FM systems, screen readers, alternative keyboards, and speech recognition software.
Ed 508 assistive technology powerpoint (kcommander)
This document defines assistive technology as any item or equipment that is used to increase, maintain, or improve the functional capabilities of individuals with disabilities. It discusses key assistive technology laws like the Americans with Disabilities Act and the Individuals with Disabilities Education Act. The document provides examples of low-tech and high-tech assistive devices that help with tasks like seeing, hearing, communicating, reading, writing, and using computers. Specific assistive devices are described for hearing impairment (Aria device), learning disabilities (Quicktionary Reading Pen), visual impairment (screen readers), and physical impairments (alternative mouse).
Users interact with ICT systems through interfaces that allow effective communication between humans and machines. Appropriate interface design is important to provide usability for all users. Key considerations for interface design include the need for dialogue between users and systems, as well as providing help and support tailored to user characteristics such as experience level, physical abilities, environment, and tasks to be performed. Effective interface types include graphical user interfaces (GUIs) and specialized interfaces for tasks like gaming that mimic real-world equipment.
Personal computers are primarily used by individual users to perform general tasks like word processing, emailing, browsing the internet, playing multimedia or games. More specific uses include working, learning, researching, banking, shopping, and interacting with public services online. While users understand how to operate the computer and common programs, most do not write their own programs and software for personal computers aims to be easy to use. A wide variety of software continues to be developed for personal computers, targeted at both expert and non-expert users.
This document discusses working with special needs students and Individualized Education Plans (IEPs). It explains that students with disabilities may qualify for IEPs to receive accommodations. Parents, teachers, and counselors collaborate to ensure students receive needed services. Some disabilities that may qualify students for services include autism, deafness, emotional/behavioral disorders, intellectual disabilities, orthopedic impairments, speech/language impairments, traumatic brain injuries, visual impairments, and learning disorders. Schools use differentiated instruction and assistive technologies to accommodate special needs. Examples of accommodations discussed include textbooks in large print or simplified language, screen readers, and optical character recognition software.
Automatic recognition of Arabic alphabets sign language using deep learning (IJECEIAES)
Technological advancements are helping people with special needs overcome many communication obstacles. Deep learning and computer vision models are enabling unprecedented advances in human interaction. The Arabic language remains a rich research area. In this paper, different deep learning models were applied to test the accuracy and efficiency obtainable in automatic Arabic sign language recognition. We provide a novel framework for the automatic detection of Arabic sign language based on transfer learning applied to popular deep learning models for image processing: specifically, we train AlexNet, VGGNet, and GoogleNet/Inception models, and we test the efficiency of shallow learning approaches based on support vector machines (SVM) and nearest-neighbour algorithms as baselines. As a result, we propose an approach for the automatic recognition of Arabic alphabets in sign language based on the VGGNet architecture, which outperformed the other trained models and achieves a promising accuracy score of 97%. The models are evaluated on a recent fully-labelled dataset of 54,049 Arabic sign language images, which is, to the best of our knowledge, the first large and comprehensive real dataset of Arabic sign language.
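The nearest-neighbour baseline mentioned above classifies a sign image by reducing it to a feature vector and taking the label of its closest labelled example. A minimal 1-NN sketch follows; the toy vectors and labels are illustrative, not the paper's 54,049-image dataset.

```python
# Sketch of the 1-nearest-neighbour baseline over feature vectors (toy data).

def nearest_neighbor(query: list[float],
                     examples: list[tuple[list[float], str]]) -> str:
    """Classify `query` by the label of its closest example (squared Euclidean)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: sqdist(query, ex[0]))[1]
```

In the paper's setting the feature vectors would come from the image pixels or a learned embedding, against which the fine-tuned VGGNet model is compared.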
This document describes a mobile application called "Back to School" that aims to improve adult literacy in India. It divides users into three categories - Start, Moderate, and Expert - based on a survey of their English and language skills. The Start level teaches basics like alphabets and numbers, Moderate covers sentence formation and tenses, and Expert includes English grammar and computer basics. The application was created to provide educational opportunities to adults who missed formal schooling. A database stores user data, survey questions, course content, and test results to track progress. The goal is to help more Indians gain basic literacy skills through a convenient mobile learning system.
SMARCOS Abstract Paper submitted to ICCHP 2012Smarcos Eu
This study is part of the European project "Smarcos" (http://www.smarcos-project.eu/) that includes among its goals the development of services which are specifically designed and accessible for blind users.
In this paper we present the prototype application designed to make the main phone features available in a way which is accessible for a blind user. The prototype has been developed to firstly evaluate the interaction modalities based on gestures, audio and vibro-tactile feedback.
This document discusses human-computer interfaces, including an introduction to interfaces and how people interact with computers through information transfer. It outlines different types of interfaces such as command line, menu driven, graphical user interface, and natural language. Existing technologies are described as varying in complexity across physical, cognitive, and affective levels and involving human senses of vision, audio, and touch. Applications mentioned include intelligent homes/offices, games, and e-commerce.
IRJET - A Robust Sign Language and Hand Gesture Recognition System using Conv... (IRJET Journal)
This document presents a robust sign language and hand gesture recognition system using convolutional neural networks. The system captures image frames and processes them through various neural network layers to classify hand gestures into letters, numbers, or other symbols. It segments the hand from images using color thresholds and edge detection. The images then undergo preprocessing like resizing before being classified by the CNN. The CNN is trained on a large dataset to accurately recognize gestures in sign language and provide text output to bridge communication between deaf and non-signing individuals. The system achieved good results classifying several alphabet letters but could be expanded to recognize word combinations.
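The colour-threshold hand segmentation step described above can be sketched in NumPy. The exact thresholds the paper uses are not given in the summary; the rule below is a commonly published skin-detection heuristic, used purely for illustration:

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Binary mask of likely skin pixels from an HxWx3 uint8 RGB image.

    Uses a widely cited RGB skin rule; the paper's own thresholds
    are not stated in the summary, so these values are assumptions.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (r - np.minimum(g, b) > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

# A skin-coloured pixel passes; a saturated blue pixel does not.
img = np.array([[[180, 120, 90], [20, 40, 200]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False]]
```

The resulting mask would then be cleaned up (edge detection, cropping, resizing) before being fed to the CNN, as the summary outlines.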
IRJET - Speech Recognition using Android (IRJET Journal)
This document describes a speech recognition Android application that allows users to convert speech to text and text to speech. The application provides a fast, user-friendly interface for voice recognition and conversion. It uses modern algorithms to efficiently process speech signals and generate text. The application is useful for people with physical disabilities as it allows input and output without typing. The application development is discussed along with the technologies used, including the Android speech API and text-to-speech functionality.
Human computer interaction with Nalaemton (Nalaemton S)
Human Computer Interaction is one of the emerging concepts in the field of Computer Science and Engineering. It is a technology that is going to change the way we interact with computers. Enjoy viewing the magic of Computer Science and Engineering.
Technology in your hands.
- S. Nalaemton
This document discusses Human Computer Interfaces (HCI). It defines HCI as the interaction between humans and computers. It describes different types of interfaces like command line, menu driven, and graphical user interfaces. It also discusses existing technologies in HCI like physical devices that rely on vision, audio, and touch. The document outlines advances in areas like wearable devices, wireless technologies, and virtual realities. It describes architectures like uni-modal and multi-modal systems. Applications discussed include intelligent homes and helping disabled people.
Toward a New Algorithm for Hands Free Browsing (CSCJournals)
The role of the Internet in everyone's life is becoming more and more important. People who usually use the internet have full use of all parts of their body; unfortunately, members of society who are physically handicapped often cannot access or use this medium. The Integrated Browser is a computer application that browses the internet and displays information on screen. It targets handicapped users who have difficulties using a standard keyboard and/or a standard mouse. In addition, this browser, unlike the conventional method, works by responding to voice commands. It is characterized as an economical and humanitarian approach because it is handicap-friendly and is especially helpful to those who cannot use their hands. The Integrated Browser is a state-of-the-art technology that provides a better and faster Internet experience, and it is a program that will facilitate and enhance learning for slow and/or handicapped learners.
IRJET - Voice based E-Mail for Visually Challenged (IRJET Journal)
This document describes a voice-based email system designed for visually impaired users. The system allows blind users to access email functions like sending, receiving, and reading emails using only voice commands without needing a keyboard, mouse, or screen reader. Key features include text-to-speech and speech-to-text functions to convert between voice and text. The proposed system aims to make email more accessible for visually impaired people by eliminating the need to remember keyboard shortcuts or rely on others to access emails. It could help about 250 million people with visual impairments globally who currently have difficulty using the internet and email.
This document provides an overview of human-computer interfaces (HCI). It defines HCI as the process of information transfer between users and machines. The document outlines different types of interfaces including command line, menu driven, and graphical user interfaces. It discusses advances in HCI like wearable, wireless, and virtual devices. The document also covers HCI architecture, including unimodal and multimodal systems. It provides examples of applications of HCI and discusses advantages and disadvantages of various interface types.
Speech Based Search Engine System Control and User Interaction (IOSR Journals)
The document discusses a proposed speech-based search engine system to help visually impaired users interact with computers without keyboards or mice. The system would allow users to control the computer and search for information using only voice commands through speech recognition and synthesis technologies. It aims to make computers more accessible for visually impaired people and others who have difficulty using keyboards and mice. The proposed system could provide educational benefits and allow independent computer use through spoken interactions. A feasibility analysis found the system would be economically and technically feasible to implement using existing hardware, software and open-source technologies.
Assistive technology refers to any device or program that helps individuals with physical, cognitive, or sensory impairments. This includes simple devices like pencil grips as well as more advanced technologies like touchscreen computers. Federal laws like the Technology-Related Assistance Act and the Individuals with Disabilities Education Act require schools to consider and provide assistive technologies to support students with disabilities. Examples described in the document include hearing aids, screen readers, dictation software, and sip-and-puff systems that allow individuals to control computers through alternative means.
IRJET - Voice based E-Mail for Visually Impaired (IRJET Journal)
The document describes a voice-based email system for visually impaired people. It aims to allow visually impaired users to send and receive emails independently through voice commands without needing a screen reader or keyboard. The system uses speech recognition to convert voice inputs to text for composing emails and text-to-speech to read composed emails and responses aloud to users. It includes modules for user registration, login, accessing the inbox and sending emails entirely through voice while eliminating the need for keyboard shortcuts or screen readers. The system aims to improve accessibility and communication through email for visually impaired users.
The document discusses various types of human-computer interfaces. It describes interfaces such as command line interfaces, menu driven interfaces, and graphical user interfaces. It outlines the advantages and disadvantages of each type. The document also discusses other interfaces including natural language interfaces, virtual reality interfaces, and interfaces that can help disabled users interact with computers.
This document discusses assistive technology and its uses for people with disabilities. It defines assistive technology as tools that help people with everyday activities by working around specific deficits. Assistive technology includes both high-tech and low-tech devices that can help people of all ages with learning disabilities reach their full potential. The document also outlines several laws related to assistive technology and provides examples of specific assistive technologies such as FM systems, screen readers, alternative keyboards, and speech recognition software.
Ed 508 assistive technology powerpoint (kcommander)
This document defines assistive technology as any item or equipment that is used to increase, maintain, or improve the functional capabilities of individuals with disabilities. It discusses key assistive technology laws like the Americans with Disabilities Act and the Individuals with Disabilities Education Act. The document provides examples of low-tech and high-tech assistive devices that help with tasks like seeing, hearing, communicating, reading, writing, and using computers. Specific assistive devices are described for hearing impairment (Aria device), learning disabilities (Quicktionary Reading Pen), visual impairment (screen readers), and physical impairments (alternative mouse).
Users interact with ICT systems through interfaces that allow effective communication between humans and machines. Appropriate interface design is important to provide usability for all users. Key considerations for interface design include the need for dialogue between users and systems, as well as providing help and support tailored to user characteristics such as experience level, physical abilities, environment, and tasks to be performed. Effective interface types include graphical user interfaces (GUIs) and specialized interfaces for tasks like gaming that mimic real-world equipment.
Personal computers are primarily used by individual users to perform general tasks like word processing, emailing, browsing the internet, playing multimedia or games. More specific uses include working, learning, researching, banking, shopping, and interacting with public services online. While users understand how to operate the computer and common programs, most do not write their own programs and software for personal computers aims to be easy to use. A wide variety of software continues to be developed for personal computers, targeted at both expert and non-expert users.
This document discusses working with special needs students and Individualized Education Plans (IEPs). It explains that students with disabilities may qualify for IEPs to receive accommodations. Parents, teachers, and counselors collaborate to ensure students receive needed services. Some disabilities that may qualify students for services include autism, deafness, emotional/behavioral disorders, intellectual disabilities, orthopedic impairments, speech/language impairments, traumatic brain injuries, visual impairments, and learning disorders. Schools use differentiated instruction and assistive technologies to accommodate special needs. Examples of accommodations discussed include textbooks in large print or simplified language, screen readers, and optical character recognition software.
Automatic recognition of Arabic alphabets sign language using deep learning (IJECEIAES)
Technological advancements are helping people with special needs overcome many communication obstacles. Deep learning and computer vision models are enabling innovative leaps in facilitating unprecedented tasks in human interaction, and the Arabic language remains a rich research area. In this paper, different deep learning models were applied to test the accuracy and efficiency obtained in automatic Arabic sign language recognition. We provide a novel framework for the automatic detection of Arabic sign language based on transfer learning applied to popular deep learning models for image processing: we train AlexNet, VGGNet and GoogleNet/Inception models, and test the efficiency of shallow learning approaches based on the support vector machine (SVM) and nearest neighbors algorithms as baselines. As a result, we propose a novel approach for the automatic recognition of Arabic alphabets in sign language based on the VGGNet architecture, which outperformed the other trained models, achieving promising results with an accuracy score of 97%. The suggested models are tested against a recent fully labeled dataset of Arabic sign language images containing 54,049 images, which, to the best of our knowledge, is the first large and comprehensive real dataset of Arabic sign language.
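The nearest-neighbors baseline mentioned alongside SVM can be illustrated with a minimal 1-NN classifier over flattened image vectors. This is a toy sketch, not the authors' implementation; the two "images" and their labels are invented:

```python
import numpy as np

def nn_predict(train_x, train_y, query):
    """1-nearest-neighbour label for a flattened image vector."""
    dists = np.linalg.norm(train_x - query, axis=1)  # Euclidean distances
    return train_y[int(np.argmin(dists))]

# Toy "images": two 4-pixel classes standing in for letter signs.
train_x = np.array([[0., 0., 0., 0.], [1., 1., 1., 1.]])
train_y = ["alif", "ba"]
print(nn_predict(train_x, train_y, np.array([0.9, 1.0, 0.8, 1.1])))  # ba
```

Such shallow baselines set the floor that transfer-learned CNNs like the VGGNet variant are measured against.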
This document describes a smart glove system that translates sign language gestures into speech and text to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures, which are processed by an Arduino microcontroller. The Arduino identifies letters and words from the gestures and outputs them as speech from a connected speaker and as text on an Android phone app. The goal is to help deaf-mute individuals effectively convey information to people without sign language training by translating their gestures into audio and text in real-time.
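The glove's gesture lookup (mapping flex-sensor readings to letters or words) might look like the following sketch; the template table, tolerance, and sensor values are all invented for illustration:

```python
def match_gesture(reading, templates, tolerance=60):
    """Return the stored gesture whose 5-sensor template is closest
    to the current flex readings, or None if nothing is close enough.

    `templates` maps a label to a 5-tuple of expected ADC values;
    all numbers here are illustrative, not real calibration data.
    """
    best_label, best_dist = None, None
    for label, tmpl in templates.items():
        dist = sum((a - b) ** 2 for a, b in zip(reading, tmpl)) ** 0.5
        if best_dist is None or dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist is not None and best_dist <= tolerance else None

templates = {"A": (800, 200, 200, 200, 200), "B": (200, 800, 800, 800, 800)}
print(match_gesture((790, 210, 195, 205, 200), templates))  # A
```

On the Arduino side the same idea would run over raw ADC values before the matched label is sent to the speaker and the phone app.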
IRJET- Hand Gesture based Recognition using CNN Methodology (IRJET Journal)
This document summarizes a research paper on hand gesture recognition using convolutional neural networks (CNN). The paper aims to develop a system to recognize American Sign Language (ASL) to help facilitate communication for deaf individuals. The system would capture hand gestures via video and translate them into text. The researchers conducted a literature review on previous work using CNNs and 3D convolutional models for sign language recognition. They intend to implement a 3D CNN model on ASL data and analyze the results to improve recognition accuracy for communicating via sign language.
Digital voice over is a social project aimed at helping people with speech and hearing impairments communicate better with the public. There are approximately 9.1 billion deaf and hard of hearing people worldwide, and they encounter many problems while trying to communicate with society in daily life. Deaf and speech-impaired people often use sign language to communicate but have difficulty communicating with people who do not understand it. Sign language relies on patterns such as body language, gestures, and movements of the arms and fingers to convey information. This project was designed to meet the need for electronic devices that can translate sign language into speech in order to facilitate communication between the deaf and dumb and the public. Venkat P. Patil | Suyash Mali | Girish Ghadi | Chintamani Satpute | Amey Deshmukh "Hand Gesture Vocalizer" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7 | Issue-2, April 2023, URL: https://www.ijtsrd.com/papers/ijtsrd55157.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
The document describes a proposed mobile application to assist visually impaired users. The application would allow users to perform basic phone functions like reading messages, making calls, checking the time and date, and battery level through touch and voice inputs. It aims to provide these core capabilities through a single application, as most existing apps only focus on navigation. The system would use tap gestures as inputs and speech outputs to make smartphone usage easier for the visually impaired.
This document summarizes a research paper on developing a real-time sign language detector using computer vision and machine learning techniques. The researchers created a dataset of hand gestures for letters, numbers, and common signs in Indian Sign Language (ISL) using webcam photos. They used a pre-trained SSD MobileNet V2 model with transfer learning to classify the gestures with 70-80% accuracy. Their goal was to build a free and user-friendly app to help deaf and hard of hearing people communicate through automated sign language detection and translation, with the aim of closing communication gaps. The technology identifies selected ISL signs in low light and uncontrolled backgrounds using image processing and human movement classification algorithms.
This document is a project report submitted by three students - Ashwani Kumar, Ankit Raj, and Anand Abhishek - to Cochin University of Science & Technology in partial fulfillment of their Bachelor of Technology degree in Information Technology. The report describes a voice recognition mobile application called HandOVRS designed for physically handicapped users that can recognize common sounds in the home like doorbells, phones, and alarms and allow the user to select notification options like sending text messages.
IRJET- Communication Aid for Deaf and Dumb People (IRJET Journal)
This document describes a communication aid system for deaf and mute people that translates sign language gestures to text and speech. The system uses a glove with flex sensors that detect hand gestures. When a gesture is made, the sensors produce a signal that is matched to stored gesture inputs to translate letters, words, and sentences to speech and text. This helps remove communication barriers for the deaf by allowing them to convey meanings through gestures that are automatically translated. The system aims to bridge the gap between those who can hear and those with speech and hearing impairments.
This document describes the development of an automatic language translation software to aid communication between Indian Sign Language and spoken English using LabVIEW. The software aims to translate one-handed finger spelling input in Indian Sign Language alphabets A-Z and numbers 1-9 into spoken English audio output, and 165 spoken English words input into Indian Sign Language picture display output. It utilizes the camera and microphone of the device for image and speech acquisition, and performs vision and speech analysis for translation. The software is intended to help communication between deaf or speech-impaired individuals and those who do not understand sign language.
Hand Gesture Recognition and Translation Application (IRJET Journal)
The document describes a project to develop an Android application that can recognize American Sign Language (ASL) gestures in real-time using machine learning, translate the gestures to text, and translate the text to other languages. It discusses challenges faced by deaf people in communication and education. It then reviews different approaches to sign language recognition, including sensor-based methods using gloves or cameras, and vision-based methods using cameras and deep learning models. The goal of the project is to create a more accessible sign language translation tool without the need for specialized hardware.
While a hearing-impaired individual depends on sign language and gestures, a non-hearing-impaired person uses verbal language. Thus, there is a need for a means of mediation when a non-hearing-impaired individual who does not understand sign language wants to communicate with a hearing-impaired person. This paper is concerned with the development of a PC-based sign language translator to facilitate effective communication between hearing-impaired and non-hearing-impaired persons. A database of hand gestures in American Sign Language (ASL) is created using Python scripts. TensorFlow (TF) is used to create a pipeline configuration model for machine learning that matches annotated images of gestures in the database against real-time gestures. The implementation is done in the Python software environment and runs on a PC equipped with a web camera that captures real-time gestures for comparison and interpretation. The developed sign language translator is able to translate ASL gestures to written text, along with corresponding audio renderings, at an average duration of about one second. In addition, the translator is able to match real-time gestures with the equivalent gesture images stored in the database even at 44% similarity.
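The similarity matching described above (accepting a database gesture even at 44% similarity) can be sketched with cosine similarity over feature vectors. The 0.44 threshold comes from the abstract; the vectors and labels are made up:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(query, database, threshold=0.44):
    """Return (label, similarity) of the closest stored gesture,
    or None if no entry reaches the threshold."""
    label, sim = max(((l, cosine_similarity(query, v)) for l, v in database.items()),
                     key=lambda t: t[1])
    return (label, sim) if sim >= threshold else None

db = {"hello": [1.0, 0.0, 1.0], "thanks": [0.0, 1.0, 0.0]}
print(best_match([0.9, 0.1, 0.8], db))
```

The actual system compares learned detector outputs rather than raw vectors, but the accept/reject logic against a similarity floor is the same shape.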
This document is a project report submitted by Yellapu Madhuri for the degree of Master of Technology in Biomedical Engineering at SRM University. The project aims to develop an automatic language translation software to aid communication between Indian Sign Language and spoken English using LabVIEW. The software is designed to translate one-handed sign representations of alphabets, numbers, and 165 word phrases from sign language to speech and vice versa. It utilizes a camera for sign input and audio output for translated speech. The report discusses materials and methodology, results and discussions, and conclusions and future enhancements of the project.
This document discusses the development of an Indian Sign Language recognition system called SignReco. It begins with an abstract describing the challenges faced by deaf individuals communicating with others without translation and the benefits of a system that can recognize sign language. The paper then provides background on sign language and the goals of the proposed system, which is to classify and recognize Indian Sign Language in real-time using CNN and neural networks. A literature review covers prior work on sign language recognition systems. The proposed system's workflow and modules for model creation, language translation and app development are described. It concludes that the survey helped in developing an effective approach for an Indian Sign Language recognition system using CNN to improve accuracy.
IRJET- Voice Assistant for Visually Impaired People (IRJET Journal)
The document proposes developing a voice assistant application for Android to help visually impaired people perform daily tasks using their smartphones. The application would allow users to check messages, calls, take notes, use optical character recognition to read text images, get navigation assistance, and browse the web using text-to-speech and speech recognition. The application aims to make smartphones more accessible and useful for people with visual impairments.
IRJET- Hand Talk- Assistant Technology for Deaf and Dumb (IRJET Journal)
This document describes a smart glove system that translates sign language gestures into speech to help deaf and mute people communicate. The glove uses flex sensors on each finger to detect finger bending motions. An Arduino microcontroller processes the sensor data and sends it wirelessly via Bluetooth to an Android app. The app displays the sign language gesture and converts it to speech output. The goal is to help deaf and mute individuals communicate with hearing people by interpreting their sign language gestures into audible speech in real-time. The system is intended to bridge communication between those who understand sign language and those who do not.
Review Paper on Two Way Communication System with Binary Code Medium for Peop... (IRJET Journal)
1) The document discusses a proposed system to aid communication between blind, deaf, and mute individuals using binary code.
2) It reviews existing research on communication systems using Morse code and tactile methods.
3) The proposed system would convert speech to visual contexts and vibrations, and vice versa, using a multimodal approach to allow communication across disabilities.
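The binary-code medium in the proposal could be sketched as encoding text into on/off vibration frames; the 8-bit-per-character framing below is an assumption for illustration, not the proposal's actual scheme:

```python
def text_to_vibration(text: str) -> list[int]:
    """Encode text as a flat list of vibration states (1 = motor on),
    one 8-bit frame per character, most significant bit first.

    A plain 8-bit character framing is assumed here; the paper's
    actual encoding is not specified in the summary.
    """
    frames = []
    for ch in text:
        code = ord(ch)
        frames.extend((code >> bit) & 1 for bit in range(7, -1, -1))
    return frames

print(text_to_vibration("A"))  # 'A' = 0b01000001 -> [0, 1, 0, 0, 0, 0, 0, 1]
```

The reverse path (tactile input back to text) would decode the same frames, which is what makes a single binary medium work across visual, auditory, and tactile channels.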
Design of a Communication System using Sign Language aid for Differently Able... (IRJET Journal)
This document describes a proposed system to design a communication system using sign language to aid differently abled people. The system aims to use image processing and artificial intelligence techniques to recognize characters in sign language from video input and convert them to text and speech output. It discusses technologies like blob detection, skin color recognition and template matching that would be used for sign recognition. The system is intended to help deaf and mute people communicate by translating their sign language to a format understandable by others.
Gesture is one of the most common forms of sign language used in everyday communication. It is most commonly used by deaf and mute people who have difficulty hearing or speaking to communicate among themselves or with others. Various sign-language programs have been developed by manufacturers around the world, but few are flexible and affordable for end users. Therefore, this paper presents software that can automatically detect sign language to help deaf and mute people communicate better with other people. Pattern recognition and hand gesture recognition are developing fields of research, and as an integral part of non-verbal communication, hand gestures play a major role in our daily lives. A hand-gesture system offers a new, natural, easy-to-use way of communicating with a computer that is familiar to humans. Considering the structure of the human hand, with four fingers and one thumb, the software aims to introduce a real-time hand recognition system based on structural features such as position, mass centroid, and finger position, including whether each finger is raised or folded.
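One of the structural features named above, the mass centroid of the segmented hand, is simple to compute from a binary mask; a NumPy sketch with a toy mask:

```python
import numpy as np

def mass_centroid(mask: np.ndarray):
    """(row, col) centroid of the True pixels in a binary hand mask."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None  # empty mask: no hand detected
    return float(rows.mean()), float(cols.mean())

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True          # a 2x2 blob standing in for the hand
print(mass_centroid(mask))     # (1.5, 1.5)
```

Fingertip positions relative to this centroid are what let such systems decide whether a finger is raised or folded.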
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR (IRJET Journal)
This document summarizes research on a glove-based gesture recognition system using IR sensors. The system aims to help those who are deaf and mute communicate through hand gestures. An IR sensor and LED placed on a glove detect hand gestures based on the amount of light received by the sensor. The Arduino microcontroller recognizes the gestures and displays the meaning on an LCD screen while playing an audio message. The researchers claim this method is more accurate and has a lower error rate than conventional image processing approaches. It is intended to help address both safety and communication issues faced by those who are deaf or speech-impaired. Experimental results showed the system successfully recognized gestures and could help reduce the gap between those who are normal and speech-impaired.
Similar to Application for Iraqi sign language translation on Android system
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization and its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
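The mean IoU reported above is the per-class intersection-over-union of predicted and ground-truth masks; a NumPy sketch of the metric itself, on toy masks rather than the study's data:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two binary segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(iou(pred, truth))  # 2 intersecting pixels / 4 in union = 0.5
```

Mean IoU averages this quantity over classes, while weighted IoU weights each class by its pixel count, which is why the two figures quoted differ so much.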
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
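The precision and recall figures quoted can be reproduced from true/false positive counts; a standard-library sketch for the binary case, with invented labels:

```python
def precision_recall(y_true, y_pred):
    """Binary precision and recall from parallel label lists (1 = event)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy labels: 1 = aggressive-driving event detected.
print(precision_recall([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # (2/3, 2/3)
```

On an embedded target like the Arduino Nano 33 BLE Sense, the same counting would run over a fixed-size ring buffer of recent classifications rather than full label lists.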
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par... (IJECEIAES)
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of the methods used to adjust its parameters. The paper deals with the optimization of PID-regulator parameters using neural network methods. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Training algorithms based on minimizing the mismatch between the regulated value and the target value are developed, and back-propagation of gradients is used to select the optimal training rate of the network's neurons. The neural network optimizer, which is a superstructure over the linear PID controller, reduces the regulation error from 0.23 to 0.09, thereby reducing power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
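The controller whose gains the neural optimizer tunes is the standard discrete PID; a minimal sketch, where the gains, time step, and setpoint are illustrative values, not figures from the paper:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate the I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
print(pid.update(setpoint=1.0, measurement=0.8))  # approximately 2.401
```

The neural superstructure described in the abstract would adjust `kp`, `ki`, and `kd` online to drive the mismatch between the regulated and target values toward zero.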
An improved modulation technique suitable for a three level flying capacitor ... (IJECEIAES)
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
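The SPWM comparison at the heart of the scheme switches a leg on whenever the sinusoidal reference exceeds the triangular carrier; a NumPy sketch with illustrative frequencies and modulation index (the paper's added high-frequency square-wave component is omitted):

```python
import numpy as np

def spwm(t, m=0.8, f_ref=50.0, f_carrier=2000.0):
    """1 where the sinusoidal reference exceeds the triangular carrier."""
    reference = m * np.sin(2 * np.pi * f_ref * t)
    # Triangle wave in [-1, 1] at the carrier frequency.
    carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1
    return (reference > carrier).astype(int)

t = np.linspace(0, 0.02, 1000, endpoint=False)  # one 50 Hz fundamental period
gate = spwm(t)
print(gate[:10], gate.mean())  # duty cycle averages near 0.5 over a full period
```

A 3-level FCMLI would derive several such gate signals from level-shifted carriers; this sketch shows only the basic reference-versus-carrier comparison.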
A review on features and methods of potential fishing zone (IJECEIAES)
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features like sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques, thoroughly exploring the methodologies employed by researchers in both past and current approaches. The examination centers on data characteristics and the application of classification algorithms for identifying potential fishing zones, whose prediction relies significantly on the effectiveness of those algorithms. Previous research has assessed the performance of models like support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in earlier results, SVM classified fisheries test data with 97.6% accuracy compared to 94.2% for naïve Bayes. Considering recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f... (IJECEIAES)
As demand for smaller, quicker, and more powerful devices rises, the industry continues to follow Moore's law, working to build small devices that boost productivity and to optimize device density. Scientists are reducing interconnect delays to improve circuit performance, which led to three-dimensional integrated circuit (3D IC) concepts that stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical interference is a major concern in 3D ICs. Researchers have developed and tested through-silicon vias (TSVs) and substrates to decrease electrical wave interference. This study illustrates a novel noise coupling reduction method using several electrical interference models. A 22% drop in electrical interference from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained stronger momentum due to their numerous advantages over fossil fuel alternatives. The advantages go beyond sustainability to include financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farmer support the theoretical work and highlight its benefits to existing plants. The short return on investment supports the novelty of the proposed approach for a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption has increased quickly, contributing to climate change
that is evident in unusual flooding, droughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing climate change. The findings are
discussed in relation to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out research gaps, and offer recommendations on how to
increase women's role in addressing climate change and achieving
sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and attract
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences for the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis yields overall weights of
24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid
methods, 14.5% for remote methods, 26.6% for signal processing-based
methods, and 20.8% for computational intelligence-based methods when all
criteria are compared together. Thus, it can be seen from the total weights
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronizes
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed; the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapid and remote monitoring of solar cell system status parameters (solar irradiance, temperature, and humidity) is critical to enhancing system efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three different regions of Egypt (Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profiles, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of the monitoring results obtained using the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the solar power radiation results for the three regions strongly favor Luxor and Cairo over El-Beheira as suitable locations for building a solar cell power station.
An efficient security framework for intrusion detection and prevention in int... (IJECEIAES)
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled training data; such large labeled datasets are difficult to source in a network as large as the IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines (Christina Lin)
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS (IJNSA Journal)
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Literature Review Basics and Understanding Reference Management.pptx (Dr Ramhari Poudyal)
A three-day training on academic research, focusing on analytical tools, at United Technical College, supported by the University Grants Commission, Nepal, 24-26 May 2024.
Int J Elec & Comp Eng, Vol. 10, No. 5, October 2020, ISSN: 2088-8708
Application for Iraqi sign language translation on Android system (Miaad Ahmed Hussein)
Sign language recognition systems (SLRS) are one of the areas of human-computer interaction (HCI)
that have had a significant impact on the deaf community [2]. Sign language differs from
one country to another [3] because it is often the deaf themselves who develop it, identifying signs that
carry a special meaning for them. This is affected by the culture and environment in which they live.
For example, a deaf person in most Gulf countries points to his chest to indicate the color white, which
relates to the white Gulf garment they are used to wearing [4]; people of other countries and
cultures do not wear this traditional attire, so the sign holds no special meaning for them.
The aim of this paper is to create a smartphone application that translates Iraqi sign
language into its equivalent in classical Arabic and vice versa, as well as sending text messages (SMS).
This application helps the deaf to communicate with each other as well as with people with normal hearing.
This application is of great significance for the deaf to learn to read and write Arabic. Through monitoring of
many deaf people, it has been noticed that their ability to read and write Arabic gradually weakens after
graduating from deaf teaching institutes because they do not use these skills as they usually use sign
language. This constitutes a problem. Therefore, this application is designed to present the optimal solution
as it provides the deaf person with a feasible way to learn and use the Arabic language continuously.
This application is not only a means of communication, but also a way to continue learning and using Arabic.
It also aims at increasing their level of awareness and mastery of this language in a way that is no less
efficient than their level of sign language.
The use of this application falls within the areas of e-learning and distance learning as it facilitates
the education of deaf people who cannot join deaf institutes. It does not require the physical presence of
a person, but it suffices for the deaf person to have the free application on a smart phone device and learn
while being at home. This application seeks to store images of signs of all words, letters and numbers in
the Iraqi sign language and then link these images to the corresponding words in Arabic.
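As a rough illustration of this word-to-image linking, the lookup described above can be sketched as a small dictionary with a letter-by-letter fallback for words missing from the database. This is a hypothetical sketch, not the paper's actual implementation: the class, method, and image names are invented, and plain strings stand in for Android drawable resources.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the described sign dictionary: known words map to a
// single sign image; unknown words fall back to letter-by-letter signs.
public class SignDictionary {
    private final Map<String, String> wordSigns = new HashMap<>();
    private final Map<Character, String> letterSigns = new HashMap<>();

    public void addWord(String word, String imageId) { wordSigns.put(word, imageId); }

    public void addLetter(char letter, String imageId) { letterSigns.put(letter, imageId); }

    // Returns the sign image(s) for a word: one image if the word is stored,
    // otherwise one image per letter (alphabetical fingerspelling fallback).
    public List<String> translate(String word) {
        List<String> images = new ArrayList<>();
        if (wordSigns.containsKey(word)) {
            images.add(wordSigns.get(word));
        } else {
            for (char c : word.toCharArray()) {
                String img = letterSigns.get(c);
                if (img != null) images.add(img);
            }
        }
        return images;
    }

    public static void main(String[] args) {
        SignDictionary dict = new SignDictionary();
        dict.addWord("سلام", "sign_salam.png");
        dict.addLetter('د', "letter_dal.png");
        dict.addLetter('ا', "letter_alif.png");
        dict.addLetter('ر', "letter_ra.png");
        System.out.println(dict.translate("سلام")); // known word: single sign image
        System.out.println(dict.translate("دار"));  // unknown word: per-letter signs
    }
}
```

Because only image identifiers are stored and looked up locally, a structure like this needs no server or Internet connection, matching the offline design the paper emphasizes.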
2. RELATED WORKS
The development of sign language translation applications is a research-worthy topic that has been
given much attention by scholars. Being the focus of attention of many researchers, fruitful research papers in
this field have been published. The following is a review of some of this research.
Ludeña et al. introduced a system for translating Spanish sign language. This system is designed to
translate spoken sentences in Spanish into their equivalent in the Spanish sign language. A speech
recognition system was used to convert spoken sentences into written sentences, then the written sentences
are translated into the equivalent in Spanish sign language, using a virtual human (avatar). This system has
been applied to work in two domains, bus transport information and hotel reception; hence, most of
the vocabulary needed by staff and deaf users in these two domains has been covered [5].
In 2012, a chat system was designed by Hanadi et al. for deaf people, the first of its kind in the Arab
world. Through this system, deaf people can communicate with each other, share pictures and multimedia
and learn about each other's experiences. The system consists of two applications: sign language translation
tool and chat system, both of which use the same database which includes the unified Arabic sign language
dictionary to translate Arabic sign language [6]. Harsh et al. used a Kinect device that contains a set of sensors
with the ability to capture a user photo and take many shots that embody the user's movements when his
hands are above the trunk area. These snapshots are used as input to systems that use special algorithms to
distinguish images and compare them with special images in Indian sign language stored within a special
database. It is then translated into the corresponding Indian language [7].
Mohammed et al. developed a data acquisition and control (DAC) system that translates sign
language into text that can be read by anyone. The system is called gesture recognition and sign language
translator and it utilizes a smart glove that captures the gesture of the hand and interprets these gestures into
readable text. This text can be sent wirelessly to a smart phone or shown in an embedded LCD display [8].
In 2016, the health questionnaire EuroQol EQ-5D-5L used to assess an individual's general health
was of interest to Rogers et al. They translated this questionnaire from English to British sign language.
The descriptive system for this tool consists of five dimensions: movement – self-care – usual activities –
pain/discomfort – anxiety/depression. Each dimension has five levels: no problems – simple problems –
moderate problems – acute problems – severe problems. All these levels are written in the English language,
and when you click on any level, the video clip that represents it in the British sign language (BSL) is
displayed, so that the deaf person can choose the level that represents his health status [9].
Another system, produced by Basma et al., uses the leap motion controller (LMC). This device
works at a rate of 200 frames per second. These frames are used to determine the number, position and
rotation of the hands and fingers. The workflow consists of several steps: pre-processing, tracking,
feature extraction, and sign classification. This system can recognize fixed signals such as letters and numbers. It also
recognizes dynamic signals that include motions. Similarly, it proposes a method for dividing a series of
continuous signals based on palm speed tracking. Thus, the system not only translates the already fragmented
signals but is also able to translate continuous sentences [10].
Sujay et al. proposed a system to translate American sign language; this system consists of two stages:
- Stage 1: isolated gloss recognition system: The inputs to this stage are video clips, from which
American Sign Language glosses are recognized. This stage consists of video pre-processing
and a time-series neural network module.
- Stage 2: gloss to speech neural translator: This stage works on the outputs of the previous stage, where
the signals of the American Sign Language derived from the previous stage are translated into
the corresponding words in English [11].
Ebling et al., introduced a system for translating German sign language. Microsoft Kinect v2 depth
sensor and two GoPro Hero 4 Black video cameras were used, placed in front of the signer, in addition to
three cameras that are installed on the top, right, and left of the signer, so that signals are captured from
different angles. The well-known recording program BosphorusSign Recording Software was developed to
synchronize the capture from various devices [12].
A mobile application was designed to translate Pakistani sign language by Parvez et al.
Samples were selected from 192 deaf participants aged 5-10 years from two special institutes for children.
The aim of this study is to determine the effectiveness of the mobile interface through an advanced mobile
application to learn basic mathematical concepts using Pakistan sign language (PSL). This study bridges
the gap between technology-based and traditional teaching methods, which are used to teach mathematical
concepts using PSL [13].
3. BACKGROUND
Sign language translation using a computer or cell phone device is a form of machine translation
where words are translated from source language into target language using a sign language translation
system. Many scholars and researchers have dealt with the topic of translating sign language in different
ways, each method having its own scope of application and use. They have achieved different success rates.
Some studies have utilized cameras [14-16] or gloves with sensors [17-19] and many algorithms and artificial
intelligence techniques to distinguish the sign language. An optimal method cannot be identified because
what suits one situation may not be appropriate for another. For example, if a deaf person gives a lecture to a group of students, it is preferable to use the camera-based methods or electronic gloves equipped with sensors to translate the sign language. However, in the daily life of a deaf person, when communicating with family and friends or with anyone encountered on the road or at the work site, it is better to use a sign language translation system in the form of a mobile application, with data entry through a keyboard designed for this purpose. Such an application is easy to use and cost efficient. It is worth noting that most sign language translation systems are designed to be one-way only: some translate sign language into spoken language [20], while others translate spoken language into sign language [21].
This does not allow the full integration of deaf people into society. It is also noted that many of these systems use very large video clips or images to represent the sign language [22, 23]. Consequently, a dedicated server is required to store the video clips or images, and therefore an Internet connection, which restricts the use of these systems and reduces their flexibility. A study of the Arabic language and sign language showed that a root word, its derivatives, and its conjugations all have the same translation in sign language [24, 25]. Hence, this research aims at providing a flexible and free application.
4. METHODOLOGY
To explain how the Iraqi sign language translation system works, we present the following.
4.1. General overview
The steps involved in implementing the proposed system are as follows.
a. In this research, a free smartphone application is designed to translate Iraqi sign language into its equivalent in spoken Arabic. The application is built with “Android Studio” as the development environment, which produces applications that run on devices managed by the Android operating system. As for the programming language, Java Micro Edition was used because of the ease with which applications written in it can be transferred between different devices. Because the application requires a very large number of images representing the words, letters, and numbers of the sign language, and therefore a very large storage space, an efficient way to store the images in the least possible space was needed; for this purpose, “Blender” was used. Blender is a free 3D design program released under the GNU General Public License that is used to create 3D movies, film effects, interactive 3D programs, and video games; its capabilities include modeling, skinning, rigging, and animation [26, 27]. The program makes it possible to use a virtual human (avatar) to visualize sign language movements; the resulting avatar images are then stored in the application database for use in the translation process.
b. The grammar of sign language has been studied so that the sign language images can be represented properly, as sign language has its own way of representing derivations and verb conjugation, expressing the singular, dual, and plural, and expressing the feminine and masculine, among other grammatical features [1].
4.2. User interface
The user interface, shown in Figure 1, is designed to be easy to use. It contains the following components:
- A tape to display the Arabic text.
- The (ترجم / translate) button to translate the text into Iraqi sign language.
- A virtual human (avatar) to display the translation in Iraqi sign language.
- A button to show and hide the keyboard.
- An SMS button.
- The keyboard for entering text in classical Arabic, containing all 28 Arabic characters in addition to the special symbols (ة، ؤ، ئ، ء، ى), as well as a special button to display the numbers (0-9). Each button carries an image together with the letter or number it represents, so it can be used easily by hearing users and by deaf and mute users alike. A special button was also added to put a space between words, as well as a button for erasing.
- The (لغة الاشارة / sign language) button to go to the window for translating Iraqi sign language into standard Arabic.
Figure 1. User interface
4.3. Database
A database is designed with several fields. The first field “word” is designated for storing words,
letters, and numbers. The second field “img_name” contains the names of the images that correspond to
the words; note that some words need more than one image to be translated. The third field “Categories” contains the names of the categories, and the fourth field “Additional-img” contains the names of additional images used to express the direction of movement. When the user presses the (ترجم / translate) button, the application, through dedicated code, searches the database for the names of the images corresponding to each word separately, then calls a function for displaying images using the “img_name” field. The process of searching for image names and calling the image display function for each word continues until the end of the sentence is reached.
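The lookup loop described above can be sketched in plain Java. This is a minimal in-memory model, not the application's actual Android database code; the class and method names are hypothetical, and the map mirrors the “word” → “img_name” relation described above, where one word may map to several images:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SignDatabase {
    // Models the "word" and "img_name" fields: one word may need several images.
    private final Map<String, List<String>> imgNamesByWord = new HashMap<>();

    public void addWord(String word, String... imgNames) {
        imgNamesByWord.put(word, Arrays.asList(imgNames));
    }

    // Returns the image names for one word, or null when the word is not stored.
    public List<String> lookup(String word) {
        return imgNamesByWord.get(word);
    }

    // Repeats the lookup for each word until the end of the sentence is reached,
    // collecting the full sequence of images to display.
    public List<String> translateSentence(String sentence) {
        List<String> frames = new ArrayList<>();
        for (String word : sentence.trim().split("\\s+")) {
            List<String> imgs = lookup(word);
            if (imgs != null) {
                frames.addAll(imgs);
            }
        }
        return frames;
    }
}
```

The per-word search stops naturally when the split sentence is exhausted, matching the behavior described in the text.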
Int J Elec & Comp Eng, Vol. 10, No. 5, October 2020: 5227-5234 (ISSN: 2088-8708)
4.4. Image display
The process of displaying images from the database is handled by a programmed function that takes two parameters: a string holding the name of the image, and a numerical value for the display speed, set to 500 ms. Prior experience has shown this to be an ideal display speed on mobile devices managed by the Android operating system. Through this function, the sign language images are displayed as an animated film on the application screen, rendered as the movements performed by the virtual human (avatar).
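The timing of this display function can be modeled as follows. This is an illustrative sketch only: the real application would presumably drive the avatar frames from an Android handler or timer, while here only the frame schedule is computed, using the 500 ms delay stated above:

```java
import java.util.ArrayList;
import java.util.List;

public class FrameTiming {
    // Display speed reported in the text: one sign image every 500 ms.
    public static final int FRAME_DELAY_MS = 500;

    // Computes when each image in a sequence should appear, so the images
    // play back to back like frames of an animated film.
    public static List<Integer> startTimes(List<String> imgNames, int delayMs) {
        List<Integer> times = new ArrayList<>();
        for (int i = 0; i < imgNames.size(); i++) {
            times.add(i * delayMs);
        }
        return times;
    }
}
```

A three-image word would thus be shown at 0 ms, 500 ms, and 1000 ms, which is the back-to-back playback the text describes.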
5. HOW THE PROPOSED SYSTEM WORKS
The application operates in two modes:
a. First mode: from a hearing person to a deaf person, see Figure 2(a).
Text written in classical Arabic is translated into its equivalent in Iraqi sign language. The user writes the text with the keyboard, clicking on each letter or number, and the text appears in a dedicated display bar. This is implemented through a special method programmed in Java, whose operation depends on the index of each keyboard button, each index being linked to the letter or number that the button represents. After finishing the text, the user presses the (ترجم / translate) button, and the program searches the database for the names of the images corresponding to each word separately; the image display method is then called. If the user enters words that are not in the database, such as people's names or words containing spelling errors, the application displays the images of the letters of the text alphabetically.
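The fallback rule for unknown words can be illustrated with a short Java sketch. The class and method names are hypothetical and the maps stand in for the application's database; the point is the rule itself: a word found in the database yields its stored images, while any other word is spelled out letter by letter:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TextToSign {
    private final Map<String, List<String>> wordImages = new HashMap<>();
    private final Map<Character, String> letterImages = new HashMap<>();

    public void addWord(String word, List<String> images) { wordImages.put(word, images); }
    public void addLetter(char letter, String image) { letterImages.put(letter, image); }

    // Known words use their stored sign images; unknown words (names of people,
    // misspellings) fall back to displaying one image per letter.
    public List<String> translate(String text) {
        List<String> out = new ArrayList<>();
        for (String word : text.trim().split("\\s+")) {
            List<String> imgs = wordImages.get(word);
            if (imgs != null) {
                out.addAll(imgs);
            } else {
                for (char c : word.toCharArray()) {
                    String img = letterImages.get(c);
                    if (img != null) {
                        out.add(img);
                    }
                }
            }
        }
        return out;
    }
}
```

This makes the translation total: every input sentence produces some image sequence, either word signs or letter signs.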
b. Second mode: from a deaf person to a hearing person, see Figure 2(b).
Iraqi sign language is translated into classical Arabic. Pressing the (لغة الاشارة / sign language) button on the user interface switches to the second mode. To make the required images easy to find, they are organized into categories (alphabet, language, verbs, adjectives, history, geography, etc.). Once the button is clicked, a list of these categories is displayed; clicking any category displays the images representing the words in that category, and clicking any image writes the word that image represents on a tape for translation into Arabic text. The user can then send the text as an SMS message. As the application deals with a complete language, it is difficult to absorb all the vocabulary of the language in a short period, so the application is designed to be open source: more words can be added whenever needed, so that as many words as possible are stored over time.
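The category browsing of the second mode can likewise be sketched in plain Java. The names here are hypothetical and the nested map stands in for the “Categories” field of the database; tapping an image appends its word to the translation tape, whose text is what would be handed to the SMS sender:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class SignToText {
    // category name -> (image name -> word that image represents)
    private final Map<String, Map<String, String>> categories = new LinkedHashMap<>();
    private final StringBuilder tape = new StringBuilder();

    public void addCategory(String name, Map<String, String> imageToWord) {
        categories.put(name, imageToWord);
    }

    public Set<String> listCategories() { return categories.keySet(); }

    public Set<String> imagesIn(String category) {
        return categories.get(category).keySet();
    }

    // Tapping an image writes the corresponding word on the translation tape.
    public void tapImage(String category, String imageName) {
        String word = categories.get(category).get(imageName);
        if (tape.length() > 0) tape.append(' ');
        tape.append(word);
    }

    // The tape text is the standard Arabic sentence built so far.
    public String tapeText() { return tape.toString(); }
}
```

In the real application the tape text would be passed to the platform's SMS facility; that step is omitted here since it is device-specific.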
(a)
(b)
Figure 2. Application's modes: (a) first mode and (b) second mode
6. PRACTICAL EXAMPLES
6.1. Example 1
If the user uses the first mode and writes a sentence, for example "أذهب الى المدرسة ألتعلم" (I go to school to learn), then clicking the (ترجم) button translates the sentence into Iraqi sign language, as shown in Figure 3.
- The word "أذهب" is translated by displaying Figures 3(a) and (b).
- The word "الى" is translated by displaying Figures 3(c) and (d).
- The word "المدرسة" is translated by displaying Figures 3(e) and (f).
- The word "ألتعلم" is translated by displaying Figures 3(g) and (h).
(a) (b) (c) (d)
(e) (f) (g) (h)
Figure 3. Translate a sentence written in standard Arabic into its equivalent in Iraqi sign language
6.2. Example 2
If the user uses the first mode and writes the name of a person, for example "احمد حسين" (Ahmed Hussein), then, as is well known, sign language does not contain images for names, so the name is translated alphabetically, as shown in Figure 4.
- In Figure 4(a), the image represents the letter "الف" (alif).
- In Figure 4(b), the image represents the letter "حاء" (haa).
- In Figure 4(c), the image represents the letter "ميم" (meem).
- In Figure 4(d), the image represents the letter "دال" (dal).
- In Figure 4(e), the image represents the letter "حاء" (haa).
- In Figure 4(f), the image represents the letter "سين" (seen).
- In Figure 4(g), the image represents the letter "ياء" (yaa).
- In Figure 4(h), the image represents the letter "نون" (noon).
The speed with which these images are displayed is determined programmatically so that the display is clear and comfortable for the eye.
6.3. Example 3
If the user uses the second mode by clicking on the (لغة الاشارة / sign language) button, a list of category names appears, as shown in Figure 5. Selecting a category and clicking on it displays the sign language images for that category. For example, if the user wants to choose images for the phrase "أقرأ كتاب" (I read a book), he clicks on the category "الأفعال" (verbs) and selects from it the image representing the word "أقرأ", as shown in Figure 6(a). He then moves to the category "المدرسة ومستلزماتها" (school and its supplies) and selects the image for the word "كتاب", as in Figure 6(b); thus, the sentence is translated from sign language into standard Arabic.
(a) (b) (c) (d)
(e) (f) (g) (h)
Figure 4. Translate words alphabetically
(a) (b)
Figure 5. List of category names
Figure 6. Translating from Iraqi sign language to standard Arabic
7. CONCLUSION
The use of this application facilitates the integration of deaf people into society through one of the easiest and most widely used means in the world: mobile phone applications. This paper puts forward a system for translating Iraqi sign language, chosen because of the lack of applications for this dialect. The application is an educational system that can be used to teach deaf people Iraqi sign language, as well as how to read and write standard Arabic. It also lets hearing people learn Iraqi sign language easily, removing barriers between deaf and hearing people and facilitating communication between them. Since the application works without the Internet, it is easy to use at any time and in any place. The use of the Blender program makes it possible to store the huge number of images the application needs, by dividing the images and reducing their display resolution, and thus reducing their size.
There are two proposals for future work: 1) expand the system to include voice input and output for hearing users, thus facilitating further communication; and 2) apply a root word extraction algorithm for Arabic, so that the program operates on the outputs of this algorithm. The root word would be stored in the database on behalf of the derivatives and conjugations of that word, which all have the same translation in sign language. This could reduce the size of the database and the number of images needed for translation, increasing the efficiency of the translation process.
ACKNOWLEDGEMENTS
We thank the University of Mosul, represented by all its colleges and departments, especially the Department of Computer Science. Our thanks and appreciation go to the members of the journal for allowing us to present our research. We especially thank Al-Amal Institute for Special Needs Care, Mosul, for their cooperation in highlighting effective ways of helping deaf people integrate into society.
REFERENCES
[1] S. Samreen, M. Al-Binali, “Standard Qatari Sign Language,” Doha The Supreme Council for Doha Affairs, 2010.
[2] N. B. Ibrahim, M. M. Selim, and H. H. Zayed, “An Automatic Arabic Sign Language Recognition
System (ArSLRS),” Journal of King Saud University - Computer and Information Sciences, vol. 30, issue 4,
pp. 470-477, 2018.
[3] M. Abdo, et al., “Arabic Alphabet and Numbers Sign Language Recognition,” International Journal of Advanced
Computer Science and Applications, vol. 6, no. 11, pp. 209-214, 2015.
[4] H. H. Nasereddin, “MMLSL: Modelling Mobile Learning for Sign Language,” Of Engineering and Computer
Science Research and Reviews in Applied Sciences, vol. 9, no. 2, pp. 20267-20272, 2017.
[5] V. López-Ludeña, et al., “Methodology for developing a Speech into Sign Language Translation System in a New
Semantic Domain,” 2012.
[6] H. Al-Dosri, N. Alawfi, and Y. Alginahi, “Arabic sign language easy communicate ArSLEC,” in Proc. Int. Conf.
Comput. Inf. Technol, pp. 474-479, 2012.
[7] H. V. Verma, E. Aggarwal and S. Chandra, "Gesture recognition using kinect for sign language translation," 2013
IEEE Second International Conference on Image Information Processing (ICIIP-2013), Shimla, pp. 96-100, 2013.
[8] M. Elmahgiubi, M. Ennajar, N. Drawil and M. S. Elbuni, "Sign language translator and gesture recognition," 2015
Global Summit on Computer & Information Technology (GSCIT), Sousse, pp. 1-6, 2015.
[9] K. D. Rogers, et al., “Translation, validity and reliability of the British Sign Language (BSL) version of
the EQ-5D-5L,” Quality of Life Research, vol. 25, no. 7, pp. 1825-1834, 2016.
[10] B. Hisham and A. Hamouda, “Arabic Static and Dynamic Gestures Recognition Using Leap Motion,” JCS, vol. 13,
no. 8, pp. 337-354, 2017.
[11] S. S. Kumar, T. Wangyal, V. Saboo and R. Srinath, "Time Series Neural Networks for Real Time Sign Language
Translation," 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando,
FL, pp. 243-248, 2018.
[12] S. Ebling, et al., “SMILE Swiss German sign language dataset,” in Proceedings of the Eleventh International
Conference on Language Resources and Evaluation (LREC 2018), pp. 4221-4229, 2018.
[13] K. Parvez, et al. “Measuring Effectiveness of Mobile Application in Learning Basic Mathematical Concepts Using
Pakistan Sign Language,” Sustainability, vol. 11, no. 11, p. 3064, 2019. doi: https://doi.org/10.3390/su11113064
[14] A. A. Youssif, et al., “Arabic Sign Language (ArSL) Recognition System Using HMM,” International Journal of
Advanced Computer Science and Applications, vol. 2, no. 11, pp. 45-51, 2011.
[15] N. R. Albelwi and Y. M. Alginahi, “Real-time Arabic sign language (arsl) recognition,” in International
Conference on Communications and Information Technology, pp. 497-501, 2012.
[16] S. Shahriar et al., "Real-Time American Sign Language Recognition Using Skin Segmentation and Image Category
Classification with Convolutional Neural Network and Deep Learning," TENCON 2018 - 2018 IEEE Region 10
Conference, Jeju, Korea (South), pp. 1168-1171, 2018.
[17] P. D. Rosero-Montalvo et al., "Sign Language Recognition Based on Intelligent Glove Using Machine Learning
Techniques," 2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM), Cuenca, pp. 1-5, 2018.
[18] B. Sapkota, et al., “Smart Glove for Sign Language Translation Using Arduino,” in 1st KEC Conference
Proceedings, vol. 1, pp. 5-11, 2018.
[19] S. A. Mehdi and Y. N. Khan, "Sign language recognition using sensor gloves," Proceedings of the 9th International
Conference on Neural Information Processing, 2002. ICONIP '02., Singapore, vol. 5, pp. 2204-2206, 2002.
[20] R. Ambar, et al., “Development of a Wearable Device for Sign Language Recognition,” Journal of Physics:
Conference Series, vol. 1019, p. 012017, 2018. doi: 10.1088/1742-6596/1019/1/012017
[21] A. Wray, et al., “A formulaic approach to translation at the post office: reading the signs,” Language &
Communication, vol. 24, no. 1, pp. 59–75, 2004.
[22] F. Al Ameiri, M. J. Zemerly and M. Al Marzouqi, "Mobile Arabic sign language," 2011 International Conference
for Internet Technology and Secured Transactions, Abu Dhabi, pp. 363-367, 2011.
[23] A. Almohimeed, M. Wald, and R. I. Damper, “Arabic text to Arabic sign language translation system for the deaf
and hearing-impaired community,” in Proceedings of the Second Workshop on Speech and Language Processing
for Assistive Technologies, pp. 101-109, 2011.
[24] R. Z. Al-Obaidy, “Sentiment analysis for Arabic tweets using support vector machine,” Master's thesis, College of
Computer Sciences And Mathematics, University of Mosul, 2019.
[25] A. Y. Taqa, “Constructing Anti – spam filter based on naive bayesian classifier,” PhD thesis, College of Computer
Sciences And Mathematics, University of Mosul, 2007.
[26] “Blender,” Wikipedia. [Online]. Available: https://ar.wikipedia.org/wiki/%D8%A8%D9%84%D9%86%D8%AF%D8%B1, [accessed: 2/1/2020].
[27] “Top 55+ Best Animation Software: The Ultimate List 2019,” Renderforest, 2019. [Online]. Available:
https://www.renderforest.com/blog/best-animation-software, [accessed: Aug 9 2019].
BIOGRAPHIES OF AUTHORS
Miaad Ahmed Alobaidy received her B.Sc. in computer science in 2017 from the College of Computer Sciences and Mathematics, University of Mosul. She is currently an M.Sc. student in computer science, in the field of image processing, at the University of Mosul. Her interests include databases and website design.
Sundus Khaleel Ebraheem received her B.Sc. in computer science in 1996 and her M.Sc. in computer science, in the field of image processing, in 2001, both from the University of Mosul. She is currently working as an assistant professor in the College of Computer Sciences and Mathematics, University of Mosul. She has 22 years of teaching experience and 19 research papers published in various scientific journals.