This paper describes a command interface for games based on hand gestures and voice commands, where gestures are defined by posture, movement and location. The system uses computer vision and requires no sensors or markers on the user. For voice commands, a speech recognizer interprets the user's input, stores it and passes the command to the game, where the corresponding action takes place. We propose a simple architecture for real-time colour detection and motion tracking using a webcam. The motion of the specified colours is tracked, and the resulting actions are given as input commands to the system; blue is used for motion tracking and green for the mouse pointer. Speech recognition is the process of automatically recognizing a word spoken by a particular speaker based on individual information contained in the speech waves. This application reduces hardware requirements and can also be implemented in other electronic devices.
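The colour-detection and tracking step described above can be sketched as follows. This is a minimal pure-Python stand-in for the webcam pipeline; the colour thresholds are illustrative assumptions, since the system's exact ranges are not given.

```python
# Sketch: detect a coloured marker in an RGB frame and return its centroid.
# The colour range below is an assumed stand-in for the system's thresholds.

def in_range(pixel, lo, hi):
    """True if every channel of an (R, G, B) pixel lies within [lo, hi]."""
    return all(l <= c <= h for c, l, h in zip(pixel, lo, hi))

def track_colour(frame, lo, hi):
    """Return the (row, col) centroid of pixels matching the colour range,
    or None when no pixel matches (marker not visible)."""
    hits = [(r, c) for r, row in enumerate(frame)
                   for c, px in enumerate(row) if in_range(px, lo, hi)]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# Tiny synthetic 3x3 frame: a blue marker (0, 0, 255) in the centre.
frame = [[(0, 0, 0), (0, 0, 0),   (0, 0, 0)],
         [(0, 0, 0), (0, 0, 255), (0, 0, 0)],
         [(0, 0, 0), (0, 0, 0),   (0, 0, 0)]]
BLUE_LO, BLUE_HI = (0, 0, 200), (80, 80, 255)
print(track_colour(frame, BLUE_LO, BLUE_HI))  # (1.0, 1.0)
```

The returned centroid is what would be translated into a game input command or mouse-pointer position frame by frame.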
A Framework for Dynamic Hand Gesture Recognition Using Key Frames Extraction (Neeraj Baghel)
Abstract—Hand gesture recognition is one of the natural ways of human computer interaction (HCI) and has a wide range of technological as well as social applications. A dynamic hand gesture can be characterized by its shape, position and movement. This paper presents a user-independent framework for dynamic hand gesture recognition in which a novel algorithm for the extraction of key frames is proposed. The algorithm uses the change in hand shape and position, together with certain parameters and a dynamic threshold, to find the most important and distinguishing frames in the video of a hand gesture. For classification, a multiclass support vector machine (MSVM) is used. Experiments on videos of Indian Sign Language hand gestures show the effectiveness of the proposed system for various dynamic gestures. The key frame extraction algorithm speeds up the system by selecting only the essential frames, eliminating extra computation on redundant frames.
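The key-frame idea can be sketched as below. The difference metric and the mean-based dynamic threshold are illustrative assumptions; the paper's exact parameters are not reproduced here, and frames are simplified to 1-D feature vectors standing in for hand shape and position.

```python
# Sketch: keep frames whose change from the last kept frame exceeds a
# dynamic threshold.  The mean inter-frame difference is an assumed
# stand-in for the paper's thresholding rule.

def frame_distance(a, b):
    """Sum of absolute feature differences between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b))

def extract_key_frames(frames):
    """Return indices of key frames: the first frame, plus every frame
    whose distance from the previously kept frame exceeds the mean
    inter-frame distance of the whole sequence."""
    if not frames:
        return []
    diffs = [frame_distance(frames[i - 1], frames[i])
             for i in range(1, len(frames))]
    threshold = sum(diffs) / len(diffs) if diffs else 0.0
    keys, last = [0], frames[0]
    for i in range(1, len(frames)):
        if frame_distance(last, frames[i]) > threshold:
            keys.append(i)
            last = frames[i]
    return keys

video = [[0, 0], [0, 1], [5, 5], [5, 5], [9, 0]]  # synthetic gesture
print(extract_key_frames(video))  # [0, 2, 4]
```

Note how the near-duplicate frame at index 3 is skipped, which is exactly the redundant computation the paper's algorithm avoids.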
A man-machine interaction project is described which aims to establish an automated voice-to-sign-language translator for communication with the deaf using integrated open technologies. The first prototype consists of a robotic hand designed with OpenSCAD and manufactured with a low-cost 3D printer, which smoothly reproduces the sign language alphabet under voice control alone. The core automation comprises an Arduino UNO controller used to activate a set of servo motors that follow instructions from a Raspberry Pi mini-computer on which the open-source speech recognition engine Julius is installed. We discuss its features, limitations and possible future developments.
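The Raspberry-Pi-side step of that pipeline can be sketched as a lookup from a recognized letter to per-finger servo angles. The angle table and the 0-180 convention below are purely illustrative assumptions; the real prototype stores one posture per letter and sends it to the Arduino UNO, which drives the servos.

```python
# Sketch: map a recognized letter to per-finger servo angles for a robotic
# sign-language hand.  The postures below are made-up examples; actual
# serial output to the Arduino is replaced by returned values.

# Assumed convention: 0 = finger fully curled, 180 = fully extended.
POSTURES = {
    "a": {"thumb": 180, "index": 0, "middle": 0, "ring": 0, "pinky": 0},
    "b": {"thumb": 0, "index": 180, "middle": 180, "ring": 180, "pinky": 180},
}

def servo_commands(letter):
    """Return a sorted list of (servo_name, angle) pairs for one letter,
    or an empty list for letters without a stored posture."""
    posture = POSTURES.get(letter.lower())
    if posture is None:
        return []
    return sorted(posture.items())

print(servo_commands("A"))
```

In the actual device, each pair would become one pulse-width command on the servo controller rather than a printed tuple.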
Dynamic Hand Gesture Recognition for Human Computer Interaction (Kunika Barai)
Dynamic hand gesture recognition for human computer interaction gives deaf and mute people the opportunity to interact directly with a computer using sign language.
Speech is the most important means of communication for people, and using speech as an interface to processes has become more important with improvements in artificial intelligence. In this project, a wheelchair is controlled by speech commands. The commands are captured by a microphone, features are extracted with the Mel Frequency Cepstral Coefficients (MFCC) algorithm, and they are recognized with the help of artificial neural networks. Finally, the commands are converted into a form the wheelchair can recognize so that it moves accordingly. The proposed system is a robotic vehicle operated by human speech commands. It uses an Android device that transmits voice commands to a Raspberry Pi: the transmitter is the Android phone's Bluetooth device, and the voice commands recognized by the module are sent through it. These commands move the wheelchair left, right, backward and forward. A Bluetooth receiver mounted on the Raspberry Pi recognizes and decodes the transmitted commands, and the controller then drives the vehicle motors through a driver IC that controls the motor movements. The Bluetooth link used to transmit and receive data allows the system to be operated remotely within a good range. The voice-operated robot moves according to the commands given to the voice recognition module: the robot receives each command, matches it against the stored program, and then executes it using wireless communication.
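The final matching step, where a received command is compared against the stored program and turned into motor directions, can be sketched as below. The command-to-motor table is inferred from the four directions named in the abstract; real pin control via the driver IC is replaced by returned values.

```python
# Sketch: decode a voice command received over Bluetooth into (left wheel,
# right wheel) motor directions.  The table is an illustrative assumption.

MOTOR_TABLE = {
    "forward":  ("fwd", "fwd"),
    "backward": ("rev", "rev"),
    "left":     ("rev", "fwd"),   # spin left: left wheel back, right fwd
    "right":    ("fwd", "rev"),
    "stop":     ("off", "off"),
}

def decode_command(received):
    """Match the received text against the stored command set; unknown
    commands stop the chair as a safe default."""
    return MOTOR_TABLE.get(received.strip().lower(), MOTOR_TABLE["stop"])

print(decode_command(" LEFT "))   # ('rev', 'fwd')
print(decode_command("jump"))     # ('off', 'off')
```

Defaulting unknown input to "stop" is a deliberate safety choice for a wheelchair, where a mis-heard command should never produce motion.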
Smart Assistant for Blind Humans using Rashberry PI (IJTSRD)
An OCR (Optical Character Recognition) system is a branch of computer vision and in turn a sub-class of artificial intelligence. Here, optical character recognition is the translation of optically scanned bitmaps of printed or handwritten text into audio output using a Raspberry Pi. OCRs developed for many world languages are already in efficient use. The method extracts the moving object region with a mixture-of-Gaussians-based background subtraction method, then performs text localization and recognition to acquire text information. To automatically localize text regions in the object, a text localization and Tesseract algorithm learns gradient features of stroke orientations and distributions of edge pixels in an AdaBoost model. Text characters in the localized regions are then binarized and recognized by off-the-shelf optical character recognition software, and the recognized text is output to blind users as speech. Once recognition is complete, the character codes in the text file are processed on the Raspberry Pi using the Tesseract algorithm and Python programming, and the audio output is played. Abish Raj. M. S | Manoj Kumar. A. S | Murali. V, "Smart Assistant for Blind Humans using Rashberry PI", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-2, Issue-3, April 2018. URL: http://www.ijtsrd.com/papers/ijtsrd11498.pdf http://www.ijtsrd.com/computer-science/embedded-system/11498/smart-assistant-for-blind-humans-using-rashberry-pi/abish-raj-m-s
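The binarization step applied to localized text regions before OCR can be sketched as below. A simple global mean threshold stands in for the production method; the real system hands the binary image to Tesseract afterwards.

```python
# Sketch: binarize a grayscale text region.  Global mean thresholding is an
# assumed simplification of the system's actual binarization step.

def binarize(gray):
    """Binarize a grayscale image (list of rows of 0-255 ints) using the
    global mean as threshold: dark text -> 1, light background -> 0."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    return [[1 if p < mean else 0 for p in row] for row in gray]

region = [[250, 30, 250],
          [250, 40, 250],
          [250, 20, 250]]   # a dark vertical stroke on a light background
print(binarize(region))     # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

A global threshold works on evenly lit regions like this toy example; uneven lighting is one reason production pipelines often prefer adaptive thresholding.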
SixthSense is a name for extra information supplied by a wearable computer, as in devices such as EyeTap (Mann), Telepointer (Mann), and "WuW" (Wear yoUr World) by Pranav Mistry.
Day by day, many efforts are being made towards developing an intelligent and natural interface between computer systems and users, and with today's technologies this has become possible through a variety of media such as vision, audio and graphics. Gesture has become an important part of human communication for conveying information. In this paper we propose a method for hand gesture recognition that includes hand segmentation, hand tracking and an edge traversal algorithm. We have designed a system whose hardware is limited to a computer and a webcam. The system consists of four modules: hand tracking and segmentation, feature extraction, neural training, and testing. The objective of this system is to explore the utility of a neural network-based approach to hand gesture recognition and to create a system that easily identifies gestures and uses them for device control and conveying information, instead of normal input devices such as mouse and keyboard.
Natural Hand Gestures Recognition System for Intelligent HCI: A Survey (Editor, IJCATR)
Gesture recognition is the recognition of meaningful expressions of motion by a human, involving the hands, arms, face, head and/or body. Hand gestures are of great importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. This paper provides a survey of various recent gesture recognition approaches, with particular emphasis on hand gestures. A review of static hand posture methods is given, covering the different tools and algorithms applied in gesture recognition systems, including connectionist models, hidden Markov models and fuzzy clustering. Challenges and future research directions are also highlighted.
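Of the tools the survey names, hidden Markov models are a natural fit for gestures because a gesture is a sequence of hand states. A minimal sketch of the HMM forward algorithm, which scores an observation sequence under a model, is shown below; the two-state model and its probabilities are toy values for illustration only.

```python
# Sketch: HMM forward algorithm for scoring an observation sequence, one of
# the gesture-classification tools named in the survey.  Toy probabilities.

def forward(obs, start, trans, emit):
    """Return P(obs sequence) under an HMM with the given start,
    transition and emission probability tables (dicts keyed by state)."""
    states = list(start)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states}
    return sum(alpha.values())

# Two hidden hand states and a binary motion-energy observation.
start = {"open": 0.6, "fist": 0.4}
trans = {"open": {"open": 0.7, "fist": 0.3},
         "fist": {"open": 0.4, "fist": 0.6}}
emit  = {"open": {"hi": 0.8, "lo": 0.2},
         "fist": {"hi": 0.1, "lo": 0.9}}

p = forward(["hi", "lo"], start, trans, emit)
print(round(p, 4))  # 0.2216
```

For gesture classification, one such model is trained per gesture class and a new sequence is assigned to the model giving it the highest likelihood.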
The goal of this project is to provide a platform that allows communication between able-bodied and disabled people, or between computers and human beings. There has been great emphasis in human-computer interaction research on creating easy-to-use interfaces by directly employing the natural communication and manipulation skills of humans. As the hand is an important part of the body, recognizing hand gestures is very important for human-computer interaction, and in recent years there has been a tremendous amount of research on hand gesture recognition.
Demonstration of Visual-Based and Audio-Based HCI System (eSAT Journals)
Abstract: This paper is an attempt to provide a bird's-eye view of the concept of Human Computer Interaction (HCI). The intention is to focus on the uni-modal architecture of HCI, especially HCI systems based on visual and audio communication channels, namely colour recognition and speech recognition. We have developed a Graphical User Interface (GUI) for this in MATLAB: one push button is assigned to colour input (through the webcam) and the other to speech input (through the microphone). In colour recognition, the primary colours (RGB) are detected in frames captured in real time or in images uploaded offline, and the desired operation is then executed (we have set commands to open the D drive). In speech recognition, audio input through the microphone is compared with a pre-stored audio file and an operation is then performed automatically (here, we have set commands to open the Google web browser). The algorithms of these two processes are described with flow charts, and snapshots of the MATLAB results are displayed. Keywords: Human Computer Interaction, Uni-Modal Architecture, Colour Recognition, Speech Recognition
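The speech branch above compares live input against a pre-stored audio file before triggering its command. A sketch of that matching step is below; cosine similarity on raw samples is an assumed stand-in, since the paper does not specify its comparison method.

```python
# Sketch: compare microphone input with a pre-stored audio template and
# trigger the command only on a close match.  Cosine similarity is an
# illustrative assumption, not the paper's stated method.
import math

def similarity(a, b):
    """Cosine similarity of two equal-length sample sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def matches(recorded, template, threshold=0.9):
    """True when the input closely matches the stored template."""
    return similarity(recorded, template) >= threshold

stored = [0.0, 1.0, 0.5, -0.5]
print(matches([0.0, 1.0, 0.5, -0.5], stored))   # True
print(matches([1.0, 0.0, -1.0, 0.0], stored))   # False
```

In practice the comparison would run on time-aligned features (e.g. MFCCs) rather than raw samples, but the gating logic is the same.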
Wake-Up-Word Speech Recognition using GPS on Smart Phone (IJERA Editor)
Wake-Up-Word (WUW) is a new, not yet widely recognized paradigm of speech recognition. Lately the use of GPS in everyday life has increased widely, which means our needs have changed: in the digital era we can use a new paradigm for voice control of a map, which would benefit people while driving a car. In this paper we present a set of voice commands that integrate map and navigation voice control. Using voice control for a Global Positioning System (GPS) helps determine and track the precise location using the Google API. The benefit of this application would be avoiding car accidents by using speech commands instead of typing.
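The wake-up-word idea can be sketched as a gate: navigation commands are acted on only when preceded by the wake word. The wake word and command set below are illustrative assumptions, not the paper's actual vocabulary.

```python
# Sketch: Wake-Up-Word gating for map voice commands.  Wake word and
# command names are hypothetical examples.

WAKE_WORD = "navigator"   # assumed wake word
COMMANDS = {"zoom in", "zoom out", "go home", "find route"}

def handle(utterance):
    """Return the command to execute, or None when the wake word is
    absent or the remainder is not a known command."""
    words = utterance.lower().strip()
    if not words.startswith(WAKE_WORD):
        return None
    command = words[len(WAKE_WORD):].strip()
    return command if command in COMMANDS else None

print(handle("Navigator zoom in"))   # 'zoom in'
print(handle("zoom in"))             # None (no wake word)
```

The gate is what lets the recognizer listen continuously in a noisy car without firing on ordinary conversation.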
A Translation Device for the Vision Based Sign Language (ijsrd.com)
Sign language is very important for people with hearing and speaking deficiencies, generally called deaf and mute. It is their only mode of communication for conveying messages, so it is very important for others to understand their language. This paper proposes a method, or algorithm, for an application that helps recognize the different signs of Indian Sign Language. The images are of the palm side of the right and left hand and are loaded at runtime. The method has been developed for a single user. Real-time images are captured first and stored in a directory; feature extraction is then performed on the most recently captured image to identify which sign has been articulated by the user, using the SIFT (Scale-Invariant Feature Transform) algorithm. Comparisons are then performed, and the result is produced according to the matched key points between the input image and the image stored for a specific letter in the directory or database; the outputs can be seen in the sections below. There are 26 signs in Indian Sign Language, one for each alphabet letter, of which the proposed algorithm gave 95% accurate results for 9 letters, with their images captured at every possible angle and distance.
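After SIFT descriptors are extracted, keypoint matches are usually filtered before counting them, and Lowe's ratio test is the standard filter. The sketch below shows that test on synthetic descriptor distances; whether this particular paper applies it is an assumption.

```python
# Sketch: Lowe's ratio test on SIFT descriptor distances.  For each query
# keypoint we are given its two nearest distances in the stored image;
# the distance values here are synthetic.

def ratio_test(distances, ratio=0.75):
    """Return indices of keypoints whose best match is clearly better
    than the second-best (best < ratio * second-best)."""
    kept = []
    for i, (best, second) in enumerate(distances):
        if second > 0 and best < ratio * second:
            kept.append(i)
    return kept

# (nearest, second-nearest) distances for four hypothetical keypoints
dists = [(10.0, 40.0), (30.0, 32.0), (5.0, 50.0), (20.0, 21.0)]
print(ratio_test(dists))   # [0, 2] -> two distinctive matches
```

The stored letter image with the most surviving matches would then be reported as the recognized sign.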
Line following is one of the most important aspects of Robotics. A Line Follower Robot is an autonomous robot which is able to follow either a black or white line that is drawn on the surface consisting of a contrasting color. It is designed to move automatically and follow the made plot line. The path can be visible like a black line on a white surface or it can be invisible like a magnetic field. It will move in a particular direction Specified by the user and avoids the obstacle which is coming in the path. Autonomous Intelligent Robots are robot that can perform desired tasks in unstructured environments without continuous human guidance. It is an integrated design from the knowledge of Mechanical, Electrical, and Computer Engineering. LDR sensors based line follower robot design and Fabrication procedure which always direct along the black mark on the white surface. The robot uses several sensors to identify the line thus assisting the bot to stay on the track. The robot is driven by DC motors to control the movements of the wheels.
The project is to ask college related queries and get the responses through a chatbot an Artificial Conversational Entity. This System is a web application which provides answer to the query of the student. Students just have to query through the bot which is used for chatting. Students can chat using any format there is no specific format the user has to follow. This system helps the student to be updated about the college activities.
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002Journal For Research
Coimbatore (11.0168°N,76.9558°E) is a fast developing cosmopolitan city with large number of industries and educational institutions. The development has lead to a large number of vehicles causing heavy traffic. The traffic congestion at Coimbatore has been a major problem which causes traffic jams and accidents. The major reason for traffic has been the mofussil buses that operate in the city. Around 1300 mofussil buses enter into the city, these buses play an important role in traffic congestion. The best solution is to construct a centralized bus stand at the outskirts of the city. This would reduce the traffic, accidents and also leads to development of the outskirts of the city. A suitable location near the city with sufficient road access to connecting cities has been chosen and the bus terminus has been designed, modeled with all facilities and features.
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001Journal For Research
Cyclone is the most commonly used device to separate dust particles from gas and dust flow. The performance of cyclone separator can be measured in terms of collection efficiency and pressure drop. Parameters like Inlet Flow velocity, the particle size distribution in feed, dimensions of inlet and outlet ducts and cyclone affects the performance of cyclone significantly. Various Mathematical models used for calculation of cut off diameter of separator, flow rate, target efficiency and no. of vortex inside the cyclone to design and study to check the performance of existing cyclone separator. Also new dimensions can be design with help of models. Here, in this study the efficiency achieved with Lapple model cumulatively 86.47%.
During past few years, brain tumor segmentation in CT has become an emergent research area in the field of medical imaging system. Brain tumor detection helps in finding the exact size and location of tumor. An efficient algorithm is proposed in this project for tumor detection based on segmentation and morphological operators. Firstly quality of scanned image is enhanced and then morphological operators are applied to detect the tumor in the scanned image. The problem with biopsy is that the patient has to be hospitalized and also the results (around 15%) give false negative. Scan images are read by radiologist but it's a subjective analysis which requires more experience. In the proposed work we segment the renal region and then classify the tumors as benign or malignant by using ANFIS, which is a non-invasive automated process. This approach reduces the waiting time of the patient.
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...Journal For Research
An extensive study of automotive body corrosion was conducted in Mumbai area to track corrosion performance of currently used materials of construction for automotive, especially cars with low end cost. The study consisted of a wide range of areas, starting from a closed car parking to several coastal and other humid regions such as Juhu Beach, Varsova beach and other adjoining areas. Data such as visible perforations, paint blisters, and surface rust were seen especially at vulnerable areas such as doors, mudguards, bonnet areas etc. Also, a comparison was done with low cost cars built with normal steel with those built using galvanized steels.
The main objective of our work is to deliver the goods at proper time by an unmanned drone. An Autonomous drone for delivering the goods such as bombs, medical kids, and foods mainly for military uses. This drone was used for dispatching the bombs and armed guns in battle field. And it is also used for delivering the medicines and foods for soldiers in our country borders.
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024Journal For Research
Since the population of the world is aging rapidly, how to provide appropriate health care to the elderly and unwell people becomes an important issue and draws high attention from medical, academic and industrial fields of the society. The Internet of Things (IoT) drives the evolution of the Internet and is regarded as a great potential to improve quality of life for the surging number of elderly people, significantly. As Android operating system gains immense popularity nowadays, it is a trend to make use of it for the wider access of IoT utility. This project presents a health monitoring system prototype based on IoT, with the increasing use of sensors by medical devices, remote and continuous monitoring of a patient’s health. This network of sensors and other mobile communication devices referred to as the Internet of Things for Medical Devices (IoT-MD), is poised to revolutionize the functioning of the healthcare industry. Untimed medicine administration can always show adverse effects on the health of the patients. The proposed system is designed to help these patients to take the required medicine in the right proportion at the right time. The basic ideology is integrating the principle of IoT with weight-based slot sensing on a normal pillbox. To make it more state-of-the-art, it is inbuilt with a Wi-Fi module for alerting the patient and also the chemist at the needed instant using IoT.
AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...Journal For Research
Modeling molecules as undirected graphs and chemical reactions as graph rewriting operations is a natural and convenient approach to modeling chemistry. Graph grammar rules are most naturally employed to model elementary reactions like merging, splitting, and isomerisation of molecules. In this paper a generic approach for composing graph grammar rules to define a chemically useful rule compositions. We iteratively apply these rule compositions to elementary transformations in order to automatically infer complex transformation patterns.
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfKamal Acharya
The College Bus Management system is completely developed by Visual Basic .NET Version. The application is connect with most secured database language MS SQL Server. The application is develop by using best combination of front-end and back-end languages. The application is totally design like flat user interface. This flat user interface is more attractive user interface in 2017. The application is gives more important to the system functionality. The application is to manage the student’s details, driver’s details, bus details, bus route details, bus fees details and more. The application has only one unit for admin. The admin can manage the entire application. The admin can login into the application by using username and password of the admin. The application is develop for big and small colleges. It is more user friendly for non-computer person. Even they can easily learn how to manage the application within hours. The application is more secure by the admin. The system will give an effective output for the VB.Net and SQL Server given as input to the system. The compiled java program given as input to the system, after scanning the program will generate different reports. The application generates the report for users. The admin can view and download the report of the data. The application deliver the excel format reports. Because, excel formatted reports is very easy to understand the income and expense of the college bus. This application is mainly develop for windows operating system users. In 2017, 73% of people enterprises are using windows operating system. So the application will easily install for all the windows operating system users. The application-developed size is very low. The application consumes very low space in disk. Therefore, the user can allocate very minimum local disk space for this application.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSEDuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced family aircraft in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
Democratizing Fuzzing at Scale by Abhishek Aryaabh.arya
Presented at NUS: Fuzzing and Software Security Summer School 2024
This keynote talks about the democratization of fuzzing at scale, highlighting the collaboration between open source communities, academia, and industry to advance the field of fuzzing. It delves into the history of fuzzing, the development of scalable fuzzing platforms, and the empowerment of community-driven research. The talk will further discuss recent advancements leveraging AI/ML and offer insights into the future evolution of the fuzzing landscape.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Quality defects in TMT Bars, Possible causes and Potential Solutions.PrashantGoswami42
Maintaining high-quality standards in the production of TMT bars is crucial for ensuring structural integrity in construction. Addressing common defects through careful monitoring, standardized processes, and advanced technology can significantly improve the quality of TMT bars. Continuous training and adherence to quality control measures will also play a pivotal role in minimizing these defects.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Automobile Management System Project Report.pdfKamal Acharya
The proposed project is developed to manage the automobile in the automobile dealer company. The main module in this project is login, automobile management, customer management, sales, complaints and reports. The first module is the login. The automobile showroom owner should login to the project for usage. The username and password are verified and if it is correct, next form opens. If the username and password are not correct, it shows the error message.
When a customer search for a automobile, if the automobile is available, they will be taken to a page that shows the details of the automobile including automobile name, automobile ID, quantity, price etc. “Automobile Management System” is useful for maintaining automobiles, customers effectively and hence helps for establishing good relation between customer and automobile organization. It contains various customized modules for effectively maintaining automobiles and stock information accurately and safely.
When the automobile is sold to the customer, stock will be reduced automatically. When a new purchase is made, stock will be increased automatically. While selecting automobiles for sale, the proposed software will automatically check for total number of available stock of that particular item, if the total stock of that particular item is less than 5, software will notify the user to purchase the particular item.
Also when the user tries to sale items which are not in stock, the system will prompt the user that the stock is not enough. Customers of this system can search for a automobile; can purchase a automobile easily by selecting fast. On the other hand the stock of automobiles can be maintained perfectly by the automobile shop manager overcoming the drawbacks of existing system.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), itsignificantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
Immunizing Image Classifiers Against Localized Adversary Attacks
HCI BASED APPLICATION FOR PLAYING COMPUTER GAMES | J4RV4I1014
Journal for Research | Volume 04 | Issue 01 | March 2018
ISSN: 2395-7549
All rights reserved by www.journal4research.org
HCI based Application for Playing Computer Games

Mr. K. S. Chandrasekaran, Assistant Professor, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
A. Varun Ignatius, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
R. Vasanthaguru, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
B. Vengataramanan, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
B. Vishnu Shankar, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
Abstract
This paper describes a command interface for games based on hand gestures and voice commands, defined by postures, movement and location. The system uses computer vision and requires no sensors or markers on the user. For voice commands, a speech recognizer identifies the input from the user, and the resulting command is passed to the game, where the corresponding action takes place. We propose a simple architecture for performing real-time colour detection and motion tracking using a webcam. The motion of the specified colours is tracked, and the resulting actions are given as input commands to the system. We designate the colour blue for motion tracking and green for the mouse pointer. Speech recognition is the process of automatically recognizing a word spoken by a particular speaker based on individual information included in the speech waves. This application helps reduce hardware requirements and can also be implemented in other electronic devices.
Keywords: Computer Vision, Gesture Recognition, Voice Command, Human Computer Interaction
_______________________________________________________________________________________________________
I. INTRODUCTION
Computer games are one of the most successful application domains in the history of interactive systems, even with conventional input devices such as the mouse and keyboard. Existing devices, the mouse for instance, seriously limit the way humans interact with computers. Introducing HCI techniques to the gaming world would revolutionize the way humans play games. The proposed system is a command interface for games based on hand gestures and voice commands, defined by postures, movement and location. It uses a simple webcam and a PC to recognize the user's input, and thus allows natural hand movements for playing games. This effectively reduces the cost of implementing HCI on conventional PCs. We propose a simple architecture for performing real-time colour detection and motion tracking using a webcam. Since many colours are detected, it is important to distinguish the specified colours reliably. The next step is to track the motion of the specified colours; the resulting actions are given as input commands to the system.
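The colour detection step described above can be sketched as follows. This is a minimal illustration in plain NumPy; the function name `colour_mask` and the dominance threshold are our own assumptions, and a real implementation would threshold in HSV space (e.g. with OpenCV) for better robustness to lighting.

```python
import numpy as np

def colour_mask(frame, channel, min_dominance=60):
    """Boolean mask of pixels where `channel` (0=R, 1=G, 2=B) dominates
    both other channels by at least `min_dominance` intensity levels.
    A production system would threshold in HSV via cv2.inRange instead."""
    c = frame[..., channel].astype(int)
    others = [frame[..., i].astype(int) for i in range(3) if i != channel]
    return (c - np.maximum(*others)) >= min_dominance

# Synthetic 4x4 RGB frame: one blue pixel and one green pixel.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[0, 0] = (0, 0, 255)   # blue marker -> motion tracking
frame[3, 3] = (0, 255, 0)   # green marker -> mouse pointer

blue = colour_mask(frame, 2)
green = colour_mask(frame, 1)
```

Because blue and green occupy separate masks, the two markers can be tracked independently in the same frame, which is what lets one colour drive motion and the other drive the pointer.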
Speech technology is a very popular term right now, and speech recognition is the process of automatically recognizing a word spoken by a particular speaker based on individual information included in the speech waves. Using speech recognition, we can give commands to the computer and the computer will perform the given task. The main objective of this project is to construct and develop a system that executes operating-system commands through a speech recognition system capable of recognizing and responding to speech input rather than traditional means of input (e.g. keyboard, mouse), thus saving the user's time and effort. The proposed system makes it easier to increase the interaction between people and computers through speech recognition, especially for those who suffer from health problems; for example, it helps physically challenged persons. This application helps reduce hardware requirements and can also be implemented in other electronic devices. As a case study, we use a Tic-Tac-Toe game.
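For the Tic-Tac-Toe case study, the tracked pointer position must eventually be mapped to a move on the 3x3 board. The helper below is a hypothetical sketch of that mapping; the function name and the frame dimensions are assumptions for illustration, not part of the original system.

```python
def pointer_to_cell(x, y, frame_w=640, frame_h=480):
    """Map a tracked pointer position (in pixels) to a Tic-Tac-Toe
    cell index (row, col) on a 3x3 grid covering the whole frame.
    min() clamps the edge case where x == frame_w - 1 etc."""
    col = min(int(x * 3 / frame_w), 2)
    row = min(int(y * 3 / frame_h), 2)
    return row, col
```

For example, a pointer at the centre of a 640x480 frame lands in the middle cell, so the same code serves both mouse-driven and gesture-driven play.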
II. RELATED WORK
A few works have been proposed recently to use free-hand gestures in games using computer vision. A multimodal multiplayer gaming system combines a small number of postures, their location on a table-based interaction system and speech commands to interact with games, and discusses the results of using this platform with popular games. In this study, a colour pointer has been used for object recognition and tracking. Instead of conventional fingertips, a colour pointer has been used to make object detection easy and fast. Other tools facilitate the use of gesture recognition for applications in general, not only games.
Speech technology is a very popular term right now, and speech recognition is in high demand with many useful applications. Conventional system users rely on pervasive devices such as a mouse and keyboard to interact with the system. Furthermore, people with physical challenges find conventional systems hard to use. A good system places minimal restrictions on interacting with its user. The speed of typing and handwriting is usually one word per second, so speaking may be the fastest form of communication with a computer. Applications with voice recognition can also be very helpful for handicapped people who have difficulties with typing.
III. SPECIFIC REQUIREMENTS
For hand gestures, we use OpenCV. In this study, a colour pointer is used for object recognition and tracking: blue and green colours serve as the colour pointers, and colour detection is performed using built-in functions in OpenCV.
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, originally developed by Intel's research centre. OpenCV is written in C++ and its primary interface is in C++, but it still retains a less comprehensive though extensive older C interface. OpenCV's application areas include facial recognition systems, gesture recognition, human-computer interaction (HCI) and mobile robotics. OpenCV is an open-source computer vision and machine learning software library; it was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. It has C, C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available; full-featured CUDA and OpenCL interfaces are being actively developed. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a template interface that works seamlessly with STL containers.
For voice input, we use the Sphinx package. CMU Sphinx, also called Sphinx for short, is the general term for a group of speech recognition systems. SPHINX was the first large-vocabulary speaker-independent continuous-speech recognizer, using multiple codebooks of various LPC-derived features. Two types of HMMs are used in SPHINX: context-independent phone models and function-word-dependent phone models. On a task using a bigram grammar, SPHINX achieved high word accuracy. This demonstrates the feasibility of speaker-independent continuous-speech recognition and the appropriateness of hidden Markov models for such a task. The Sphinx family includes a series of speech recognizers, and the speech decoders come with acoustic models and sample applications. The available resources also include software for acoustic model training.
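Assuming the recognizer emits plain word strings, spoken commands can be dispatched to game actions with a simple lookup table. The command vocabulary and tuple encoding below are a hypothetical sketch for illustration, not the paper's actual grammar.

```python
# Hypothetical mapping from recognized words (as a Sphinx-style
# recognizer would emit them) to game commands: ("move", dx, dy)
# shifts the board cursor, ("click",) places a mark.
COMMANDS = {
    "up": ("move", 0, -1),
    "down": ("move", 0, 1),
    "left": ("move", -1, 0),
    "right": ("move", 1, 0),
    "select": ("click",),
}

def dispatch(word):
    """Return the game command for a recognized word, or None if the
    word falls outside the small command vocabulary."""
    return COMMANDS.get(word.lower())
```

Keeping the vocabulary this small matters for accuracy: a recognizer constrained to a handful of command words makes far fewer confusions than open-vocabulary dictation.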
These two components are used in the Tic-Tac-Toe game.
IV. DESCRIPTION
Hand Gestures
In the Tic-Tac-Toe game, OpenCV works as follows. The user wears one colour for movement and another colour for click actions on their fingers. The user moves their hands in front of the webcam, and the images are captured using OpenCV.
The captured images undergo segment analysis through background removal, and the segmented image is processed by the recognizer. The recognized movement is mapped to the mouse tracker, and when the second colour is identified, the click action takes place.
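The tracking step of this pipeline can be sketched as follows: given the boolean mask produced by background removal and colour segmentation, the marker's centroid is computed per frame, and its displacement between frames drives the mouse tracker. The function name and the synthetic frames are illustrative assumptions.

```python
import numpy as np

def track_centroid(mask):
    """Centroid (x, y) of the segmented marker, or None when the
    marker is absent.  `mask` is the boolean output of the colour
    segmentation / background-removal stage."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Two consecutive segmented frames: a 2x2 blob moves right by 2 px.
f1 = np.zeros((10, 10), bool); f1[4:6, 2:4] = True
f2 = np.zeros((10, 10), bool); f2[4:6, 4:6] = True
dx = track_centroid(f2)[0] - track_centroid(f1)[0]
```

Returning `None` for an empty mask lets the caller distinguish "marker left the frame" from genuine motion, which keeps the cursor from jumping when segmentation briefly fails.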
Both the use of gestures and having games as an application bring specific requirements to an interface, and analyzing these requirements was one of the most important steps in designing Gestures. Gestures are most often used to relay singular commands or actions to the system, rather than tasks that may require continuous control, such as navigation. Therefore, it is recommended that gestures be part of a multimodal interface. This also brings other advantages, such as decoupling different tasks into different interaction modalities, which may reduce the user's cognitive load, even though gestures have been used for other interaction tasks, including navigation, in the past. All this leads to the requirement that the vocabulary of gestures in each context of the interface, while small, must be as simply and quickly modifiable as possible. Systems that require retraining for each set of possible gestures, for instance, could prove problematic in this case, unless such training could be easily automated. The interface should also accept small variations for each gesture. Demanding that postures and movements be precise, while possibly making the recognition task easier, makes the interaction considerably harder to use and learn, demanding not only that the user remember the gestures and their meanings but also train to perform them precisely, greatly reducing usability. It could be argued that, for particular games, reducing the usability could actually be part of the challenge presented to the player (the challenge could be remembering a large number of gestures, or learning how to execute them precisely, for instance). While the discussion of whether that is good game design practice is beyond the scope of this paper, Gestures opts for the more general goal of increasing usability as much as possible. This agrees with the principle that, for home and entertainment applications, ease of learning, reduced user errors, satisfaction and low cost are among the most important design goals. The system should also allow playing at home with minimal setup time. Players prefer games where they can be introduced to the action as soon as possible, even while still learning the game and the interface. Therefore, the system should not require specific background or lighting conditions, complex
calibration or repeated training. Allowing the use of the gesture-based interface with conventional games is also advantageous to the user, providing new options to enjoy a larger number of games. From the developer's point of view, the system should be as easy as possible to integrate within a game, without requiring specific knowledge of areas such as computer vision or machine learning.
The Abstract Framework
Figure 1 shows a UML activity diagram representing the Gesture object flow model. The Gesture class is responsible for the gesture model, while
GestureAnalysis and GestureRecognition define the interfaces for the classes that implement gesture analysis and recognition.
To these activities are added image capture and segmentation. GestureCapture provides an interface for capturing 2D images from
one or more cameras or from prerecorded video streams (mostly for testing). The images must have the same size, but not necessarily
the same color depth; a device could provide, for instance, one or more color images plus a grayscale image representing a dense
depth map. GestureSegmentation should usually locate, in the original image(s), one or both hands and possibly the head (to determine
relative hand position).
Fig. 1: Hand Gesture Process
Figure 1 shows that the usual flow of information in Gestures at each time step is as follows: one or more images serve as input
to the image capture module, which makes these images available as an OpenCV Image object. The segmentation uses this image
and provides a segmented image as an object of the same class (and same image size, but not necessarily color depth). Based on
the segmented image, the analysis provides a collection of features as a GestureFeatureCol object which is in turn used by the
recognition to output a gesture.
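The per-time-step flow above can be sketched as a set of Python interfaces mirroring the class names in Figure 1. This is a minimal sketch under assumed method names; the stub implementations and return values are invented purely to show the flow end to end.

```python
from abc import ABC, abstractmethod

class GestureCapture(ABC):
    @abstractmethod
    def capture(self):              # returns an image object
        ...

class GestureSegmentation(ABC):
    @abstractmethod
    def segment(self, image):       # returns a segmented image of the same size
        ...

class GestureAnalysis(ABC):
    @abstractmethod
    def analyse(self, segmented):   # returns a collection of features
        ...

class GestureRecognition(ABC):
    @abstractmethod
    def recognise(self, features):  # returns the recognised gesture
        ...

def step(cap, seg, ana, rec):
    """One time step of the pipeline: capture -> segment -> analyse -> recognise."""
    return rec.recognise(ana.analyse(seg.segment(cap.capture())))

# Trivial stand-ins to exercise the pipeline.
class CamStub(GestureCapture):
    def capture(self): return "raw-frame"
class SegStub(GestureSegmentation):
    def segment(self, image): return f"segmented({image})"
class AnaStub(GestureAnalysis):
    def analyse(self, segmented): return {"hand_area": 42}
class RecStub(GestureRecognition):
    def recognise(self, features):
        return "open-hand" if features["hand_area"] > 10 else "fist"

print(step(CamStub(), SegStub(), AnaStub(), RecStub()))
```

Because each stage is behind an interface, a module (say, the segmenter) can be swapped without touching the rest of the pipeline, which is exactly the decoupling the framework aims for.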
GestureFeatureCol is a collection of GestureFeature objects. A GestureFeature contains an identifier string describing the feature
and either a scalar, an array of values (the most common case) or an image (useful, for instance, for features in the frequency domain).
GestureFeature already defines several identifiers for the features most often found in the gesture recognition literature, to
facilitate the interface between analysis and recognition, but user-created identifiers may also be used. Input is an optional module
that accompanies but is actually separate from Gestures. It is responsible for facilitating, in a very simple way, both multimodal
input and integration with games or engines not necessarily aware of Gestures.
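Based only on the description above, GestureFeature and GestureFeatureCol might look like the following sketch; the field names, identifier strings and lookup helper are assumptions, not the framework's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class GestureFeature:
    identifier: str   # e.g. a predefined ID such as "HAND_CENTROID" (hypothetical)
    value: Any        # a scalar, an array of values, or an image object

@dataclass
class GestureFeatureCol:
    features: List[GestureFeature] = field(default_factory=list)

    def get(self, identifier):
        """Return the first feature with this identifier, or None."""
        return next((f for f in self.features if f.identifier == identifier), None)

# Analysis would fill such a collection; recognition would consume it.
col = GestureFeatureCol([GestureFeature("HAND_CENTROID", (120, 80)),
                         GestureFeature("HAND_AREA", 42.0)])
print(col.get("HAND_AREA").value)
```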
It simply translates its input, which is a description (a numerical or string ID or an XML description, for instance) that may be
supplied either by Gestures or any other system (and here lies the possibility of multimodal interaction), into another type of input,
such as a system input (like a key down event) or input data to a particular game engine. In one of the tests, for instance, gestures
are used for commands and a dancing mat is used for navigation. Because this architecture consists mostly of interfaces, it is
possible to create a single class that, through multiple inheritance, implements the entire system functionality. This is usually
considered bad practice in object orientation and is actually one of the reasons why aggregation is preferred to inheritance.
There are design patterns that could have been used to force the use of aggregation and avoid multiple inheritance,
but Gestures allows it for a reason. Gesture recognition may be a very costly task in terms of processing and must be
done in real time for the purpose of interaction. Many algorithms may be better optimized for speed when performing more than
one task (such as segmentation and analysis) together.
Furthermore, analysis and recognition are very tightly coupled in some algorithms and forcing their separation could be difficult.
So, while it is usually recommended to avoid using multiple inheritance and to implement each task in a different class, making it
much easier to exchange one module for the other or to develop modules in parallel and in teams, the option to do otherwise exists,
and for good reason.
Speech Recognition
For voice input, we use the Sphinx library. Sphinx provides built-in functions which are used to detect the speech input
given by a particular user. In this case, we give the position number as the input for the TicTacToe game through a voice command.
The speech recognizer recognizes the user's input and stores it, so that it can be used as an input to the TicTacToe game.
Fig. 2: Speech Recognition Process
For voice input, the human voice is sampled at a rate of 16,000 samples per second and should be given in live mode. It should be noted
that the system may have difficulty recognizing accented English, hence it is recommended to give input as native English
speakers would. Platform speed directly affected our choice of a speech recognition system for this work.
The recognizer is used to recognize the voice commands as spoken by the user, with the voice input given through the microphone. The
speech recognition process decodes this input, identifies the command and generates output accordingly. The dictionary file
is a separate file that describes the phonics of each word; it is a collection of predefined words that are relevant to various activities,
with each word separated into syllables. The output from the recognizer (i.e., the recognition result) is cross-referenced with the
dictionary file and the correct word is returned as the result. The decoder module, which is built into the Sphinx system, then parses
this word and converts it into the equivalent text. This text is then used by the command executer to execute the required functions.
The Microphone class, provided by Sphinx for microphone control in Java, implements the essential functions for working
with a microphone connected to the system. The startRecording() function in the Microphone class can be used to capture audio
from the microphone. The recognize() function in the Recognizer class returns an object of the Result class, which can be converted
to a text command.
Speech recognition is the process of identifying the speech in recorded audio by using the phonetics dictionary. The
phonetics for each word are stored in the dictionary file. The recognize() function in the Recognizer class uses the grammar file to
recognize speech, if any, in the audio recorded by the microphone.
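The dictionary cross-referencing described above can be illustrated with a toy Python sketch. The entries, phone symbols and command mapping below are made up for illustration; a real Sphinx dictionary file is far larger and uses its own phoneme set.

```python
# Toy phonetics dictionary: word -> phone sequence (illustrative entries only).
DICTIONARY = {
    "ONE":   ["W", "AH", "N"],
    "TWO":   ["T", "UW"],
    "THREE": ["TH", "R", "IY"],
}

def lookup(phones):
    """Cross-reference a recognised phone sequence with the dictionary.
    Returns the matching word, or None if the phones match no entry."""
    for word, phonetics in DICTIONARY.items():
        if phonetics == phones:
            return word
    return None

# The command executer then maps the decoded word to a board position
# (hypothetical mapping for the TicTacToe voice commands).
COMMANDS = {"ONE": 1, "TWO": 2, "THREE": 3}

word = lookup(["T", "UW"])   # phones produced by the recognizer
print(word, COMMANDS[word])
```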
A different approach that has been studied very well is the analysis of the human voice as a means of human-computer interaction.
The topic of speech processing has been studied since the 1960s and is very well researched, in particular speech recognition.
Speaker Recognition: the goal of a speaker recognition system is to determine to whom a recorded voice sample belongs. To achieve
this, prospective users of the system first need to enroll their voice, which is used to compute a so-called model of the user. This model
is later used to perform the matching when the system needs to decide the owner of a recorded voice section. Such a system can be
built text-dependent, meaning that the user must speak the same phrase during enrollment and usage; this is similar to setting a
password on a computer, which is then required to gain access. A recognition system can also operate text-independent, so that the
user may speak a different phrase during enrollment and usage. This is more challenging to perform but allows more flexible use of
the system. Speaker recognition is mainly used for two tasks: Speaker verification assumes that a user claims to be of a certain identity. The
system is then used to verify or refute this claim. A possible use case is access control for a building, where the identity claim
could be provided by a secondary system such as smart cards or fingerprints. Speaker identification does not assume
a prior identity claim and tries to determine to whom a certain voice belongs. Such a system can be built for a closed group,
where all possible users are known to the system beforehand, or for an open group, meaning that not all possible
users of the system are already enrolled. Most of the time this is achieved by building a model for an "unknown speaker": if the
recorded voice sample fits this unknown model, the speaker is not yet enrolled. The system can then either ignore the voice
or build a new model for this speaker, so that he or she can later be identified.
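Open-set identification with an "unknown speaker" fallback can be illustrated with a toy sketch. The feature vectors, similarity measure and threshold below are invented for illustration; real systems use far richer acoustic features and statistical models.

```python
def similarity(a, b):
    """Toy similarity: inverse of the mean absolute difference of two vectors."""
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)) / len(a))

# One enrolled model per known speaker (hypothetical 3-dimensional features).
MODELS = {"alice": [0.9, 0.1, 0.4], "bob": [0.2, 0.8, 0.7]}
THRESHOLD = 0.5   # below this, the sample is attributed to the "unknown" model

def identify(sample):
    """Score the sample against every enrolled model; fall back to 'unknown'."""
    best, score = max(((name, similarity(sample, m)) for name, m in MODELS.items()),
                      key=lambda t: t[1])
    return best if score >= THRESHOLD else "unknown"

print(identify([0.85, 0.15, 0.45]))  # close to alice's model
print(identify([5.0, 5.0, 5.0]))     # fits no enrolled model
```

An open-group system could take the "unknown" outcome as a cue to enroll a new model for that speaker, as described above.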
Speech Recognition
The idea behind speech recognition is to provide a means to transcribe spoken phrases into written text. Such a system has many
versatile capabilities. From controlling home appliances as well as light and heating in a home automation system, where only
certain commands and keywords need to be recognized, to full speech transcription for note keeping or dictation. There exist many
approaches to achieve this goal. The simplest technique is to build a model for every word that needs to be recognized; as
shown in the section on pattern matching, this is not feasible for larger vocabularies such as a whole language. Apart from constraining
the accepted phrases, it must be considered whether the system is used only by a single individual (so-called speaker-dependent) or
whether it should perform equally well for a broad spectrum of people (speaker-independent).
Speech Processing
Although speaker recognition and speech recognition systems achieve different goals, both are built from the same general
structure. This structure can be divided into following stages:
Signal generation models the process of speaking in the human body.
Signal capturing & preconditioning deals with the digitalization of voice. This stage can also apply certain filters and
techniques to reduce noise or echo.
Feature extraction takes the captured signal and extracts only the information that is of interest to the system.
Pattern matching then tries to determine to which word or phrase the extracted features belong and produces the system
output.
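The four stages above can be sketched as a chain of simple functions. The signal values, the energy feature and the nearest-template matcher are toy stand-ins chosen only to make the staged structure concrete.

```python
def capture_and_precondition(raw):
    """Digitalisation plus a crude noise gate dropping near-silent samples."""
    return [s for s in raw if abs(s) > 0.05]

def extract_features(signal):
    """Keep only what the matcher needs; here, total signal energy."""
    return sum(s * s for s in signal)

def pattern_match(features, templates):
    """Pick the template whose stored energy is closest to the input's."""
    return min(templates, key=lambda w: abs(templates[w] - features))

# Hypothetical per-word templates and a captured sample stream.
templates = {"yes": 1.0, "no": 0.2}
raw = [0.01, 0.4, -0.5, 0.02, 0.6]   # small values stand in for noise

word = pattern_match(extract_features(capture_and_precondition(raw)), templates)
print(word)
```

Signal generation, the first stage, models the human vocal apparatus and has no software counterpart here; the remaining three stages map directly onto the three functions.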
V. CONCLUSION
The proposed system architecture will completely change the way gaming applications are played. The system makes
use of a web camera, which is an integral part of any standard system, eliminating the necessity of additional peripheral devices.
Our endeavor for object detection and image processing in OpenCV for the implementation of the gaming console proved to be
practically successful: a person's motions are tracked and interpreted as commands. Most gaming applications require additional
hardware which is often very costly. The motive was to create this technology in the cheapest possible way under a standardized
operating system using a set of wearable gesture interfaces. This technique can be further extended to other services. In
speech recognition, the proposed system provides better performance and experience in user interaction, and it reduces the time
involved in conventional interactive methods. Voice recognition is irrefutably the future of human-computer interaction, and our
study proposes a new approach to utilize it for an enhanced user experience and to extend the ability of computer interaction
to motor-impaired users.