This document describes a virtual mouse system that uses computer vision and OpenCV to detect hand gestures from video input and use those gestures to control cursor movements and mouse clicks. Specifically, it tracks colored markers on fingertips to determine pointer position and recognizes gestures like clicking to emulate mouse functions without physical hardware. The system is implemented using Python libraries like OpenCV, MediaPipe, and PyAutoGUI to process video frames in real-time, identify hand and finger positions, and map those positions to mouse events. This allows users to control the computer interface entirely through natural hand motions detected by a webcam.
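Several of the systems summarized in this listing share the same core step: a fingertip position detected in the camera frame must be mapped to screen coordinates and smoothed before being handed to a library such as PyAutoGUI. The sketch below illustrates that step only; the frame/screen sizes and the "active region" margin are illustrative values, not taken from any of the papers.

```python
def _interp(v, src_lo, src_hi, dst_lo, dst_hi):
    # Linear map with clamping (np.interp behaves the same way).
    t = (v - src_lo) / (src_hi - src_lo)
    t = min(max(t, 0.0), 1.0)
    return dst_lo + t * (dst_hi - dst_lo)

def map_to_screen(x, y, frame_w=640, frame_h=480,
                  screen_w=1920, screen_h=1080, margin=100):
    # An inner "active region" of the camera frame, inset by `margin`
    # pixels, is stretched to the full screen so the fingertip never
    # has to reach the frame edges to move the cursor into a corner.
    sx = _interp(x, margin, frame_w - margin, 0, screen_w)
    sy = _interp(y, margin, frame_h - margin, 0, screen_h)
    return sx, sy

def smooth(prev, target, factor=5):
    # Simple exponential-style smoothing to damp jitter in the raw
    # detections; a real loop would feed the result to pyautogui.moveTo().
    return prev + (target - prev) / factor
```

In a real pipeline these functions would be called once per captured frame, with the fingertip position coming from the colour-marker or landmark detector.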
Virtual Mouse Control Using Hand Gesture Recognition – IRJET Journal
This document describes a system for controlling a computer mouse using hand gestures without any physical mouse device. The system uses a webcam to capture video of the user's hand. Computer vision and machine learning algorithms are used to recognize hand gestures in the video frames. Specific gestures like raising different fingers are mapped to mouse actions like left click, right click, and cursor movement. The system is implemented using Python and OpenCV libraries to process the video and detect hand and finger positions. It allows for fully controlling the computer mouse interface through natural hand gestures performed in front of the webcam.
This document describes an AI-based virtual mouse system that is operated using hand gestures detected by a webcam, without needing to physically touch a mouse or other device. The system uses computer vision and MediaPipe to detect hand landmarks and track finger positions in real-time video input. By analyzing which fingers are raised, it can determine mouse movement or click functions. The goal is to create a touchless input method that could be useful during the pandemic by reducing virus transmission through shared surfaces. The virtual mouse is implemented using OpenCV and other Python libraries to process video, smooth the output, and perform mouse functions based on hand and finger tracking.
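The "which fingers are raised" test that several of these systems rely on typically reduces to a comparison on MediaPipe's 21 hand landmarks: a finger counts as raised when its tip lies above its PIP joint in image coordinates (where y grows downward). A minimal sketch, assuming the landmarks have already been extracted as (x, y) pairs:

```python
# Tip/PIP index pairs follow MediaPipe Hands' standard 21-point numbering:
# 8/6 index, 12/10 middle, 16/14 ring, 20/18 pinky. The thumb is usually
# handled separately with an x-axis comparison and is omitted here.
FINGER_TIPS = [8, 12, 16, 20]
FINGER_PIPS = [6, 10, 14, 18]

def fingers_up(landmarks):
    """landmarks: list of 21 (x, y) tuples in image coordinates.
    Returns one bool per finger (index, middle, ring, pinky):
    True when the fingertip is above its PIP joint, i.e. raised."""
    return [landmarks[tip][1] < landmarks[pip][1]
            for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)]
```

The resulting pattern of raised fingers is then matched against the gesture table (e.g. index only = move, index + middle = click).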
Controlling Computer using Hand Gestures – IRJET Journal
This document describes a research project on controlling a computer using hand gestures. The researchers created a real-time gesture recognition system using convolutional neural networks (CNNs). They developed a dataset of 3000 training images of 10 different hand gestures for tasks like opening apps. A CNN model was trained to detect hands in images and recognize gestures. The model achieved 80.4% validation accuracy and was able to successfully perform operations like opening WhatsApp, PowerPoint and other apps based on detected gestures in real-time. The system provides a cost-effective and contactless way of interacting with computers using hand gestures only.
Virtual Mouse Control Using Hand Gestures – IRJET Journal
This document describes a system for controlling a computer mouse using hand gestures detected by a webcam. The system uses computer vision and image processing techniques to track hand movements and identify gestures. It analyzes video frames from the webcam to extract the hand contour and detect gestures. Specific gestures are mapped to mouse functions like movement, left/right clicks, and scrolling. The system aims to provide an intuitive, hands-free way to control the mouse for physically disabled people or those uncomfortable with touchpads. It could help the millions affected by carpal tunnel syndrome annually in India. The document outlines the system architecture, methodology including hand tracking and gesture recognition, and concludes the technology provides better human-computer interaction without requiring a physical mouse.
VIRTUAL PAINT APPLICATION USING HAND GESTURES – IRJET Journal
This document presents a virtual paint application that uses hand gesture recognition for real-time drawing or sketching. The application uses MediaPipe and OpenCV to track hand movements and joints in real-time. It identifies different gestures like selecting tools, writing on the canvas, and clearing the canvas. This allows for an intuitive human-computer interaction method without any physical devices. The application provides a dust-free classroom solution and makes online lessons more engaging. It analyzes video frames from a webcam to detect hand landmarks and identify gestures based on finger positions. This allows users to draw on screen by simply moving their hands.
Smart Presentation Control by Hand Gestures Using Computer Vision and Google’... – IRJET Journal
This document describes a smart presentation control system using hand gesture recognition with computer vision and Google's MediaPipe framework. The system uses a webcam to capture videos and photos of hand gestures as input. MediaPipe is used to detect hand landmarks and gestures in real-time. Various hand gestures like changing slides, drawing on slides, and erasing can be used to control the presentation without needing a keyboard or mouse. The system aims to provide a natural and intuitive human-computer interaction experience for presentation control through hand gesture recognition.
Virtual Mouse Using Hand Gesture Recognition – IRJET Journal
The document describes a virtual mouse system that uses hand gesture recognition instead of a physical mouse device. The system uses a webcam to capture hand movements and detects hand landmarks using MediaPipe. Various hand gestures correspond to mouse functions like move, click, scroll, etc. The system is portable, low-cost, and provides a user-friendly way to control the computer without additional hardware. It aims to overcome limitations of prior systems that required colored fingertips or multiple cameras. The virtual mouse was implemented using libraries like OpenCV and PyAutoGUI and tested successfully.
Gesture Based Interface Using Motion and Image Comparison – ijait
This paper presents a new approach for moving the mouse and implementing its functions using a real-time camera. Most existing technologies mainly depend on changing mouse hardware features, such as repositioning the tracking ball or adding more buttons; here we propose to change that design. We use a camera, a colored object, image comparison, and motion detection to control mouse movement and implement its functions (right click, left click, scrolling, and double click).
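The colour-marker approach described in this entry usually reduces to thresholding each frame for the marker colour and taking the centroid of the matching pixels; in OpenCV this would be cv2.inRange followed by cv2.moments. The dependency-free version below is an illustrative sketch of that same logic over a frame represented as rows of pixel tuples:

```python
def marker_centroid(frame, lo, hi):
    """frame: rows of (r, g, b) pixel tuples; lo/hi: per-channel bounds.
    Returns the (x, y) centroid of pixels whose every channel falls
    inside [lo, hi], or None when no pixel matches."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if all(l <= c <= h for c, l, h in zip(px, lo, hi)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

Tracking the centroid frame-to-frame gives the cursor position; a second marker, or the marker disappearing (e.g. covered by the thumb), can signal a click.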
This document describes a technical seminar presented on a real-time AI virtual mouse system using computer vision. The system allows users to control mouse functions like left clicks, right clicks, and scrolling through hand gestures detected by a webcam, without needing a physical mouse. It works by using the MediaPipe and OpenCV libraries to detect hand landmarks and track hand movements. Key gestures like finger position and distance are used to map to different mouse functions like clicking or scrolling. The system aims to provide a more convenient and hands-free way to control the computer.
This document describes a virtual mouse system that uses hand gestures as detected by a webcam to control the computer cursor and perform mouse functions like clicking and dragging. The proposed system aims to overcome limitations of physical mice like requiring batteries, wireless receivers or specific surfaces. It analyzes video frames from the webcam using OpenCV and MediaPipe to detect hand positions and gestures. Mouse movements and actions are mapped to specific gestures. The system was tested in different lighting conditions and distances from the webcam and was found to work effectively in most scenarios. Further improvements to accuracy and adapting it to mobile devices are discussed as future work.
IRJET – Finger Gesture Recognition Using Linear Camera – IRJET Journal
This document describes a system for finger gesture recognition using a linear camera. The system aims to allow users to control basic computer functions through finger gestures as an alternative to using a mouse or keyboard. It works by using image processing techniques on video captured by the linear camera to detect the user's finger movements and map them to cursor movements or actions. The system is broken down into four main stages - skin detection to identify finger regions, finger contour extraction, finger tracking, and gesture recognition to identify gestures and map them to computer functions like play, pause, volume control etc. This vision-based approach allows for contactless control and could help users in situations where mouse or keyboard is unavailable.
A Survey Paper on Controlling Computer using Hand Gestures – IRJET Journal
This document summarizes a survey paper on controlling computers using hand gestures. It discusses various techniques that have been used for hand gesture recognition in previous research papers. The paper reviews literature on hand gesture recognition methods based on sensor technology and computer vision. It describes applications of hand gesture recognition such as controlling media playback, scrolling web pages, and presenting slides. Common challenges with hand gesture recognition are also mentioned, such as dealing with complex backgrounds and lighting conditions. The goal of the paper is to perform a literature review on prominent techniques, applications, and difficulties in controlling computers using hand gestures.
This document describes a virtual mouse system that uses computer vision and color tracking to replace a conventional mouse. The system tracks colored objects like a red or blue object held in the user's hand to map hand movements to mouse movements and clicks. It analyzes image frames from a webcam to detect pixel colors and scale the detected positions to match screen coordinates. This allows for freer motion than a physical mouse and reduces costs compared to alternatives like touchscreens. The system is implemented using OpenCV for image processing and runs entirely in software on the user's computer.
This document summarizes a survey on detecting hand gestures to be used as input for computer interactions. The introduction discusses how graphical user interfaces are being upgraded to provide more efficient visual interfaces using touchscreen technologies. However, these technologies are still too expensive for laptops and desktops. The paper then proposes developing a virtual mouse system using a webcam to capture hand movements and perform mouse functions like left and right clicks. The methodology section outlines the key steps of the proposed system which includes skin detection, contour extraction from images, and mapping detected hand gestures to cursor movements and controls. Finally, the conclusion discusses the goal of making this technology cheaper and more accessible to use as a standard input device without additional hardware requirements.
This document presents a hand gesture controlled mouse system using machine learning. It has 4 modules: 1) Hand tracking to detect hand landmarks using a webcam, 2) Volume control using the distance between thumb and forefinger, 3) Virtual painting by drawing on screen, 4) Mouse control using index finger movements to move the cursor. The system was able to successfully track hand gestures and use them to control mouse functions and volume. Future applications could include use in education for interactive teaching and by people with disabilities. Some limitations are need for adequate lighting and inability to track multiple hands.
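The volume-control module described above maps the thumb-to-forefinger distance onto a volume level. A minimal sketch of that mapping; the distance bounds (20 and 200 pixels) are illustrative values, not figures from the paper:

```python
import math

def pinch_to_volume(thumb, index, d_min=20.0, d_max=200.0):
    """thumb, index: (x, y) fingertip positions in pixels.
    Maps the pinch distance linearly onto a 0-100 volume level,
    clamping below d_min and above d_max."""
    d = math.hypot(index[0] - thumb[0], index[1] - thumb[1])
    d = min(max(d, d_min), d_max)
    return (d - d_min) / (d_max - d_min) * 100.0
```

The returned level would then be passed to a platform volume API; the two fingertip positions come from hand-landmark detection as in the other modules.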
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNING – IRJET Journal
The document discusses a slide presentation controlled by hand gesture recognition using machine learning. It describes how different hand gestures can be used to control slide presentation functions, such as using the index finger to draw, three fingers to undo drawing, the little finger to move to the next slide, and the thumb to move to the previous slide. The system uses a camera and machine learning techniques like neural networks to recognize hand gestures in real-time and map them to slide navigation and other presentation controls.
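Once the raised/lowered state of each finger is known, the gesture-to-action mapping described above is essentially a lookup table. A sketch mirroring that mapping (the action names and the thumb/finger encoding are illustrative, not taken from the paper):

```python
# Finger state as (thumb, index, middle, ring, pinky), 1 = raised.
ACTIONS = {
    (0, 1, 0, 0, 0): "draw",            # index finger only
    (0, 1, 1, 1, 0): "undo",            # three fingers
    (0, 0, 0, 0, 1): "next_slide",      # little finger only
    (1, 0, 0, 0, 0): "previous_slide",  # thumb only
}

def action_for(fingers):
    # Unrecognized combinations fall through to an idle state.
    return ACTIONS.get(tuple(fingers), "idle")
```

Keeping the mapping in a table like this makes it easy to add or rebind gestures without touching the recognition code.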
SixthSense is a name for extra information supplied by a wearable computer, as in devices such as EyeTap (Mann), Telepointer (Mann), and "WuW" (Wear yoUr World) by Pranav Mistry.
This document summarizes a research project that aims to develop a virtual mouse system using hand gesture recognition with a webcam. The system analyzes hand contours extracted from the webcam footage to identify fingers and gestures. It then maps different gestures to mouse functions like cursor movement, left/right clicks, and scrolling. The researchers believe this could provide a more natural interface than physical mice and benefit users who have difficulty using mice. It describes the design and implementation of the gesture recognition system, which involves steps like color detection, contour extraction, and identifying fingers from convexity defects.
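Counting fingers from convexity defects, as mentioned above, is usually done by measuring the angle at each defect's deepest point: a sharp angle indicates a valley between two extended fingers. In OpenCV the defects would come from cv2.convexityDefects on the hand contour; the geometry below is a self-contained sketch assuming the defect points have already been converted to (x, y) tuples.

```python
import math

def count_fingers(defects):
    """defects: list of (start, end, far) point triples, where `far`
    is the deepest point of the defect. A defect whose inner angle at
    `far` is below 90 degrees is treated as a gap between fingers;
    n gaps imply n + 1 extended fingers."""
    gaps = 0
    for s, e, f in defects:
        a = math.dist(e, f)
        b = math.dist(s, f)
        c = math.dist(s, e)
        # Law of cosines for the angle at the far point.
        angle = math.acos((a * a + b * b - c * c) / (2 * a * b))
        if angle < math.pi / 2:
            gaps += 1
    return gaps + 1 if gaps else 0
```

In practice this count is combined with a contour-area check so that noise defects on a closed fist do not register as fingers.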
Design of Image Projection Using Combined Approach for Tracking – IJMER
Over the years, the techniques and methods used to interact with computers have evolved significantly: from the primitive use of punch cards to the latest touch-screen panels, interaction with the system has improved vastly. Many new projection and interaction technologies can reshape our perception and interaction methodologies, and projection technology is particularly useful for creating various geometric displays. In earlier generations, projector technology was used to project images and videos onto a single screen with large, bulky setups. To overcome these limitations we are designing "Wireless Image Projection Tracking," a system that uses infrared (IR) technology to track the body in the IR range and uses its movements for image orientation and manipulations such as zoom, tilt/rotate, and scale. We present a method of mapping the position and orientation of an IR light source to an image. The system can track single or multiple IR light source positions and can also be used effectively to view the image projection in 3D. Extensions of this technology could provide further tracking capabilities to implement touch-screen features for commercial applications.
Mouse Cursor Control Hands Free Using Deep Learning – IRJET Journal
1) The document presents a study on developing a mouse-free cursor control system using facial movements and deep learning.
2) The system uses facial landmark detection and analysis of facial ratios like the eye aspect ratio and mouth aspect ratio to detect facial movements such as winks, blinks, and mouth movements, and to predict cursor movement directions.
3) Support vector machines and histogram of oriented gradients algorithms are used to train the system to classify facial features and link them to cursor control functions like moving, clicking, and scrolling.
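The eye aspect ratio mentioned above has a standard closed form over the six landmarks outlining an eye: the mean of the two vertical distances divided by the horizontal distance. The ratio stays roughly constant while the eye is open and drops toward zero on a blink. A sketch, assuming the common 68-point facial landmark ordering (p1..p6 around the eye); any blink threshold applied to the result would be a tuned value, not one stated in the paper:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6
    (p1/p4 are the horizontal corners, p2/p6 and p3/p5 the vertical
    pairs). Returns the eye aspect ratio."""
    v1 = math.dist(eye[1], eye[5])   # first vertical distance
    v2 = math.dist(eye[2], eye[4])   # second vertical distance
    h = math.dist(eye[0], eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)
```

A blink or wink is then detected when the ratio stays below a threshold for a few consecutive frames, which filters out single-frame detection noise.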
IRJET - Chatbot with Gesture based User Input – IRJET Journal
The document describes a proposed system for building a chatbot that takes gesture-based user input. The system would use either a deep learning model or convexity defect algorithm to recognize gestures from video input. Recognized gestures would be mapped to text commands and fed into a keyword-based chatbot. The chatbot would execute commands or responses based on the gesture input. The proposed system aims to provide a natural interface for applications helping deaf/mute users or in places like museums. It reviews related work on gesture recognition and discusses the technical components and workflow of the envisioned chatbot system.
Computer vision based human computer interaction using color detection techni... – Chetan Dhule
This document describes a method for controlling a computer using hand and finger gestures detected through a webcam, without the need for specialized hardware or gesture recognition training. The method tracks color markers attached to fingers to detect finger motion in real-time and uses the motion to control the mouse pointer position and clicks. An application was created with a graphical user interface that allows setting the marker color and controls the mouse based on finger movements detected by calculating pixel value changes of the colored markers in video frames. The method provides a low-cost way to interact with a computer using natural hand gestures without lag compared to existing gesture recognition methods.
The document describes the components and working of Sixth Sense technology, which is a wearable gestural interface. It consists of a camera, projector, mirror, smartphone, and color markers on the fingertips. The camera captures images and tracks hand gestures via the color markers. The smartphone processes the data and searches the internet. It projects information onto surfaces using the projector and mirror. The technology bridges the physical and digital world by recognizing objects and displaying related information using hand gestures.
Virtual Automation using Mixed Reality and Leap Motion Control – IRJET Journal
This document discusses using leap motion technology and mixed reality to control a robot virtually. It proposes a robot system that can be operated solely through human gestures detected by a leap motion sensor, without any other external devices. The robot's movements and tasks would be displayed to the user through an augmented reality mobile app and virtual reality headset. The system aims to provide an immersive experience for applications like shopping assistance, industrial training simulations, and inquiry-based learning. It describes the robot architecture, use of a controller like Arduino, augmented reality development using Unity 3D, and virtual reality using Google Cardboard. Experimental results showed the gesture controls and mixed reality interfaces worked accurately and provided a realistic experience to the user.
The document discusses the Sixth Sense technology, a wearable gestural interface that augments the physical world with digital information. It can project information onto surfaces using a camera, projector and mirror. The technology recognizes hand gestures to allow interactions like getting maps, photos and product information without devices. It offers advantages like connectivity and accessibility but faces issues like privacy, health effects and lack of durability. The technology may transform fields like education, e-commerce and assistance for disabled people.
Sixth Sense technology is a mini-projector coupled with a camera and a cellphone, which acts as the computer and connects to the cloud and all the information stored on the web. Sixth Sense can also obey hand gestures. The camera instantly recognizes objects around a person, and the micro-projector overlays the information on any surface, including the object itself or the user's hand. The user can then access or manipulate the information with the fingers: to make a call, extend a hand in front of the projector and a number pad appears to dial; to know the time, draw a circle on the wrist and a watch appears; to take a photo, just frame the scene with a square made of the fingers and the system captures it, and the photos can later be organized with hand movements in the air. The device has a huge number of applications, and it is portable and easy to carry since it can be worn around the neck.

The drawing application lets the user draw on any surface by observing the movement of the index finger. Maps can also be displayed anywhere, with the ability to zoom in and out. The camera also lets the user take pictures of the scene being viewed and later arrange them on any surface. Some of the more practical uses involve reading a newspaper: videos can be shown in place of the printed photos, or live sports updates can appear while reading.

The device can also show the arrival, departure, or delay time of a plane directly on the ticket. For book lovers it is nothing less than a blessing: open any book and find its Amazon rating, and pick any page to get additional information on the text, comments, and many more add-on features.
An analysis of desktop control and information retrieval from the internet us... – eSAT Journals
Abstract
As the use of computers is ever increasing, new and easier methods of interacting with the system are needed. Augmented reality makes any application more interactive and lively, and therefore easier and more attractive to use. The conventional mouse and keyboard can be replaced by the human hand for interacting with the computer, and adding augmented reality makes the interaction more attractive still. The same concept of using the hand as an interaction device with augmented reality can also be applied to retrieving information from the internet. This will make our daily computer-related tasks easier and more fun, increasing productivity.
Keywords: Human Computer Interaction, Desktop Control, Information Access, Augmented Reality, Image Processing, Information Retrieval, Image Formation
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE... – IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE – IRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. Overall, the study evaluated how R factors influence the seismic response of RC framed structures of varying heights.
This document describes a technical seminar presented on a real-time AI virtual mouse system using computer vision. The system allows users to control mouse functions like left clicks, right clicks, and scrolling through hand gestures detected by a webcam, without needing a physical mouse. It works by using the MediaPipe and OpenCV libraries to detect hand landmarks and track hand movements. Key gestures like finger position and distance are used to map to different mouse functions like clicking or scrolling. The system aims to provide a more convenient and hands-free way to control the computer.
This document describes a virtual mouse system that uses hand gestures as detected by a webcam to control the computer cursor and perform mouse functions like clicking and dragging. The proposed system aims to overcome limitations of physical mice like requiring batteries, wireless receivers or specific surfaces. It analyzes video frames from the webcam using OpenCV and MediaPipe to detect hand positions and gestures. Mouse movements and actions are mapped to specific gestures. The system was tested in different lighting conditions and distances from the webcam and was found to work effectively in most scenarios. Further improvements to accuracy and adapting it to mobile devices are discussed as future work.
IRJET- Finger Gesture Recognition Using Linear CameraIRJET Journal
This document describes a system for finger gesture recognition using a linear camera. The system aims to allow users to control basic computer functions through finger gestures as an alternative to using a mouse or keyboard. It works by using image processing techniques on video captured by the linear camera to detect the user's finger movements and map them to cursor movements or actions. The system is broken down into four main stages - skin detection to identify finger regions, finger contour extraction, finger tracking, and gesture recognition to identify gestures and map them to computer functions like play, pause, volume control etc. This vision-based approach allows for contactless control and could help users in situations where mouse or keyboard is unavailable.
A Survey Paper on Controlling Computer using Hand GesturesIRJET Journal
This document summarizes a survey paper on controlling computers using hand gestures. It discusses various techniques that have been used for hand gesture recognition in previous research papers. The paper reviews literature on hand gesture recognition methods based on sensor technology and computer vision. It describes applications of hand gesture recognition such as controlling media playback, scrolling web pages, and presenting slides. Common challenges with hand gesture recognition are also mentioned, such as dealing with complex backgrounds and lighting conditions. The goal of the paper is to perform a literature review on prominent techniques, applications, and difficulties in controlling computers using hand gestures.
This document describes a virtual mouse system that uses computer vision and color tracking to replace a conventional mouse. The system tracks colored objects like a red or blue object held in the user's hand to map hand movements to mouse movements and clicks. It analyzes image frames from a webcam to detect pixel colors and scale the detected positions to match screen coordinates. This allows for freer motion than a physical mouse and reduces costs compared to alternatives like touchscreens. The system is implemented using OpenCV for image processing and runs entirely in software on the user's computer.
This document summarizes a survey on detecting hand gestures to be used as input for computer interactions. The introduction discusses how graphical user interfaces are being upgraded to provide more efficient visual interfaces using touchscreen technologies; however, these technologies are still too expensive for laptops and desktops. The paper then proposes developing a virtual mouse system using a webcam to capture hand movements and perform mouse functions like left and right clicks. The methodology section outlines the key steps of the proposed system, which include skin detection, contour extraction from images, and mapping detected hand gestures to cursor movements and controls. Finally, the conclusion discusses the goal of making this technology cheaper and more accessible, so it can be used as a standard input device without additional hardware requirements.
This document presents a hand gesture controlled mouse system using machine learning. It has four modules: 1) hand tracking, which detects hand landmarks using a webcam; 2) volume control, driven by the distance between thumb and forefinger; 3) virtual painting, by drawing on screen; and 4) mouse control, which uses index finger movements to move the cursor. The system was able to successfully track hand gestures and use them to control mouse functions and volume. Future applications could include interactive teaching in education and use by people with disabilities. Limitations include the need for adequate lighting and the inability to track multiple hands.
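The volume-control module above maps the thumb-forefinger gap to a volume level. A minimal stdlib sketch of that mapping, assuming MediaPipe-style normalized landmark coordinates as input; the distance bounds `d_min` and `d_max` are illustrative tuning constants, not values from the paper:

```python
import math

def pinch_to_volume(thumb_tip, index_tip, d_min=0.03, d_max=0.25):
    """Map the thumb-index fingertip gap to a 0-100 volume level.

    thumb_tip, index_tip: (x, y) in normalized image coordinates.
    d_min, d_max: assumed gap sizes corresponding to volume 0 and 100.
    """
    d = math.hypot(index_tip[0] - thumb_tip[0], index_tip[1] - thumb_tip[1])
    t = (d - d_min) / (d_max - d_min)
    return round(100 * min(max(t, 0.0), 1.0))  # clamp to [0, 100]
```

A real implementation would read the two fingertip landmarks from the hand-tracking module each frame and pass the result to the OS volume API.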
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNING - IRJET Journal
The document discusses a slide presentation controlled by hand gesture recognition using machine learning. It describes how different hand gestures can be used to control slide presentation functions, such as using the index finger to draw, three fingers to undo drawing, the little finger to move to the next slide, and the thumb to move to the previous slide. The system uses a camera and machine learning techniques like neural networks to recognize hand gestures in real-time and map them to slide navigation and other presentation controls.
SixthSense is a name for augmented information supplied by a wearable computer; example devices include EyeTap (Mann), Telepointer (Mann), and "WuW" (Wear yoUr World) by Pranav Mistry.
This document summarizes a research project that aims to develop a virtual mouse system using hand gesture recognition with a webcam. The system analyzes hand contours extracted from the webcam footage to identify fingers and gestures. It then maps different gestures to mouse functions like cursor movement, left/right clicks, and scrolling. The researchers believe this could provide a more natural interface than physical mice and benefit users who have difficulty using mice. It describes the design and implementation of the gesture recognition system, which involves steps like color detection, contour extraction, and identifying fingers from convexity defects.
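Identifying fingers from convexity defects, as the summary mentions, usually reduces to counting the defects between the hand contour and its convex hull that are deeper than a threshold: each deep defect sits in the valley between two extended fingers. A sketch of the counting step, assuming the defect depths have already been extracted (for example with OpenCV's `cv2.convexityDefects`); the depth threshold is an assumed tuning value:

```python
def count_fingers(defect_depths, depth_thresh=20.0):
    """Estimate the number of extended fingers from convexity-defect depths.

    defect_depths: depths (in pixels) of hull/contour defects.
    depth_thresh: assumed minimum depth of the valley between two fingers.
    k deep defects imply k + 1 extended fingers; no deep defects is
    treated as a closed fist.
    """
    deep = sum(1 for d in defect_depths if d > depth_thresh)
    return deep + 1 if deep else 0
```

The recognized finger count can then be dispatched to the mapped mouse function (cursor move, click, scroll).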
Design of Image Projection Using Combined Approach for Tracking - IJMER
Over the years, the techniques and methods used to interact with computers have evolved significantly. From the primitive use of punch cards to the latest touch-screen panels, we can see the vast improvement in interaction with the system. There are many new projection and interaction technologies that can reshape our perception and interaction methodologies, and projection technology is also very useful for creating various geometric displays. In earlier generations, projector technology was used to project images and videos onto a single screen using a large and bulky setup. To overcome these limitations we are designing "Wireless Image Projection Tracking", a system that uses IR (infrared) technology to track the body in the IR range and uses its movements for image orientation and manipulations such as zoom, tilt/rotate, and scale. We present a method of mapping the position and orientation of an IR light source to an image. The system can track single and multiple IR light source positions and can also be used to view the image projection in 3D. Extensions of this technology could further enable tracking capabilities that implement touch-screen features for commercial applications.
Mouse Cursor Control Hands Free Using Deep Learning - IRJET Journal
1) The document presents a study on developing a mouse-free cursor control system using facial movements and deep learning.
2) The system uses facial landmark detection and analysis of facial ratios like eye aspect ratio and mouth aspect ratio to interpret facial expressions and predict cursor movement directions.
3) Support vector machines and histogram of oriented gradients algorithms are used to train the system to classify facial features and link them to cursor control functions like moving, clicking, and scrolling.
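The eye aspect ratio mentioned in point 2 compares vertical to horizontal eye-landmark distances, and drops toward zero when the eye closes. A minimal sketch using the six-landmark eye convention popularized by dlib's 68-point model (the landmark ordering is an assumption; a real system would extract the landmarks with a facial-landmark detector each frame):

```python
import math

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).

    eye: six (x, y) landmarks p1..p6; p1 and p4 are the horizontal
    eye corners, the other two pairs span the eye vertically.
    """
    v1 = math.dist(eye[1], eye[5])
    v2 = math.dist(eye[2], eye[4])
    h = math.dist(eye[0], eye[3])
    return (v1 + v2) / (2.0 * h)
```

A blink or wink is typically detected by the EAR of one or both eyes falling below a threshold for a few consecutive frames, which can then be mapped to a click.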
IRJET - Chatbot with Gesture based User Input - IRJET Journal
The document describes a proposed system for building a chatbot that takes gesture-based user input. The system would use either a deep learning model or convexity defect algorithm to recognize gestures from video input. Recognized gestures would be mapped to text commands and fed into a keyword-based chatbot. The chatbot would execute commands or responses based on the gesture input. The proposed system aims to provide a natural interface for applications helping deaf/mute users or in places like museums. It reviews related work on gesture recognition and discusses the technical components and workflow of the envisioned chatbot system.
Computer vision based human computer interaction using color detection techni... - Chetan Dhule
This document describes a method for controlling a computer using hand and finger gestures detected through a webcam, without the need for specialized hardware or gesture recognition training. The method tracks color markers attached to fingers to detect finger motion in real-time and uses the motion to control the mouse pointer position and clicks. An application was created with a graphical user interface that allows setting the marker color and controls the mouse based on finger movements detected by calculating pixel value changes of the colored markers in video frames. The method provides a low-cost way to interact with a computer using natural hand gestures without lag compared to existing gesture recognition methods.
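Marker positions recovered frame by frame tend to jitter, so virtual-mouse implementations like the one above commonly smooth the tracked position before moving the cursor. A minimal exponential-smoothing sketch (the class name and `alpha` value are illustrative assumptions; a real system would feed the smoothed output to something like PyAutoGUI):

```python
class CursorSmoother:
    """Exponentially smooth tracked (x, y) positions to damp jitter."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha  # 0 < alpha <= 1; higher = more responsive
        self.pos = None

    def update(self, x, y):
        if self.pos is None:
            self.pos = (float(x), float(y))  # first sample: no history yet
        else:
            px, py = self.pos
            self.pos = (px + self.alpha * (x - px),
                        py + self.alpha * (y - py))
        return self.pos
```

Lower `alpha` gives a steadier but laggier pointer; tuning it trades responsiveness against jitter.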
The document describes the components and working of Sixth Sense technology, which is a wearable gestural interface. It consists of a camera, projector, mirror, smartphone, and color markers on the fingertips. The camera captures images and tracks hand gestures via the color markers. The smartphone processes the data and searches the internet. It projects information onto surfaces using the projector and mirror. The technology bridges the physical and digital world by recognizing objects and displaying related information using hand gestures.
Virtual Automation using Mixed Reality and Leap Motion Control - IRJET Journal
This document discusses using leap motion technology and mixed reality to control a robot virtually. It proposes a robot system that can be operated solely through human gestures detected by a leap motion sensor, without any other external devices. The robot's movements and tasks would be displayed to the user through an augmented reality mobile app and virtual reality headset. The system aims to provide an immersive experience for applications like shopping assistance, industrial training simulations, and inquiry-based learning. It describes the robot architecture, use of a controller like Arduino, augmented reality development using Unity 3D, and virtual reality using Google Cardboard. Experimental results showed the gesture controls and mixed reality interfaces worked accurately and provided a realistic experience to the user.
The document discusses the Sixth Sense technology, a wearable gestural interface that augments the physical world with digital information. It can project information onto surfaces using a camera, projector and mirror. The technology recognizes hand gestures to allow interactions like getting maps, photos and product information without devices. It offers advantages like connectivity and accessibility but faces issues like privacy, health effects and lack of durability. The technology may transform fields like education, e-commerce and assistance for disabled people.
Sixth Sense Technology is a mini-projector coupled with a camera and a cellphone, which acts as the computer and connects to the cloud, where all the information on the web is stored. Sixth Sense can also obey hand gestures. The camera instantly recognizes objects around a person, and the micro-projector overlays the information on any surface, including the object itself or the user's hand. The user can also access or manipulate the information using their fingers: to make a call, extend a hand in front of the projector and a number pad appears to dial; to check the time, draw a circle on the wrist and a watch appears; to take a photo, frame the scene by making a square with the fingers, and the system captures the photo, which can later be organized with others using one's own hands in the air. The device has a huge number of applications, and it is portable and easy to carry, as it can be worn around the neck.
The drawing application lets the user draw on any surface by observing the movement of the index finger. Maps can also be displayed anywhere, with zoom-in and zoom-out features. The camera also helps the user take pictures of the scene being viewed and later arrange them on any surface. Some of the more practical uses involve reading a newspaper: viewing videos in place of the photos in the paper, or getting live sports updates while reading.
The device can also show the arrival, departure, or delay time of a flight directly on the ticket. For book lovers it is nothing less than a blessing: open any book and find its Amazon rating; pick any page and the device gives additional information on the text, comments, and many more add-on features.
An analysis of desktop control and information retrieval from the internet us... - eSAT Journals
Abstract
As the use of computers is ever increasing, new and easier methods of interaction with the system are needed. Augmented reality makes any application more interactive and full of life, making it easier to use and more attractive. The conventional mouse and keyboard can be replaced by the human hand to interact with the computer, and adding augmented reality makes this more attractive. The same concept of using the hand as an interaction device with added augmented reality can be applied to retrieving information from the internet. This will make our daily computer-related tasks easier and more fun, increasing productivity.
Keywords: Human Computer Interaction, Desktop Control, Information Access, Augmented Reality, Image Processing, Information Retrieval, Image Formation
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE... - IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE - IRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. Overall, the study evaluated how R factors influence the displacement, drift, and shear response of RC framed structures.
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A... - IRJET Journal
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil Characteristics - IRJET Journal
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos... - IRJET Journal
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-... - IRJET Journal
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape... - IRJET Journal
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg... - IRJET Journal
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection, and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, and plan irregularities.
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,... - IRJET Journal
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with maximum temperatures decreasing at a rate of -0.0341 and minimum temperatures at -0.0152.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend that development planners account for rising summer precipitation and declining temperatures in the district.
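The Mann-Kendall test referenced in point 2 is built on the statistic S, the count of increasing minus decreasing pairs in the time series; its sign gives the trend direction and its magnitude feeds the significance test. A minimal sketch of computing S (the significance step, which normalizes S by its variance to get a Z score, is omitted here):

```python
def mann_kendall_s(series):
    """Mann-Kendall trend statistic: S = sum over i < j of sign(x_j - x_i).

    Positive S suggests an increasing trend, negative S a decreasing one.
    """
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (series[j] > series[i]) - (series[j] < series[i])
    return s
```

For a strictly increasing series every pair contributes +1, so a 4-point rising series gives S = 6; the same series reversed gives S = -6.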
P.E.B. Framed Structure Design and Analysis Using STAAD Pro - IRJET Journal
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, including that they are designed off-site with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre... - IRJET Journal
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare System - IRJET Journal
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridges - IRJET Journal
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React based fullstack edtech web application - IRJET Journal
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ... - IRJET Journal
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE. - IRJET Journal
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi... - IRJET Journal
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic Design - IRJET Journal
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and response spectrum method, and its response in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. The software STAAD Pro was used for the analysis and design.
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr... - IRJET Journal
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions - Victor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
ACEP Magazine edition 4th launched on 05.06.2024 - Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repairs, and strengthening. The document highlights the activities of ACEP and provides a technical educational article for members.
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines - Christina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Understanding Inductive Bias in Machine Learning - SUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.