This paper presents a new approach to mouse movement and the implementation of mouse functions using a real-time camera. Unlike most existing technologies, which mainly depend on changing physical mouse features, such as repositioning the tracking ball or adding more buttons, we do not propose to change the hardware design. Instead, we use a camera, a colored marker, image comparison and motion detection techniques to control mouse movement and implement its functions (right click, left click, double click and scrolling).
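The marker-tracking step this abstract describes, locating the colored marker in each frame and mapping its position to the screen, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the RGB bounds and the frame and screen sizes are assumptions:

```python
import numpy as np

def marker_centroid(frame_rgb, lower, upper):
    """Return the (x, y) centroid of pixels whose RGB values fall inside
    [lower, upper], or None when no pixel matches."""
    mask = np.all((frame_rgb >= lower) & (frame_rgb <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def to_screen(pt, frame_size, screen_size):
    """Scale a camera-frame coordinate to screen coordinates."""
    (fx, fy), (sx, sy) = frame_size, screen_size
    return pt[0] * sx / fx, pt[1] * sy / fy

# synthetic 160x120 frame with a red 10x10 blob standing in for the marker
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:50, 60:70] = (200, 20, 20)
c = marker_centroid(frame, np.array([150, 0, 0]), np.array([255, 60, 60]))
pos = to_screen(c, (160, 120), (1920, 1080))  # cursor position to emit
```

In a real system the same two functions would run on every webcam frame, and the resulting `pos` would be handed to an OS-level cursor API.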
Real time hand gesture recognition system for dynamic applications (ijujournal)
Virtual environments have long been considered a means to more intuitive and efficient human-computer interaction across a diverse range of applications, including the analysis of complex scientific data, medical training, military simulation, phobia therapy and virtual prototyping. With the evolution of ubiquitous computing, current interaction approaches based on keyboard, mouse and pen are no longer sufficient for the still-widening spectrum of human-computer interaction. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use, and these limitations also restrict the usable command set of such applications. Direct use of the hands as an input device is an innovative method for natural human-computer interaction, with a lineage running from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to full-fledged multi-participant Virtual Environment (VE) systems. It points toward a future era of human-computer interaction built on 3D applications, where the user can move and rotate objects simply by moving and rotating a hand, all without the help of any input device. This research effort centers on implementing an application that employs computer vision algorithms and gesture recognition techniques, resulting in a low-cost interface device for interacting with objects in a virtual environment using hand gestures. The prototype architecture comprises a central computational module that applies the CamShift technique for tracking the hand and its gestures. A Haar-like feature classifier is responsible for locating the hand position and classifying the gesture. Gestures are recognized by mapping the number of defects formed in the hand to the assigned gestures. The virtual objects are rendered using the OpenGL library.
This hand gesture recognition technique aims to substitute for the mouse when interacting with virtual objects. It can be used to control applications such as virtual games and image browsing in a virtual environment using hand gestures.
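The CamShift tracking this abstract mentions is built on mean shift: a search window repeatedly moves to the weighted centroid of a probability (back-projection) map until it stops. A simplified fixed-window mean-shift loop, sketched in NumPy with an assumed blob and starting window (real CamShift also adapts the window's size and orientation):

```python
import numpy as np

def mean_shift(prob, window, n_iter=10):
    """Move a fixed-size window (x, y, w, h) to the weighted centroid of
    the probability map under it, until it stops moving. CamShift adds
    window-size adaptation on top of exactly this loop."""
    x, y, w, h = window
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * roi).sum() / total
        cy = (ys * roi).sum() / total
        nx = int(round(x + cx - w / 2))
        ny = int(round(y + cy - h / 2))
        if (nx, ny) == (x, y):
            break
        x, y = nx, ny
    return x, y, w, h

# back-projection map with a 5x5 "hand" blob; start the window off-target
prob = np.zeros((80, 80))
prob[38:43, 28:33] = 1.0
tracked = mean_shift(prob, (20, 32, 12, 12))
```

In OpenCV the equivalent production call is `cv2.CamShift` on a histogram back-projection of the hand's color model.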
An analysis of desktop control and information retrieval from the internet us... (eSAT Publishing House)
This document proposes a system that uses augmented reality and image processing to let users control their desktop and retrieve internet information with hand gestures, removing the need for a mouse and keyboard. The system has two modules: one for desktop control through virtual menus, and one for accessing internet content such as news and weather through hand movements. A webcam captures hand movements, the images are processed to detect the hand position, and commands are sent to the computer based on the detected position. The aim is to make human-computer interaction more intuitive and realistic through augmented reality.
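The "send commands based on the detected position" step can be pictured as a lookup from virtual-menu regions to commands. The region layout and command names below are hypothetical, purely for illustration:

```python
def command_for(x, y, regions):
    """Return the command whose rectangle contains the detected hand
    position (x, y), or None when the hand is outside every menu item."""
    for (x0, y0, x1, y1), cmd in regions:
        if x0 <= x < x1 and y0 <= y < y1:
            return cmd
    return None

# hypothetical virtual-menu layout: pixel rectangles mapped to commands
MENU = [
    ((0, 0, 160, 120), "open_desktop_menu"),
    ((160, 0, 320, 120), "show_news"),
    ((0, 120, 160, 240), "show_weather"),
]
```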
Mouse Simulation Using Two Coloured Tapes (ijistjournal)
In this paper, we present a novel approach to Human-Computer Interaction (HCI) in which cursor movement is controlled using a real-time camera. Current methods involve changing mouse parts, such as adding more buttons or repositioning the tracking ball. Instead, our method uses a camera and computer vision techniques, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking and scrolling), and we show that it can do everything current mouse devices can.
The software will be developed in Java. Recognition and pose estimation in this system are user-independent and robust, as colour tapes on the fingers are used to perform actions. The software can serve as an intuitive input interface for applications that require multi-dimensional control, e.g. computer games.
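With two tapes, a natural control scheme (an assumption here, not necessarily the paper's exact design) is to drive the cursor from the midpoint of the two tape centroids and to read a small separation, a pinch, as a click:

```python
import math

CLICK_DISTANCE = 40  # pixels; assumed threshold for the pinch gesture

def interpret(tape1, tape2):
    """Drive the cursor from the midpoint of the two tape centroids and
    report a click when the tapes are pinched close together."""
    cursor = ((tape1[0] + tape2[0]) / 2, (tape1[1] + tape2[1]) / 2)
    clicked = math.dist(tape1, tape2) < CLICK_DISTANCE
    return cursor, clicked

cursor, clicked = interpret((100, 100), (160, 100))   # tapes far apart
_, pinched = interpret((100, 100), (120, 100))        # tapes pinched
```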
The document presents information on Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry. It consists of a camera, projector, and mirror contained in a pendant-like device connected to a mobile phone. The camera tracks colored markers on the user's fingers to interpret gestures, while the projector displays information on surfaces. This allows users to interact with projected interfaces and access digital information from the physical world using natural hand motions. Some potential applications include making calls, getting maps/time, taking photos, and accessing online information about objects.
IRJET- Mouse on Finger Tips using ML and AI (IRJET Journal)
This document describes a system that uses computer vision and machine learning to allow users to control a computer mouse using only their fingertips. The system tracks colored fingertips using a webcam and processes the video frames in real-time to detect and track the fingertips. It then maps the fingertip movements to mouse movements and gestures to control clicking, scrolling, and other mouse functions without any physical contact with the computer. The system was created using Python and aims to provide a more natural and cost-effective way for human-computer interaction through a virtual mouse controlled by hand gestures.
The document discusses human activity recognition from video data using computer vision techniques. It describes recognizing activities at different levels from object locations to full activities. Basic activities like walking and clapping are the focus. Key steps involve tracking segmented objects across frames and comparing motion patterns to templates to identify activities through model fitting. The DEV8000 development kit and Linux are used to process video and recognize activities in real-time. Applications discussed include surveillance, sports analysis, and unmanned vehicles.
Virtual Class room using six sense Technology (IOSR Journals)
The document describes a virtual classroom system that uses six sense technology. Six sense technology allows for a virtual classroom that does not require computers, laptops, or an internet connection. It uses a wearable device with a camera and projector that recognizes gestures and projects digital information onto surfaces. The system would allow students and professors to access information and interact with a virtual classroom anywhere through hand gestures.
The document describes the Sixth Sense technology created by Pranav Mistry. The Sixth Sense device uses a camera, projector, and mirror connected to a smartphone. It allows users to interact with the digital world through natural hand gestures by recognizing gestures, images, and finger movements marked with colored tape. The camera sends data to the smartphone, which processes the information and projects digital output onto surrounding surfaces through the projector. This creates an interface between the physical and digital worlds, allowing users to access information and applications with hand motions over real-world objects and spaces.
The document describes Sixth Sense, a wearable gestural interface created by Pranav Mistry that augments the physical world with digital information. It consists of a camera and projector mounted in a pendant-like device. The camera tracks hand gestures tagged with colored markers on the fingers. The projector displays information on surfaces based on gestures. This allows interacting with the digital world by interacting with real-world surfaces and objects, integrating digital and physical worlds.
Accessing Operating System using Finger Gesture (IRJET Journal)
This document describes a system for accessing an operating system using finger gestures captured by a webcam. The system aims to reduce costs compared to existing gesture recognition systems that use expensive sensors like Kinect. It uses image processing algorithms to detect hand gestures from webcam input, recognize gestures like number of fingers, and execute corresponding operating system commands. The system architecture first segments hand regions from background, then classifies skin pixels and detects colored tapes on fingers to identify gestures. It can open programs and navigate computer contents contactlessly using natural hand movements. The proposed system aims to provide an affordable alternative for human-computer interaction without external input devices like mice or keyboards.
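The skin-pixel classification step this summary mentions could use one of the classic per-pixel RGB rules from the skin-detection literature; the paper's actual thresholds are not given, so the values below are assumptions:

```python
def is_skin(r, g, b):
    """A widely used RGB skin rule (Kovac et al. style); the thresholds
    below are assumptions, not the paper's reported values."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)
```

Applied to every pixel this yields a binary hand mask, which is then refined before detecting the colored tapes on the fingers.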
Computer vision based human computer interaction using color detection techni... (Chetan Dhule)
This document describes a method for controlling a computer using hand and finger gestures detected through a webcam, without the need for specialized hardware or gesture recognition training. The method tracks color markers attached to fingers to detect finger motion in real-time and uses the motion to control the mouse pointer position and clicks. An application was created with a graphical user interface that allows setting the marker color and controls the mouse based on finger movements detected by calculating pixel value changes of the colored markers in video frames. The method provides a low-cost way to interact with a computer using natural hand gestures without lag compared to existing gesture recognition methods.
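The "pixel value changes of the colored markers" idea amounts to frame differencing restricted to the marker region. A small NumPy sketch with an assumed threshold, operating on grayscale frames:

```python
import numpy as np

def marker_moved(prev, cur, marker_mask, thresh=25):
    """Report motion when the mean absolute change of the marker pixels
    between two consecutive grayscale frames exceeds a threshold."""
    diff = np.abs(cur.astype(int) - prev.astype(int))
    return diff[marker_mask].mean() > thresh

# synthetic pair of frames: the marker's pixels change between them
prev = np.zeros((10, 10), dtype=np.uint8)
cur = prev.copy()
cur[2:5, 2:5] = 100
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 2:5] = True
```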
The document is a seminar report on the space mouse presented by Mithil Jaju. It discusses the history and development of the space mouse, which originated from the DLR control ball developed in the 1970s for controlling robot grippers. The space mouse uses optical sensing and mechatronics engineering to allow 6 degrees of freedom motion control for applications like 3D modeling and animation. It has benefits over a regular 2D mouse for interactive control in 3D graphics environments.
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera (IJERD Editor)
Gesture gaming is a method by which users with a laptop, PC or Xbox play games using natural or bodily gestures. This paper presents a way of playing free Flash games on the internet using an ordinary webcam with the help of open-source technologies. In human activity recognition, emphasis is placed on pose estimation and on the consistency of the player's pose, estimated with an ordinary web camera at resolutions ranging from VGA to 20 MP. Our work involved showing the user a 10-second introductory video on how to play a particular game using gestures and on the various kinds of gestures that can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short video finishes. The system then opens the game on popular Flash game sites such as Miniclip, Games Arcade and GameStop, loads it by clicking at various places, and brings it to a state where the user only has to perform gestures to start playing. At any point the user can call off the game by hitting the Esc key, and the program will release all of the controls and return to the desktop. The results obtained using an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free Flash games on the net. Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
This document summarizes an article on virtual reality technology. It discusses three key aspects of virtual reality that make it similar to the real world: total realism, physical interaction, and no boundaries. Total realism is achieved through technologies like CAVE that use projectors and tracking to create 3D immersive environments. Physical interaction is enabled by programmable matter technologies like catoms, which are tiny modular robots that can change shape and properties. Having no boundaries is achieved through virtual spheres and omni-directional treadmills that allow infinite virtual worlds without physical constraints. The document outlines the technologies and research that are integrating the real world with virtual worlds.
This document presents a sixth sense technology system for virtual mouse and drawing using computer vision. It captures images using a webcam and processes them to detect a red laser pointer for virtual drawing on a whiteboard or detect finger motion for mouse control. The system was developed in Microsoft Visual Basic and uses functions like drawing lines and capturing coordinates to enable these interactions. It has applications in gaming and military robots by providing vision-based control. Future work could expand the capabilities to include face recognition, reading documents, and zooming features.
Markerless motion capture for 3D human model animation using depth camera (TELKOMNIKA JOURNAL)
3D animation is typically created with a keyframe-based system in 3D animation software such as Blender and Maya. Because of the long production time and the high expertise required, motion capture devices are used as an alternative, and the Microsoft Kinect v2 sensor is one of them. This research analyses the capabilities of the Kinect sensor in producing 3D human model animations using both motion capture and a keyframe-based animation system, with reference to a live motion performance. The quality, time and cost of both animation results were compared. The experimental results show that the motion capture system with the Kinect sensor consumed less time (only 2.6%) and cost (30%) in the long run (10 minutes of animation) compared to the keyframe-based system, but produced lower-quality animation, owing to the loss of body-detection accuracy under obstruction. Moreover, the sensor's constant assumption that the performer's body faces forward made it unreliable for a wide variety of movements. Furthermore, the standard test defined in this research covers most body-part movements and can be used to evaluate other motion capture systems.
This document discusses gesture phones and gesture recognition technology. It begins by explaining how gestures are recognized through optical tracking, inertial tracking, and calibration. Examples are given of how gestures could control a smartphone, such as answering calls or controlling media playback. Challenges of gesture recognition are also mentioned. The document then discusses applications of gesture technology on Android and Windows phones through various apps that enable gesture control. Benefits of gesture technology include more intuitive interaction and control when touch is not possible.
This document discusses a thesis on using augmented reality for full body immersion. It describes using a motion capture suit and video see-through technologies with the Oculus Rift head mounted display. The thesis aims to achieve realistic interactions between the user and virtual characters through body movements and gestures controlled in the augmented reality scene. It plans to evaluate the system using surveys after subjects interact with a virtual agent scenario while wearing the motion capture suit and Oculus Rift.
The document describes Sixth Sense technology, a wearable gestural interface invented by Pranav Mistry. It consists of a camera, projector, mirror, and markers on the user's fingers, connected to a smartphone. The camera tracks hand gestures to send commands to the projector, which displays information onto surrounding surfaces. Examples of uses include making calls, checking maps and time, getting product information, and taking pictures through gestures. The system aims to bridge the gap between physical and digital worlds through seamless access to contextual information.
Sixth Sense technology was invented by Pranav Mistry. It is a wearable gesture-based device that integrates two worlds: the physical world and the digital world.
Sixth Sense couples a mini-projector with a camera and a cellphone, which acts as the computer and connects to the cloud, where all the information is stored on the web. Sixth Sense also obeys hand gestures. The camera instantly recognizes objects around a person, and the micro-projector overlays information on any surface, including the object itself or the user's hand; the information can then be accessed or manipulated with the fingers. To make a call, the user extends a hand in front of the projector and a number pad appears to click on. To know the time, the user draws a circle on the wrist and a watch appears. To take a photo, the user simply frames the scene with a square made of the fingers, and the system captures it; the photos can later be organized with the others using the hands, in the air. The device has a huge number of applications, is portable, and is easy to carry, as it can be worn around the neck.
The drawing application lets the user draw on any surface by tracking the movement of the index finger. Maps can be displayed anywhere, with zoom-in and zoom-out features. The camera also lets the user take pictures of the scene being viewed and later arrange them on any surface. Some of the more practical uses include reading a newspaper while viewing videos in place of the photos in the paper, or seeing live sports updates while reading the newspaper.
IRJET- 3D Drawing with Augmented Reality (IRJET Journal)
This document summarizes research on developing an augmented reality application called 3D Drawing with Augmented Reality. The application allows multiple users to draw in 3D space and see each other's drawings in real-time. It treats the real world as a canvas where users can draw lines and doodles in thin air. The application was created using Unity 3D and ARCore to enable 3D drawing on mobile devices. Related work on SLAM, human-computer interaction, game lobbies, and bare-handed 3D drawing techniques were also reviewed to inform the application's design. The methodology section describes how the collaborative AR application was developed to allow real-time 3D drawing between users.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Augmented reality (AR) is a technology which provides real-time integration of digital content with information available in the real world. Augmented reality enables direct access to implicit information attached to context in real time, and it enhances our perception of the real world by enriching what we see, feel, and hear in the real environment. This paper gives a comparative study of various augmented reality software development kits (SDKs) available for creating augmented reality apps. The paper describes how augmented reality differs from virtual reality, the working of an augmented reality system, and the different types of tracking used in AR.
Augmented Reality (AR) is a technology that overlays virtual objects onto real-world objects. It has three main features: the combination of the real world and the virtual world, real-time interaction, and 3D registration. The algorithms that place graphical images and other sensor-based inputs on real-world objects use the camera of your device. Conventional maps do not present the shortest route or graphical information in 3D. To avoid such problems we have developed a 3D virtual environment that provides graphics and more information about a particular place. This project was built using the Unity application; the engine can be used to create three-dimensional and two-dimensional content, which lets all these graphics and routes be viewed in 3D. R. Mohana Priya | Subash. R | Yogesh. R | Vignesh. M | Gopi. V "Augmented Reality Map" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-3, April 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30220.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/30220/augmented-reality-map/r-mohana-priya
The document describes the Sixth Sense device, a wearable gestural interface developed by Pranav Mistry. It consists of a camera, projector, and mirror coupled in a pendant-like mobile device. The camera tracks hand gestures which are processed by a smartphone to project information onto surfaces. Some applications include making calls, getting maps/product info, and interacting with projected interfaces through natural hand motions. The Sixth Sense aims to augment physical reality with digital information accessible through gestures.
This document describes a virtual mouse system that uses computer vision and OpenCV to detect hand gestures from video input and use those gestures to control cursor movements and mouse clicks. Specifically, it tracks colored markers on fingertips to determine pointer position and recognizes gestures like clicking to emulate mouse functions without physical hardware. The system is implemented using Python libraries like OpenCV, MediaPipe, and PyAutoGUI to process video frames in real-time, identify hand and finger positions, and map those positions to mouse events. This allows users to control the computer interface entirely through natural hand motions detected by a webcam.
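Before a fingertip position is handed to PyAutoGUI, systems like this usually smooth it to suppress webcam jitter. A sketch of exponential smoothing; the smoothing factor is an assumption, and in the real pipeline the (x, y) inputs would come from MediaPipe hand landmarks and the result would feed `pyautogui.moveTo`:

```python
class CursorSmoother:
    """Exponentially smooth fingertip positions before they are sent to
    the mouse controller; alpha is an assumed tuning value."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.pos = None

    def update(self, x, y):
        if self.pos is None:
            self.pos = (x, y)
        else:
            px, py = self.pos
            self.pos = (px + self.alpha * (x - px),
                        py + self.alpha * (y - py))
        return self.pos

smoother = CursorSmoother(alpha=0.5)
first = smoother.update(0, 0)
second = smoother.update(10, 10)   # jumps are halved, not followed at once
```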
This document describes an AI-based virtual mouse system operated through hand gestures detected by a webcam, without the user needing to touch a physical mouse or other device. The system uses computer vision and MediaPipe to detect hand landmarks and track finger positions in real-time video input. By analyzing which fingers are raised, it determines mouse movement or click functions. The goal is a touchless input that could be useful during the pandemic by reducing virus transmission through shared surfaces. The virtual mouse is implemented with OpenCV and other Python libraries to process video, smooth the output, and perform mouse functions based on hand and finger tracking.
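The "which fingers are raised" test can be done directly on MediaPipe's 21 hand landmarks: a finger counts as up when its tip lies above its middle (PIP) joint in image coordinates, where y grows downward. The gesture-to-function mapping below is an assumption for illustration, not the paper's exact scheme:

```python
TIPS = [8, 12, 16, 20]   # MediaPipe fingertip landmark ids (index..pinky)
PIPS = [6, 10, 14, 18]   # corresponding middle-joint (PIP) ids

def fingers_up(landmarks):
    """landmarks: 21 (x, y) pairs in image coordinates (y grows downward).
    A finger counts as raised when its tip lies above its middle joint."""
    return [landmarks[t][1] < landmarks[p][1] for t, p in zip(TIPS, PIPS)]

def action(up):
    """Map the raised-finger pattern to a mouse function (assumed mapping)."""
    if up == [True, False, False, False]:
        return "move"
    if up == [True, True, False, False]:
        return "click"
    return "idle"

# fake hand: all landmarks at y=0.5, index fingertip raised to y=0.2
hand = [(0.0, 0.5) for _ in range(21)]
hand[8] = (0.0, 0.2)
```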
Virtual Mouse Control Using Hand Gestures – IRJET Journal
This document describes a system for controlling a computer mouse using hand gestures detected by a webcam. The system uses computer vision and image processing techniques to track hand movements and identify gestures. It analyzes video frames from the webcam to extract the hand contour and detect gestures. Specific gestures are mapped to mouse functions like movement, left/right clicks, and scrolling. The system aims to provide an intuitive, hands-free way to control the mouse for physically disabled people or those uncomfortable with touchpads. It could help the millions affected by carpal tunnel syndrome annually in India. The document outlines the system architecture, methodology including hand tracking and gesture recognition, and concludes the technology provides better human-computer interaction without requiring a physical mouse.
Mouse Simulation Using Two Coloured Tapes – ijistjournal
In this paper, we present a novel approach for Human Computer Interaction (HCI) in which we control cursor movement using a real-time camera. Current methods involve changing mouse parts, such as adding more buttons or changing the position of the tracking ball. Instead, our method uses a camera and computer vision technology, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling), and we show that it can perform everything that current mouse devices can.
The software will be developed in Java. Recognition and pose estimation in this system are user-independent and robust, as we will be using colour tapes on our fingers to perform actions. The software can be used as an intuitive input interface for applications that require multi-dimensional control, e.g., computer games.
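Two steps recur in all of these camera-as-mouse systems: scaling the tracked position from camera-frame coordinates to screen coordinates, and smoothing the result so the cursor does not jitter. A minimal sketch of both (the resolutions and the smoothing factor are illustrative assumptions):

```python
def map_to_screen(cam_pt, cam_size, screen_size):
    """Scale a camera-frame coordinate to screen coordinates."""
    return (cam_pt[0] * screen_size[0] / cam_size[0],
            cam_pt[1] * screen_size[1] / cam_size[1])

class Smoother:
    """Exponential moving average to damp pointer jitter."""
    def __init__(self, alpha=0.3):
        self.alpha, self.state = alpha, None

    def update(self, pt):
        if self.state is None:
            self.state = pt
        else:
            self.state = tuple(self.alpha * n + (1 - self.alpha) * s
                               for n, s in zip(pt, self.state))
        return self.state

# Centre of a 640x480 frame maps to the centre of a 1920x1080 screen.
print(map_to_screen((320, 240), (640, 480), (1920, 1080)))  # (960.0, 540.0)
```

A lower `alpha` gives a steadier but laggier cursor; tuning it is the usual trade-off these papers mention between responsiveness and stability.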
The document describes the Sixth Sense technology created by Pranav Mistry. The Sixth Sense device uses a camera, projector, and mirror connected to a smartphone. It allows users to interact with the digital world through natural hand gestures by recognizing gestures, images, and finger movements marked with colored tape. The camera sends data to the smartphone, which then processes the information and projects digital outputs onto surrounding surfaces through the projector. This creates an interface between the physical and digital worlds, allowing users access information and applications with hand motions over real-world objects and spaces.
The document describes Sixth Sense, a wearable gestural interface created by Pranav Mistry that augments the physical world with digital information. It consists of a camera and projector mounted in a pendant-like device. The camera tracks hand gestures tagged with colored markers on the fingers. The projector displays information on surfaces based on gestures. This allows interacting with the digital world by interacting with real-world surfaces and objects, integrating digital and physical worlds.
Accessing Operating System using Finger Gesture – IRJET Journal
This document describes a system for accessing an operating system using finger gestures captured by a webcam. The system aims to reduce costs compared to existing gesture recognition systems that use expensive sensors like Kinect. It uses image processing algorithms to detect hand gestures from webcam input, recognize gestures like number of fingers, and execute corresponding operating system commands. The system architecture first segments hand regions from background, then classifies skin pixels and detects colored tapes on fingers to identify gestures. It can open programs and navigate computer contents contactlessly using natural hand movements. The proposed system aims to provide an affordable alternative for human-computer interaction without external input devices like mice or keyboards.
Computer vision based human computer interaction using color detection techni... – Chetan Dhule
This document describes a method for controlling a computer using hand and finger gestures detected through a webcam, without the need for specialized hardware or gesture recognition training. The method tracks color markers attached to fingers to detect finger motion in real-time and uses the motion to control the mouse pointer position and clicks. An application was created with a graphical user interface that allows setting the marker color and controls the mouse based on finger movements detected by calculating pixel value changes of the colored markers in video frames. The method provides a low-cost way to interact with a computer using natural hand gestures without lag compared to existing gesture recognition methods.
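The "pixel value changes" the method computes between video frames amount to simple frame differencing: subtract consecutive grayscale frames and threshold the absolute difference. A minimal NumPy sketch (the threshold and synthetic frames are assumptions for illustration):

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Boolean mask of pixels whose intensity changed by more than
    `thresh` between two grayscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Two synthetic 4x4 frames in which a single pixel "moves".
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200

print(motion_mask(prev, curr).sum())  # 1
```

Casting to a signed type before subtracting avoids the unsigned-wraparound bug that silently corrupts naive uint8 differences.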
The document is a seminar report on the space mouse presented by Mithil Jaju. It discusses the history and development of the space mouse, which originated from the DLR control ball developed in the 1970s for controlling robot grippers. The space mouse uses optical sensing and mechatronics engineering to allow 6 degrees of freedom motion control for applications like 3D modeling and animation. It has benefits over a regular 2D mouse for interactive control in 3D graphics environments.
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera – IJERD Editor
Gesture gaming is a method by which users with a laptop, PC, or Xbox play games using natural bodily gestures. This paper presents a way of playing free Flash games on the internet using an ordinary webcam with the help of open-source technologies. In human activity recognition, emphasis is placed on pose estimation and the consistency of the player's pose, estimated with an ordinary web camera at resolutions ranging from VGA to 20 MP. Our work involved showing the user a 10-second tutorial on how to play a particular game using gestures and on the various kinds of gestures that can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component inside a red box within about 10 seconds after the tutorial finishes. The system then opens the relevant game on popular Flash game sites such as Miniclip, Games Arcade, or GameStop, loads it by clicking in the appropriate places, and brings it to the state where the user only needs to perform gestures to start playing. At any point, the user can quit by pressing the Esc key, whereupon the program releases all controls and returns to the desktop. It was noted that the results obtained using an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free Flash games on the net. Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
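The red-box calibration step described above amounts to sampling the mean colour inside a region of interest and widening it into a detection range. A NumPy sketch of that idea (the box coordinates, tolerance, and synthetic frame are assumptions, not values from the paper):

```python
import numpy as np

def calibrate(frame, box, tol=40):
    """Sample the mean colour inside `box` (x0, y0, x1, y1) and return
    a (lo, hi) RGB range widened by the tolerance `tol`."""
    x0, y0, x1, y1 = box
    roi = frame[y0:y1, x0:x1].reshape(-1, 3).astype(np.float64)
    mean = roi.mean(axis=0)
    return np.clip(mean - tol, 0, 255), np.clip(mean + tol, 0, 255)

# Synthetic frame: the user holds a red object inside the box.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[:, :] = (200, 30, 30)

lo, hi = calibrate(frame, (2, 2, 6, 6))
print(lo, hi)  # [160.  0.  0.] [240. 70. 70.]
```

The returned range is what the later per-frame colour thresholding uses, which is why the user is asked to hold the object still inside the box during calibration.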
This document summarizes an article on virtual reality technology. It discusses three key aspects of virtual reality that make it similar to the real world: total realism, physical interaction, and no boundaries. Total realism is achieved through technologies like CAVE that use projectors and tracking to create 3D immersive environments. Physical interaction is enabled by programmable matter technologies like catoms, which are tiny modular robots that can change shape and properties. Having no boundaries is achieved through virtual spheres and omni-directional treadmills that allow infinite virtual worlds without physical constraints. The document outlines the technologies and research that are integrating the real world with virtual worlds.
This document presents a sixth sense technology system for virtual mouse and drawing using computer vision. It captures images using a webcam and processes them to detect a red laser pointer for virtual drawing on a whiteboard or detect finger motion for mouse control. The system was developed in Microsoft Visual Basic and uses functions like drawing lines and capturing coordinates to enable these interactions. It has applications in gaming and military robots by providing vision-based control. Future work could expand the capabilities to include face recognition, reading documents, and zooming features.
Markerless motion capture for 3D human model animation using depth camera – TELKOMNIKA JOURNAL
3D animation is created using a keyframe-based system in 3D animation software such as Blender and Maya. Due to the long time required and the high expertise needed for 3D animation, motion capture devices were used as an alternative, the Microsoft Kinect v2 sensor being one of them. This research analyses the capabilities of the Kinect sensor in producing 3D human model animations using motion capture and a keyframe-based animation system, in reference to a live motion performance. The quality, time interval, and cost of both animation results were compared. The experimental results show that the motion capture system with the Kinect sensor consumed less time (only 2.6%) and cost (30%) in the long run (10 minutes of animation) compared to the keyframe-based system, but it produced lower-quality animation. This was due to the lack of body-detection accuracy when there is obstruction. Moreover, the sensor's constant assumption that the performer's body faces forward made it unreliable for a wide variety of movements. Furthermore, the standard test defined in this research covers most body parts' movements and can be used to evaluate other motion capture systems.
This document discusses gesture phones and gesture recognition technology. It begins by explaining how gestures are recognized through optical tracking, inertial tracking, and calibration. Examples are given of how gestures could control a smartphone, such as answering calls or controlling media playback. Challenges of gesture recognition are also mentioned. The document then discusses applications of gesture technology on Android and Windows phones through various apps that enable gesture control. Benefits of gesture technology include more intuitive interaction and control when touch is not possible.
This document discusses a thesis on using augmented reality for full body immersion. It describes using a motion capture suit and video see-through technologies with the Oculus Rift head mounted display. The thesis aims to achieve realistic interactions between the user and virtual characters through body movements and gestures controlled in the augmented reality scene. It plans to evaluate the system using surveys after subjects interact with a virtual agent scenario while wearing the motion capture suit and Oculus Rift.
The document describes Sixth Sense technology, a wearable gestural interface invented by Pranav Mistry. It consists of a camera, projector, mirror, and markers on the user's fingers, connected to a smartphone. The camera tracks hand gestures to send commands to the projector, which displays information onto surrounding surfaces. Examples of uses include making calls, checking maps and time, getting product information, and taking pictures through gestures. The system aims to bridge the gap between physical and digital worlds through seamless access to contextual information.
Sixth Sense technology was invented by Pranav Mistry. It is a wearable, gesture-based device that integrates two worlds: the physical world and the digital world.
Sixth Sense couples a mini-projector with a camera and a cellphone, which acts as the computer and connects to the cloud, where all the information is stored on the web. Sixth Sense also obeys hand gestures. The camera instantly recognizes objects around a person, with the micro-projector overlaying information on any surface, including the object itself or the hand, and the information can be accessed or manipulated using the fingers. To make a call, the user extends a hand in front of the projector and numbers appear to click. To know the time, the user draws a circle on the wrist and a watch appears. To take a photo, the user simply makes a square with the fingers, framing the desired scene, and the system captures the photo, which can later be organized with the others using hand movements in the air. The device has a huge number of applications; it is portable and easy to carry, since it can be worn around the neck.
The drawing application lets the user draw on any surface by observing the movement of the index finger. Maps can also be projected anywhere, with zoom-in and zoom-out features. The camera also lets the user take pictures of the scene being viewed and later arrange them on any surface. Some of the more practical uses are reading a newspaper with videos playing in place of the photos, or seeing live sports updates while reading the newspaper.
IRJET- 3D Drawing with Augmented Reality – IRJET Journal
This document summarizes research on developing an augmented reality application called 3D Drawing with Augmented Reality. The application allows multiple users to draw in 3D space and see each other's drawings in real-time. It treats the real world as a canvas where users can draw lines and doodles in thin air. The application was created using Unity 3D and ARCore to enable 3D drawing on mobile devices. Related work on SLAM, human-computer interaction, game lobbies, and bare-handed 3D drawing techniques were also reviewed to inform the application's design. The methodology section describes how the collaborative AR application was developed to allow real-time 3D drawing between users.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Augmented reality (AR) is a technology which provides real-time integration of digital content with the information available in the real world. Augmented reality enables direct access to implicit information attached to context in real time, and it enhances our perception of the real world by enriching what we see, feel, and hear in the real environment. This paper gives a comparative study of the various augmented reality software development kits (SDKs) available to create augmented reality apps. The paper describes how augmented reality differs from virtual reality, the working of an augmented reality system, and the different types of tracking used in AR.
IRJET- Finger Gesture Recognition Using Linear Camera – IRJET Journal
This document describes a system for finger gesture recognition using a linear camera. The system aims to allow users to control basic computer functions through finger gestures as an alternative to using a mouse or keyboard. It works by using image processing techniques on video captured by the linear camera to detect the user's finger movements and map them to cursor movements or actions. The system is broken down into four main stages - skin detection to identify finger regions, finger contour extraction, finger tracking, and gesture recognition to identify gestures and map them to computer functions like play, pause, volume control etc. This vision-based approach allows for contactless control and could help users in situations where mouse or keyboard is unavailable.
This document describes a virtual mouse system that uses computer vision and color tracking to replace a conventional mouse. The system tracks colored objects like a red or blue object held in the user's hand to map hand movements to mouse movements and clicks. It analyzes image frames from a webcam to detect pixel colors and scale the detected positions to match screen coordinates. This allows for freer motion than a physical mouse and reduces costs compared to alternatives like touchscreens. The system is implemented using OpenCV for image processing and runs entirely in software on the user's computer.
Virtual Mouse Control Using Hand Gesture Recognition – IRJET Journal
This document describes a system for controlling a computer mouse using hand gestures without any physical mouse device. The system uses a webcam to capture video of the user's hand. Computer vision and machine learning algorithms are used to recognize hand gestures in the video frames. Specific gestures like raising different fingers are mapped to mouse actions like left click, right click, and cursor movement. The system is implemented using Python and OpenCV libraries to process the video and detect hand and finger positions. It allows for fully controlling the computer mouse interface through natural hand gestures performed in front of the webcam.
The document summarizes a student project to develop a virtual mouse interface using computer vision and finger tracking. The project is divided into 5 modules: 1) basic video operations in OpenCV, 2) image processing techniques, 3) object tracking, 4) finger-tip detection, and 5) using detected finger motions to control mouse functions. Key functions demonstrated include moving the cursor, left and right clicking, dragging, brightness control, and scrolling. Evaluation of the system found finger tracking accuracy between 60-85% for different gestures. The project aims to provide an alternative input method that reduces hardware needs and workspace.
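Once fingertips are detected, click events in such systems are commonly triggered by a pinch: the click fires when the thumb and index fingertips come close enough together. A minimal sketch (the threshold and the normalised-coordinate convention are assumptions for illustration):

```python
import math

def is_click(thumb_tip, index_tip, threshold=0.05):
    """Treat a thumb-index pinch closer than `threshold` (in
    normalised image units, 0..1) as a click gesture."""
    return math.dist(thumb_tip, index_tip) < threshold

print(is_click((0.50, 0.50), (0.52, 0.51)))  # True  (fingers pinched)
print(is_click((0.50, 0.50), (0.70, 0.50)))  # False (fingers apart)
```

Real systems usually also debounce this signal over a few frames so a single pinch does not register as multiple clicks.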
Controlling Computer using Hand Gestures – IRJET Journal
This document describes a research project on controlling a computer using hand gestures. The researchers created a real-time gesture recognition system using convolutional neural networks (CNNs). They developed a dataset of 3000 training images of 10 different hand gestures for tasks like opening apps. A CNN model was trained to detect hands in images and recognize gestures. The model achieved 80.4% validation accuracy and was able to successfully perform operations like opening WhatsApp, PowerPoint and other apps based on detected gestures in real-time. The system provides a cost-effective and contactless way of interacting with computers using hand gestures only.
This document describes a technical seminar presented on a real-time AI virtual mouse system using computer vision. The system allows users to control mouse functions like left clicks, right clicks, and scrolling through hand gestures detected by a webcam, without needing a physical mouse. It works by using the MediaPipe and OpenCV libraries to detect hand landmarks and track hand movements. Key gestures like finger position and distance are used to map to different mouse functions like clicking or scrolling. The system aims to provide a more convenient and hands-free way to control the computer.
This document summarizes a survey on detecting hand gestures to be used as input for computer interactions. The introduction discusses how graphical user interfaces are being upgraded to provide more efficient visual interfaces using touchscreen technologies. However, these technologies are still too expensive for laptops and desktops. The paper then proposes developing a virtual mouse system using a webcam to capture hand movements and perform mouse functions like left and right clicks. The methodology section outlines the key steps of the proposed system which includes skin detection, contour extraction from images, and mapping detected hand gestures to cursor movements and controls. Finally, the conclusion discusses the goal of making this technology cheaper and more accessible to use as a standard input device without additional hardware requirements.
The document describes Sixth Sense, a wearable gestural interface developed by Pranav Mistry. It consists of a camera, projector, and mirror coupled in a pendant-like device. The camera recognizes hand gestures tagged with colored markers to project interfaces onto surrounding surfaces. Applications include making calls, using maps, drawing, getting flight updates, and more. The device costs around $350 to build, with the goal of being open source and adapting computers to human needs by making the world the interface.
Virtual Mouse Using Hand Gesture Recognition – IRJET Journal
The document describes a virtual mouse system that uses hand gesture recognition instead of a physical mouse device. The system uses a webcam to capture hand movements and detects hand landmarks using MediaPipe. Various hand gestures correspond to mouse functions such as move, click, and scroll. The system is portable, low-cost, and provides a user-friendly way to control the computer without additional hardware. It aims to overcome limitations of prior systems that required colored fingertips or multiple cameras. The virtual mouse was implemented using libraries such as OpenCV and PyAutoGUI and tested successfully.
Smart Presentation Control by Hand Gestures Using Computer Vision and Google’... – IRJET Journal
This document describes a smart presentation control system using hand gesture recognition with computer vision and Google's MediaPipe framework. The system uses a webcam to capture videos and photos of hand gestures as input. MediaPipe is used to detect hand landmarks and gestures in real-time. Various hand gestures like changing slides, drawing on slides, and erasing can be used to control the presentation without needing a keyboard or mouse. The system aims to provide a natural and intuitive human-computer interaction experience for presentation control through hand gesture recognition.
This document summarizes a research project that aims to develop a virtual mouse system using hand gesture recognition with a webcam. The system analyzes hand contours extracted from the webcam footage to identify fingers and gestures. It then maps different gestures to mouse functions like cursor movement, left/right clicks, and scrolling. The researchers believe this could provide a more natural interface than physical mice and benefit users who have difficulty using mice. It describes the design and implementation of the gesture recognition system, which involves steps like color detection, contour extraction, and identifying fingers from convexity defects.
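The convexity-defect step mentioned above rests on a classic heuristic: each sufficiently deep defect in the hand contour's convex hull is a valley between two extended fingers, so the finger count is the number of deep defects plus one. A sketch of just that counting rule, applied to defect depths such as OpenCV's `convexityDefects` would yield (the depth values and threshold here are made up for illustration):

```python
def count_fingers(defect_depths, min_depth=20.0):
    """Count extended fingers from convexity-defect depths.
    Each defect deeper than `min_depth` is a valley between two
    fingers, so n deep defects imply n + 1 fingers; zero deep
    defects means a fist (or one finger, disambiguated elsewhere)."""
    deep = sum(1 for d in defect_depths if d > min_depth)
    return deep + 1 if deep else 0

print(count_fingers([45.0, 38.2, 27.9, 5.1]))  # 4 (three deep valleys)
print(count_fingers([3.0, 4.2]))               # 0 (no deep valleys)
```

In a full pipeline the depths come from the contour analysis of the segmented hand; here only the counting logic is shown.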
The document discusses the Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry at the MIT Media Lab. It consists of a camera, projector, and mirror worn on a pendant around the neck. The camera tracks hand gestures which are used to interact with information projected onto physical surfaces using the projector. Sixth Sense allows users to access digital information in a natural way through gestures, overlaying virtual content onto the real world. It has applications in areas like augmented reality, mobile devices, and helping disabled individuals. The technology provides an easy to use, portable interface but security and accuracy need further improvement before widespread adoption.
This document describes a virtual mouse system that uses hand gestures as detected by a webcam to control the computer cursor and perform mouse functions like clicking and dragging. The proposed system aims to overcome limitations of physical mice like requiring batteries, wireless receivers or specific surfaces. It analyzes video frames from the webcam using OpenCV and MediaPipe to detect hand positions and gestures. Mouse movements and actions are mapped to specific gestures. The system was tested in different lighting conditions and distances from the webcam and was found to work effectively in most scenarios. Further improvements to accuracy and adapting it to mobile devices are discussed as future work.
Design of Image Projection Using Combined Approach for Tracking – IJMER
Over the years, the techniques and methods used to interact with computers have evolved significantly. From the primitive use of punch cards to the latest touch-screen panels, we can see the vast improvement in interaction with the system. There are many new projection and interaction technologies that can reshape our perception and interaction methodologies, and projection technology is very useful for creating various geometric displays. In earlier generations, projector technology was used for projecting images and videos on a single screen, using a large and bulky setup. To overcome these earlier limitations we are designing "Wireless Image Projection Tracking", a system that uses IR (infrared) technology to track the body in the IR range and uses its movements for image orientation and manipulations such as zoom, tilt/rotate, and scale. We present a method of mapping IR light-source position and orientation to an image. Using this system we can also track single and multiple IR light-source positions, and it can be used effectively to view the image projection in 3D. Extensions of this technology could further provide tracking capabilities to implement touch-screen features for commercial applications.
This document presents a hand gesture controlled mouse system using machine learning. It has 4 modules: 1) Hand tracking to detect hand landmarks using a webcam, 2) Volume control using the distance between thumb and forefinger, 3) Virtual painting by drawing on screen, 4) Mouse control using index finger movements to move the cursor. The system was able to successfully track hand gestures and use them to control mouse functions and volume. Future applications could include use in education for interactive teaching and by people with disabilities. Some limitations are need for adequate lighting and inability to track multiple hands.
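The volume-control module described above typically maps the thumb-forefinger distance linearly onto a volume level, clamped to a calibrated range. A minimal sketch (the distance bounds are assumptions, not values from the project):

```python
def distance_to_volume(d, d_min=0.03, d_max=0.30):
    """Linearly map a thumb-forefinger distance (normalised image
    units) to a 0-100 volume level, clamping outside the range."""
    t = (d - d_min) / (d_max - d_min)
    return round(100 * min(1.0, max(0.0, t)))

print(distance_to_volume(0.165))  # 50 (midpoint of the range)
print(distance_to_volume(0.01))   # 0  (fingers fully pinched)
print(distance_to_volume(0.50))   # 100 (fingers fully spread)
```

The resulting level would then be handed to a platform audio API; only the gesture-to-value mapping is sketched here.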
The Follow-me bot is primarily a humanoid robot with multiple degrees of freedom (like the ones seen in the Hugh Jackman film Real Steel). At first sight it appears much like any other humanoid robot, but the true difference emerges in its working system. The Follow-me bot implements a remote human-mimicking robot interface: it takes the user's body movement as input and moves exactly like him or her. It is like giving sight to the robot so that it mimics the user's motion. Unlike other humanoid robots, the controller in this bot's architecture is the user himself. Operators communicate with the machine simply through body movements, and thereby control the actions of the remote robot in real time, with coordination and harmonization. The design is a synthesis of new human motion capture techniques, intelligent agent control, network transmission, and multi-sensor fusion technology. The name "Follow-me bot" is used because, just like a shadow, it moves in exactly the same way we do. This paper briefly describes all the parts of the humanoid robot built on the Kinect architecture.
Similar to Gesture Based Interface Using Motion and Image Comparison (20)
Gesture Based Interface Using Motion and Image Comparison
International Journal of Advanced Information Technology (IJAIT) Vol. 2, No. 3, June 2012
DOI: 10.5121/ijait.2012.2303

Gesture Based Interface Using Motion and Image Comparison

Shany Jophin, Sheethal M.S, Priya Philip, T M Bhruguram
Dept. of Computer Science, Adi Shankara Institute of Engineering and Technology, Kalady
shanyjophin.s@gmail.com
ABSTRACT
This paper gives a new approach for moving the mouse pointer and implementing its functions using a real-time camera. Here we propose to change the hardware design: most of the existing technologies mainly depend on changing mouse part features, such as repositioning the tracking ball or adding more buttons. We use a camera, a colored substance, image comparison technology and motion detection technology to control mouse movement and implement its functions (right click, left click, scrolling and double click).
KEYWORDS
HCI, Sixth Sense, VLCJ
1. INTRODUCTION
Human-computer interaction (HCI) is one of the important areas of research where people try to
improve computer technology. Nowadays we find smaller and smaller devices being used to
improve technology. Vision-based gesture and object recognition is another such area of research.
Simple interfaces like the embedded keyboard, folder keyboard and mini keyboard already exist in
today's market. However, these interfaces need some amount of space to use and cannot be used
while moving. Touch screens are also used globally; they are a good control interface and appear
in many applications. However, touch screens cannot be applied to desktop systems because of
cost and other hardware limitations. By applying vision technology and a colored substance,
comparing images and controlling the mouse by natural hand gestures, we can reduce the
required work space. In this paper, we propose a novel approach that uses a video device to
control the mouse system.
2. RELATED WORK
2.1. Mouse free
Vision-Based Human-Computer Interaction through Real-Time Hand Tracking and Gesture
Recognition: vision-based interaction is an appealing option for replacing primitive human-
computer interaction (HCI) using a mouse or touchpad. The authors propose a system that uses a
webcam to track a user's hand and recognize gestures to initiate specific interactions, and
implement hand tracking and simple gesture recognition in real time [1].
Many researchers in the human-computer interaction and robotics fields have tried to control
mouse movement using video devices. However, they all used different methods to generate a
clicking event. One approach, by Erdem et al., used fingertip tracking to control the motion of the
mouse; a mouse-button click was implemented by defining a screen region such that a click
occurred when the user's hand passed over it [2, 3]. Another approach was developed by
Chu-Feng Lien [4]. He used only the fingertips to control the mouse cursor and click. His
clicking method was based on image density, and required the user to hold the mouse cursor on
the desired spot for a short period of time. Paul et al. used still another method to click: the
motion of the thumb (from a 'thumbs-up' position to a fist) marked a clicking event, while
movement of the hand while making a special hand sign moved the mouse pointer.
2.2. A Method for Controlling Mouse Movement using a Real-Time Camera
This is a new approach for controlling mouse movement using a real-time camera. Most existing
approaches involve changing mouse parts, such as adding more buttons or changing the position
of the tracking ball. Instead, the author proposes to change the hardware design: a camera and
computer vision technology, such as image segmentation and gesture recognition, are used to
control mouse tasks (left and right clicking, double-clicking, and scrolling), and the method is
shown to perform everything current mouse devices can. The paper shows how to build this
mouse control system [5].
2.3. Sixth Sense
'SixthSense' is a wearable gestural interface that augments the physical world around us with
digital information and lets us use natural hand gestures to interact with that information. The
SixthSense prototype comprises a pocket projector, a mirror and a camera. The hardware
components are coupled in a pendant-like mobile wearable device. Both the projector and the
camera are connected to the mobile computing device in the user's pocket. The projector projects
visual information, enabling surfaces, walls and physical objects around us to be used as
interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects
using computer-vision-based techniques [6].
2.4. Vlcj
The vlcj project is an open-source project that provides Java bindings for the excellent vlc media
player from VideoLAN. The bindings can be used to build media player client and server
software in Java: everything from simply playing local media files to a full-blown video-on-
demand streaming server is possible. vlcj is being used in diverse applications, helping to provide
video capabilities to software in use on oceanographic research vessels and in bespoke IPTV and
home cinema solutions. vlcj is also being used to create software for an open-source video
camera at Elphel and video mapping for the OpenStreetMap project [7].
2.5. Mouseless
Mouseless is an invisible computer mouse that provides the familiar interaction of a physical
mouse without actually needing real mouse hardware. Mouseless consists of an infrared (IR)
laser beam (with a line cap) and an infrared camera, both embedded in the computer. The laser
module is modified with a line cap and placed so that it creates a plane of IR laser light just above
the surface the computer sits on. The user cups their hand, as if a physical mouse were present
underneath, and the laser beam lights up the part of the hand in contact with the surface. The IR
camera detects these bright IR blobs using computer vision. Changes in the position and
arrangement of the blobs are interpreted as mouse cursor movement and mouse clicks. As the
user moves their hand, the on-screen cursor moves accordingly. When the user taps their index
finger, the size of the blob changes and the camera recognizes the intended mouse click [8].
2.6. Real-time Finger Tracking for Interaction
In this work, the authors describe an approach for human finger motion and gesture detection
using two cameras. The target pointed at on a flat monitor or screen is identified using image
processing and line intersection, by processing top and side images of the hand. The system is
able to track finger movement without building a 3D model of the hand. The coordinates and
movement of the finger in a live video feed can then be taken as the coordinates and movement of
the mouse pointer for human-computer interaction purposes [9].
3. PROPOSED WORK
Figure 1. The proposed working
3.1. Product and system features
3.1.1. Computer
(Laptop) Processor: Pentium 4, Processor speed: 1 GHz, RAM capacity: 512 MB, Hard disk:
40 GB, Monitor: 15" SVGA.
3.1.2. Webcam
It is used for the image processing and also for capturing the image.
Video data format: 12/24-bit RGB, Image resolution: max 2560*2084, Software-enhanced menu
display/sec: 30 in CIF mode, Menu signal bit: 42 dB, Lens: 6.00 mm, Vision: +/-28, Focus range:
3 cm to limitless.
3.1.3. Finger Tip
(Red and blue colored substance) It is used as an alternative to the mouse and controls the
pointer functions.
3.1.4. Software Requirements
Operating system: Windows XP, Windows Vista, Windows 7.
Code behind: Java (Red Hat or Eclipse).
3.1.5. Internal Interface Requirements
Swing, vlcj.
3.2. Tools used
Laptop (including software such as Red Hat, Eclipse and vlcj), webcam, and a colored device.
The performance of the software is improved by examining every fifth pixel, skipping the four
consecutive pixels in between.
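Before the pointer can follow the marker, a midpoint found in the webcam frame has to be mapped to screen coordinates. The paper does not spell this step out; the linear-scaling sketch below is an assumption, and `java.awt.Robot.mouseMove` is our guess for the final cursor move (it is the standard Java API that works alongside Swing). Class and method names are illustrative.

```java
// Hypothetical helper: scale a camera-frame midpoint to screen coordinates.
// The resulting {x, y} would then be passed to java.awt.Robot.mouseMove.
public class CoordMapper {
    // Linearly scale camera-frame coordinates (camW x camH) to screen
    // coordinates (scrW x scrH) using integer arithmetic.
    static int[] toScreen(int camX, int camY, int camW, int camH, int scrW, int scrH) {
        return new int[] { camX * scrW / camW, camY * scrH / camH };
    }
}
```

For example, a marker at the center of a 640x480 frame maps to the center of a 1280x960 screen.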
Figure 2. The flow chart
3.3. Functional Requirements
The programmer has to follow a particular sequence of actions to implement the functions:
1. Implement the functions of vlcj to capture the images given by the user.
2. Store the captured images in a temporary storage buffer.
3. Examine pixels of the captured image (skipping four consecutive pixels each time) to find the red color.
4. Find the midpoint by examining the x and y coordinates of the currently detected area.
5. Move the pointer to the position where the midpoint is found.
6. Examine another pixel containing the red color and find its midpoint.
7. To move the pointer, compare the current pixel with the previous pixel; if the two pixels are the same, the pointer starts moving.
8. Perform mouse functions depending on the length detected between the newly computed pixels.

The user also has certain functions, performed in order:
1. Place a red colored finger cap or any other red colored substance in front of the webcam.
2. Move the substance in front of the webcam to see the pointer move, select icons on the screen and perform mouse functions.
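The pixel-scanning and midpoint steps above can be sketched in Java, the paper's implementation language. This is a minimal sketch, not the authors' code: the class and method names, the RGB "red" threshold, and the packed-pixel frame format are illustrative assumptions; only the every-fifth-pixel sampling comes from the text.

```java
// Hypothetical sketch of steps 3-4: scan every 5th pixel of a frame
// (int[][] of packed 0xRRGGBB values) and average the red pixels' coordinates.
public class RedTracker {
    static final int STEP = 5; // examine every 5th pixel, skipping four in between

    // Returns {midX, midY} of the detected red region, or null if none found.
    public static int[] findRedMidpoint(int[][] frame) {
        long sumX = 0, sumY = 0;
        int count = 0;
        for (int y = 0; y < frame.length; y += STEP) {
            for (int x = 0; x < frame[y].length; x += STEP) {
                int p = frame[y][x];
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                if (r > 150 && g < 100 && b < 100) { // assumed "red" threshold
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return null;
        return new int[] { (int) (sumX / count), (int) (sumY / count) };
    }
}
```

The midpoint returned here would drive the pointer position in step 5.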
Figure 3. Hand gesture
Figure 4. Hand gesture
Figure 5. Hand gesture
Figure 6. Hand gesture
4. RESULTS
From the implementation and execution of our program, we found that the mouse pointer can be
made to move and its functions can be implemented without the use of a touchpad or mouse. The
pointer is moved with the help of finger gestures, by holding a specific colored substance (any
colored cap or other small colored object) in the hand, which makes the system easy to use. The
performance of the software has also been improved.
Figure 7. Screenshot for mouse pointer functions in windows xp
The main problem encountered was that the finger shook a lot: each time the finger position
changed, the illumination changed from frame to frame. To fix this problem, we added code so
that the cursor does not move when the difference between the previous and current red-colored
pixel positions is within 4 pixels. The technologies used in this paper assume that all hand
movement is properly coordinated.
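The 4-pixel jitter guard described above can be sketched as follows; only the 4-pixel threshold comes from the text, while the class and method names are illustrative assumptions.

```java
// Hypothetical sketch of the jitter fix: the cursor is updated only when the
// new red-marker midpoint differs from the previous one by more than 4 pixels.
public class JitterGuard {
    static final int THRESHOLD = 4; // pixels, per the fix described in the results

    // prev and cur are {x, y} midpoints of the detected red region.
    static boolean cursorShouldMove(int[] prev, int[] cur) {
        return Math.abs(cur[0] - prev[0]) > THRESHOLD
            || Math.abs(cur[1] - prev[1]) > THRESHOLD;
    }
}
```

Small frame-to-frame shifts (within 4 pixels on both axes) are treated as noise and ignored.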
Figure 8. Screenshot for mouse pointer functions in linux
5. FUTURE WORK
There are still many improvements that can be made to our system, such as improving the
performance of the current system and adding features like enlarging and shrinking windows,
closing windows, etc. by using the palm and multiple fingers. The current system is sensitive to
reflection and scale changes and requires proper hand gestures, good illumination and a powerful
camera to perform the mouse functions. Precision can always be increased at the cost of recall by
adding more stages, but each successive stage takes twice as much time to find harder negative
samples. Many applications could benefit from this technology. We present an image viewing
application as an example of where it could lead to a more natural user interface; the same could
be said for navigating something like Google Maps or browsing folders on a screen. However,
the applications reach far beyond that. They are particularly compelling in situations where touch
screens are not applicable or less than ideal. For example, with projection systems there is no
screen to touch, so vision-based technology would provide an ideal replacement for touch screen
technology. Similarly, in public terminals, constant use results in the spread of dirt and germs;
vision-based systems would remove the need to touch such setups and would result in improved
interaction.
5.1. Enlarge, zoom in and shrink, zoom out
More experiments were done to implement zooming and shrinking. Different hand gestures were
analyzed to find the proper gesture for the zoom and shrink functions. "Moving the hands apart"
was the first hand gesture analyzed, but it was later discarded because it requires two hands. The
most convenient hand gesture selected was the expansion and contraction movement of the thumb
and index finger. Appropriate scaling factors for enlarging and shrinking are chosen. The distance
across the thumb and index finger is taken for further processing. The thumb and index finger
movement is performed repeatedly to scale the distance and implement these functions.
Figure 9. Diagrammatic representation of zoom and shrink
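A minimal sketch of the scaling computation implied above, assuming the thumb and index-finger markers are tracked as {x, y} points; the names and the ratio-based scale factor are our assumptions, not the paper's.

```java
// Hypothetical sketch: derive a zoom factor from the distance between the
// tracked thumb and index-finger markers.
public class ZoomGesture {
    // Euclidean distance between two tracked marker points {x, y}.
    static double distance(int[] a, int[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    // Scale factor relative to the distance when the gesture began:
    // > 1 means the fingers moved apart (enlarge), < 1 together (shrink).
    static double scaleFactor(double initialDist, double currentDist) {
        if (initialDist <= 0) return 1.0; // guard against degenerate start
        return currentDist / initialDist;
    }
}
```

Doubling the thumb-to-index distance would thus enlarge the view by a factor of two under this scheme.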
6. CONCLUSION
We developed a system to control the mouse cursor and implement its functions using a real-time
camera. We implemented mouse movement, selection of icons, and functions such as right click,
left click, double click and scrolling. The system is based on image comparison and motion
detection technology for mouse pointer movement and icon selection. However, it is difficult to
get stable results because of the variety of lighting conditions and the detection of the same color
elsewhere in the background; most of the algorithms used have illumination issues. From the
results, we can expect that if the algorithms can work in all environments, then our system will
work more efficiently. The system could be useful in presentations and for reducing work space.
In the future, we plan to add more features such as enlarging and shrinking windows, closing
windows, etc. by using the palm and multiple fingers. The performance of the software can only
be improved by a small percentage due to the lack of a powerful camera and a separate processor
for this application.
REFERENCES
[1] Hojoon Park, "A Method for Controlling Mouse Movement using a Real-Time Camera",
www.cs.brown.edu/research/pubs/theses/masters/2010/park.pdf, 2010.
[2] A. Erdem, E. Yardimci, Y. Atalay, V. Cetin, "Computer vision based mouse", Proceedings of the
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2002.
[3] Vision-based Man-Machine Interaction, http://www.ceng.metu.edu.tr/~vbi/.
[4] Chu-Feng Lien, "Portable Vision-Based HCI - A Real-time Hand Mouse System on Handheld
Devices".
[5] Ben Askar, Chris Jordan, Hyokwon Lee, "Mouse free", http://www.seas.upenn.edu/cse400/CSE400-2009-
2010/final-report/Jordan-lee.pdf, 2009-2010.
[6] Pranav Mistry, SixthSense, http://www.youtube.com/watch?v=ZfV4R4x2SK0, 2010. [Online;
accessed 28-November-2010].
[7] Mark Lee, vlcj: Java bindings for the vlc media player, http://www.capricasoftware.co.uk/vlcj/index.php,
2011. [Online; accessed 1-May-2011].
[8] Pranav Mistry, Mouseless, http://www.pranavmistry.com/projects/mouseless/, 2010. [Online;
accessed 28-November-2010].
[9] N. Shaker, M. Abou Zliekha (Damascus University, Damascus), "Real-time Finger Tracking for
Interaction", http://ieeexplore.ieee.org/search/freesrchabstract.jsp, 2007.
Authors
Shany Jophin received the B.Tech degree from the Department of Computer Science
and Engineering, Government Engineering College, Kannur University, Wayanad, in
2011. She is currently pursuing a Master's degree in the Department of Computer Science
and Engineering, Adi Shankara Institute of Engineering and Technology, MG
University, Kerala, India. Her research interests include cyber forensics, networking
and information security.

Sheethal M.S received the B.Tech degree from the Department of Computer Science and
Engineering, KMEA Engineering College, Edathala, in 2007. She is currently pursuing a
Master's degree in the Department of Computer Science and Engineering, Adi Shankara
Institute of Engineering and Technology, MG University, Kerala. Her research interests
include fuzzy clustering and image processing.

Priya Philip received the BE degree from the Department of Computer Science and
Engineering, Sun College of Engineering and Technology, Nagercoil, Anna University, in
2011. She is currently pursuing a Master's degree in the Department of Computer Science
and Engineering, Adi Shankara Institute of Engineering and Technology, MG University,
Kerala, India.

T M Bhruguram is currently working as an assistant professor at Adi Shankara Institute of
Engineering and Technology, Department of Information Technology, Kerala. He has 4
years of teaching experience in various engineering colleges and industry experience at the
Hiranandani Group, Mumbai, India.