A Novel Approach for Graphical User Interface Development and Real-Time Object and Face Tracking Using Image Processing and Computer Vision Techniques Implemented in MATLAB
This document presents a novel approach for developing graphical user interfaces and real-time object and face tracking using image processing and computer vision techniques implemented in MATLAB. Key aspects include developing a GUI for image manipulation, object detection and tracking in real-time using color segmentation, expanding this to control a robot, and implementing face detection and tracking using the Viola-Jones algorithm and CAMShift for multiple faces. The approach aims to provide a unified GUI for advanced image processing functions.
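The color-segmentation step described above can be illustrated with a short sketch. The original work is in MATLAB; the version below uses plain Python with illustrative (not the paper's) thresholds, marking red-dominant pixels and returning the centroid of the matching region:

```python
def track_colored_object(frame, r_min=150, g_max=100, b_max=100):
    """Return the centroid (row, col) of pixels matching a red-dominant
    color rule, or None if no pixel matches.

    `frame` is a 2-D list of (r, g, b) tuples; the thresholds are
    illustrative, not taken from the paper.
    """
    row_sum, col_sum, count = 0, 0, 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if r >= r_min and g <= g_max and b <= b_max:
                row_sum += y
                col_sum += x
                count += 1
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)

# A 3x3 black frame with a single red pixel at row 1, column 2:
frame = [[(0, 0, 0)] * 3 for _ in range(3)]
frame[1][2] = (200, 20, 20)
print(track_colored_object(frame))  # (1.0, 2.0)
```

Feeding the centroid of successive frames into a tracker is what turns this per-frame segmentation into real-time tracking.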
This document provides an overview of computer graphics and its applications. It discusses interactive graphics, where the user can control the image, versus passive graphics which produce images automatically. Interactive graphics allow for advantages like motion dynamics and update dynamics. The document then covers how interactive graphics displays work, using a frame buffer, monitor, and display controller. It concludes with a discussion of various applications of computer graphics, such as cartography, user interfaces, scientific visualization, CAD/CAM, simulation, art, process control and more.
Personalised Product Design Using Virtual Interactive Techniques (ijcga)
This paper describes the use of Virtual Interactive Techniques for personalized product design. Products are usually designed and built around general usage patterns, and prototyping is used to mimic the static or working behaviour of the product before manufacturing; the user has no control over its design. Personalized design postpones design decisions to a later stage, allowing the user to select individual components. This is implemented by displaying the individual components over a physical model built from cardboard or thermocol in the actual size and shape of the original product. Components of the equipment or product, such as the screen and buttons, are projected onto the physical model using a projector connected to the computer. Users can interact with the prototype as if it were the original working equipment, selecting, shaping, and positioning the individual components displayed on the interaction panel with simple hand gestures. Computer vision and sound processing techniques are used to detect and recognize the user gestures captured by a web camera and microphone.
IRJET- Mouse on Finger Tips using ML and AI (IRJET Journal)
This document describes a system that uses computer vision and machine learning to allow users to control a computer mouse using only their fingertips. The system tracks colored fingertips using a webcam and processes the video frames in real-time to detect and track the fingertips. It then maps the fingertip movements to mouse movements and gestures to control clicking, scrolling, and other mouse functions without any physical contact with the computer. The system was created using Python and aims to provide a more natural and cost-effective way for human-computer interaction through a virtual mouse controlled by hand gestures.
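The fingertip-to-cursor mapping such a system performs can be sketched as a linear scaling from camera to screen coordinates, with exponential smoothing to steady the cursor. The class below is an illustrative sketch, not the paper's code; the class name and the `alpha` smoothing parameter are assumptions:

```python
class CursorMapper:
    """Map fingertip positions in camera coordinates to screen
    coordinates, smoothing with an exponential moving average."""

    def __init__(self, cam_size, screen_size, alpha=0.5):
        self.cam_w, self.cam_h = cam_size
        self.scr_w, self.scr_h = screen_size
        self.alpha = alpha  # smoothing factor; 1.0 disables smoothing
        self.pos = None

    def update(self, fx, fy):
        # Linear scale from camera space to screen space.
        target = (fx * self.scr_w / self.cam_w,
                  fy * self.scr_h / self.cam_h)
        if self.pos is None:
            self.pos = target
        else:
            a = self.alpha
            self.pos = (a * target[0] + (1 - a) * self.pos[0],
                        a * target[1] + (1 - a) * self.pos[1])
        return self.pos

m = CursorMapper((640, 480), (1920, 1080), alpha=1.0)
print(m.update(320, 240))  # (960.0, 540.0): camera centre -> screen centre
```

A lower `alpha` trades responsiveness for stability, which matters when the fingertip detection jitters frame to frame.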
Gesture Based Interface Using Motion and Image Comparison (ijait)
This paper presents a new approach for moving the mouse and implementing its functions using a real-time camera. Most existing technologies change the hardware design of the mouse itself, for example repositioning the tracking ball or adding more buttons. Instead, a camera, a colored substance, image comparison technology, and motion detection technology are used to control mouse movement and implement its functions (right click, left click, scrolling, and double click).
The document discusses different types and techniques of animation including student-generated animation using key frames and tweens, computer-generated animation using morphing and controllers, using cameras and hierarchies in animation, and rendering and output of animations.
This document provides an introduction to computer graphics. It begins by defining computer graphics as using computers to generate and manipulate visual images and discusses how computer graphics has evolved from traditional technical drawings. The document then outlines several key applications of computer graphics, including presentation graphics, painting/drawing, photo editing, scientific visualization, image processing, education/training/entertainment, simulations, and animation/games. It also describes common graphics hardware components like input/output and display devices. The overall purpose is to introduce the field of computer graphics and discuss its uses and technologies.
Computer graphics is responsible for displaying art and image data effectively and attractively to the user, and for processing image data received from the physical world. It has made interaction with computers and interpretation of data easier, and it has had a profound impact on many types of media, revolutionizing animation, movies, and the video game industry.
Computer-generated imagery (CGI) is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, commercials, videos, and simulators. The visual scenes may be dynamic or static, and may be two-dimensional (2D), though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television.
Video games most often use real-time computer graphics (rarely referred to as CGI), but may also include pre-rendered "cut scenes" and intro movies that would be typical CGI applications.
Computer Graphics Project Development Help with OpenGL computer graphics proj... (Team Codingparks)
Computer Graphics Project Development Help (OpenGL Computer Graphics Ideas and Help, C Computer Graphics Programs Ideas and Help, C++ Computer Graphics Programs Ideas and Help, Adobe Photoshop Computer Graphics Ideas and Help, Adobe Illustrator Computer Graphics Ideas and Help)
For more details visit http://www.codingparks.com/computer-graphics-project-development-and-assignment-help/
Get OpenGL Programming Topics and Ideas to work with OpenGL Project and Assignment Help.
This document is a lecture outline for an introduction to computer graphics course. It outlines the course information and administrative details, provides an overview of topics to be covered including graphics systems, techniques, operations and a mathematical review. It also defines computer graphics, discusses image processing and analysis, and explains why computer graphics is an important field due to advances in computing power, visualization, and interaction capabilities.
1. The document introduces computer graphics and its various applications and elements. It discusses how computer graphics involves the display, manipulation, and storage of images and data for visualization using computers.
2. Key elements of computer graphics include raster/bitmap graphics which use a grid of pixels and vector graphics which use mathematical formulas to define shapes. Common display devices include CRT, LCD, and plasma displays.
3. Applications of computer graphics include CAD, presentation graphics, computer art, entertainment, education and training, visualization, image processing, and graphical user interfaces.
This document provides information about a Computer Graphics course taught by Prof. Sonal Badhe. It includes the course syllabus, list of experiments for the lab component, and information about term work requirements. The syllabus covers topics like 2D and 3D graphics, geometric transformations, illumination models, and visible surface detection. The lab experiments include implementing basic graphics primitives, curves, transformations, and clipping algorithms. Students are required to complete 12 experiments along with assignments and a mini-project for their term work evaluation.
This document provides an overview of computer graphics and its applications. It defines computer graphics as the creation and manipulation of geometric objects and images through computer programs. Some key applications discussed include:
- Computer-aided design (CAD) for modeling objects like buildings, vehicles, and products.
- Presentation graphics for illustrating data through charts, graphs, and diagrams.
- Entertainment applications like movies, games, and animation through modeling and rendering techniques.
- Education and training through interactive models of systems.
- Visualization of scientific and business data.
Additional applications covered are computer art, image processing, and graphical user interfaces.
Panorama Technique for 3D Animation Movie, Design and Evaluating (IOSRjournaljce)
This paper presents an applied approach for panorama 3D movies enhanced with visual sound effects; the case study considered is IIPS@UOIT. Several selected software packages were used to produce the 3D movie. 3D animation is a modern technology in filmmaking and is considered the core of multimedia: the vast majority of movies we see today, such as Hollywood productions, use 3D technology. The technique is applied across many domains, such as medical experiments, engineering, astronomy, planets and stars, history, and geography, to prove scientific theories by building models, scenes, or characters that simulate reality and move the viewer to the heart of the event. A three-dimensional film was made for IIPS@UOITC to give it a future vision, and it was published on global sites such as YouTube, Facebook, and Google Earth, using specialized 3D software and cinematic tricks with a focus on movement, characters, lighting, cameras, and the final render.
This document describes a Pong-inspired interactive art installation called Pong Sketch Two Project. Users can interact with a digitally projected bouncing ball using their full body motions, which are tracked via video camera. Their silhouettes and the ball are projected on a screen. Users can pass the ball back and forth or trap it in different ways. The installation aims to create an engaging interactive experience through whole-body gestures without restrictions of wires or hardware.
IRJET- 3D Drawing with Augmented Reality (IRJET Journal)
This document summarizes research on developing an augmented reality application called 3D Drawing with Augmented Reality. The application allows multiple users to draw in 3D space and see each other's drawings in real-time. It treats the real world as a canvas where users can draw lines and doodles in thin air. The application was created using Unity 3D and ARCore to enable 3D drawing on mobile devices. Related work on SLAM, human-computer interaction, game lobbies, and bare-handed 3D drawing techniques were also reviewed to inform the application's design. The methodology section describes how the collaborative AR application was developed to allow real-time 3D drawing between users.
01 introduction image processing analysis (Rumah Belajar)
This document provides an introduction to image processing and analysis. It discusses image acquisition, pre-processing techniques like image transforms and enhancement, and applications of image processing. Image transforms like the discrete Fourier transform and discrete cosine transform are used to represent images in different domains. Image enhancement techniques accentuate features to make images more useful for display and analysis. Common techniques include adjusting histograms, using median filters, and performing operations in transform domains.
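The transform-domain representation mentioned above can be made concrete with a naive O(n²) DCT-II (unnormalized, purely as a sketch). It illustrates the energy-compaction property that makes transform-domain processing useful: a constant signal concentrates all of its energy in the first coefficient:

```python
import math

def dct_ii(x):
    """Naive, unnormalized O(n^2) DCT-II of a real sequence."""
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (k + 0.5) * m / n)
                for k in range(n))
            for m in range(n)]

# A constant signal: all energy lands in coefficient 0.
y = dct_ii([1.0, 1.0, 1.0, 1.0])
print([round(v, 6) for v in y])  # [4.0, 0.0, 0.0, 0.0]
```

Real implementations use a fast O(n log n) algorithm and a normalization convention, but the compaction behaviour is the same.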
This document provides an overview of computer graphics programming and OpenGL. It discusses common uses of computer graphics such as games, simulations, and medical imaging. It then explains core graphics concepts like the graphics pipeline, 2D and 3D coordinates, projections, and RGB color space. The document also covers OpenGL specifications, functions, data types, and related APIs.
Computer animation involves creating animation sequences through object definition, path specification, key frames, and in-betweening. There are two main methods for displaying animation sequences: raster animation and color-table animation. Raster animation involves copying frames from memory to the display very quickly, while color-table animation uses a color lookup table to convert logical color numbers in each pixel to physical colors. The document discusses techniques for designing animation sequences like storyboarding, defining objects and paths, specifying key frames, and generating in-between frames. It also covers topics like motion specification using direct motion, goal-directed systems, kinematics, dynamics, and inverse kinematics. Morphing and tweening are introduced as techniques for warping one image into another.
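The color-table mechanism described above can be sketched in a few lines: the frame buffer stores logical color numbers, and animation is achieved by swapping lookup tables rather than rewriting pixels. The tables below are illustrative:

```python
def apply_color_table(frame, table):
    """Map logical color numbers in a frame buffer to physical RGB
    triples via a lookup table."""
    return [[table[p] for p in row] for row in frame]

# Frame buffer of logical colors: 0 = background, 1 = star.
frame = [[0, 1],
         [1, 0]]
night = {0: (0, 0, 64),      1: (255, 255, 0)}
day   = {0: (135, 206, 235), 1: (255, 255, 0)}

# "Animating" night-to-day means swapping tables, not redrawing pixels:
print(apply_color_table(frame, night)[0][0])  # (0, 0, 64)
print(apply_color_table(frame, day)[0][0])    # (135, 206, 235)
```

Because only the small table changes per step, this was historically far cheaper than raster animation's full-frame copies.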
From Image Processing To Computer Vision (Joud Khattab)
This document provides an overview of digital image processing and computer vision. It defines digital images and describes different image types including binary, grayscale, and color images. The document outlines common digital image processing steps such as acquisition, enhancement, restoration, compression, segmentation, representation and description. It also discusses applications of computer vision such as scene completion, object detection and recognition tasks. In summary, the document serves as an introduction to digital image processing and computer vision concepts.
Mouse Simulation Using Two Coloured Tapes (ijistjournal)
In this paper, we present a novel approach for Human Computer Interaction (HCI) where, we control cursor movement using a real-time camera. Current methods involve changing mouse parts such as adding more buttons or changing the position of the tracking ball. Instead, our method is to use a camera and computer vision technology, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling) and we show how it can perform everything as current mouse devices can.
The software will be developed in Java. Recognition and pose estimation in this system are user independent and robust, as colour tapes on the fingers are used to perform actions. The software can be used as an intuitive input interface for applications that require multi-dimensional control, e.g. computer games.
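One way such a system can derive a click from the two tracked tapes is to watch the distance between their centroids: when they pinch together, fire a click. The rule below is an illustrative sketch (in Python rather than the paper's Java), and the pixel threshold is an assumption:

```python
import math

def detect_click(p1, p2, threshold=30.0):
    """Return True when the two tracked tape centroids come closer
    than `threshold` pixels, which this sketch treats as a click."""
    return math.dist(p1, p2) < threshold

print(detect_click((100, 100), (110, 110)))  # True  (~14 px apart)
print(detect_click((100, 100), (200, 200)))  # False (~141 px apart)
```

Left versus right click, double click, and scrolling can then be distinguished by which tapes pinch and for how long.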
Computer graphics involves using computers to generate and manipulate visual and spatial data. It has various applications including computer-aided design, presentation graphics, education and training, visualization, image processing, entertainment, medical imaging, and graphical user interfaces. The key advantages of computer graphics are its ability to produce high quality visualizations and animations that can effectively communicate information.
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera (IJERD Editor)
Gesture gaming is a method by which users with a laptop, PC, or Xbox play games using natural or bodily gestures. This paper presents a way of playing free flash games on the internet using an ordinary webcam with the help of open source technologies. In human activity recognition, the emphasis is on pose estimation and the consistency of the player's pose, estimated with an ordinary web camera at resolutions ranging from VGA to 20 MP. The system first shows the user a 10-second documentary on how to play a particular game using gestures and which gestures can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short documentary finishes. The system then opens the concerned game on popular flash game sites such as Miniclip, Games Arcade, or GameStop, loads it by clicking at the appropriate places, and brings it to the state where the user only needs to perform gestures to start playing. At any point the user can call off the game by hitting the Esc key, whereupon the program releases all controls and returns to the desktop. The results obtained with an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free flash games on the net. Effective in-game advertising could therefore also be achieved, offering disruptive growth to advertising firms.
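The RGB calibration step described above, averaging the pixels the user holds inside the red box, can be sketched as follows; the function name and box convention are assumptions, not the paper's code:

```python
def calibrate_rgb(frame, x0, y0, x1, y1):
    """Average the RGB values inside the calibration box
    [x0, x1) x [y0, y1) of a frame stored as frame[y][x] = (r, g, b)."""
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

frame = [[(10, 20, 30), (30, 40, 50)],
         [(50, 60, 70), (70, 80, 90)]]
print(calibrate_rgb(frame, 0, 0, 2, 2))  # (40.0, 50.0, 60.0)
```

The averaged triple then seeds the color thresholds used to track the gesture component for the rest of the session.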
This document contains questions and answers about computer graphics. It begins by defining computer graphics as pictures and movies created using computers, usually referring to image data created with specialized graphics hardware and software. Applications of computer graphics mentioned include computer-aided design, presentation graphics, computer art, entertainment, education and training, visualization, image processing, and graphical user interfaces. Key terms like pixel, resolution, aspect ratio, and persistence are also defined. The document then discusses video display devices and CRTs, and explains raster scan and random scan display systems. Color CRTs using beam penetration and shadow mask techniques are also covered.
Computer graphics refers to creating and manipulating pictures and drawings using a computer. There are two main types: passive graphics which have no interaction and active graphics which allow two-way communication and interaction between the user and hardware. Computer graphics has many applications including user interfaces, scientific visualization, animation, computer aided design, presentation graphics, image processing, and education/training.
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe... (IJERA Editor)
Gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. Gesture is one of the human body languages popularly used in daily life: a communication system of hand movements and facial expressions, communicating through actions and sights. This research focuses on gesture extraction and finger segmentation in gesture recognition. Image analysis technologies are used to build an application, coded in MATLAB, that segments and extracts the finger from one specific gesture. The paper aims to test gesture recognition under different natural conditions (dark and glare conditions, different distances, and similar objects) and to collect the results to calculate the successful extraction rate.
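A brightness-tolerant segmentation rule of the kind needed for dark and glare conditions can be sketched by thresholding chromaticity (each channel divided by the pixel's total brightness) instead of raw RGB. This is an illustrative Python sketch, not the paper's MATLAB code, and the thresholds are assumptions:

```python
def skin_mask(frame, r_lo=0.4, g_hi=0.35):
    """Binary mask from normalized chromaticity: dividing by the
    pixel's total brightness makes the rule tolerant of dark and
    glare conditions. Thresholds are illustrative."""
    mask = []
    for row in frame:
        out = []
        for (r, g, b) in row:
            s = r + g + b
            if s == 0:
                out.append(0)  # pure black: no chromaticity defined
                continue
            rn, gn = r / s, g / s
            out.append(1 if rn >= r_lo and gn <= g_hi else 0)
        mask.append(out)
    return mask

# The same skin tone, bright and dim, maps to the same chromaticity:
bright = [[(200, 120, 80)]]
dim    = [[(50, 30, 20)]]
print(skin_mask(bright), skin_mask(dim))  # [[1]] [[1]]
```

Counting the trials where the extracted mask isolates the finger then gives the successful extraction rate the paper reports.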
The document provides an overview of an introduction to computer graphics course. It discusses topics that will be covered like the history and applications of computer graphics, hardware concepts, 2D and 3D algorithms, modeling curves and 3D objects, animation, and textbooks. It also defines computer graphics and compares image processing versus computer graphics.
This document provides an overview of various types of 3D computer graphics software, including proprietary software like 3ds Max and Cinema 4D, free software like Blender and FreeCAD, and freeware packages like Sculptris and SketchUp. It also discusses renderers, related software for tasks like match moving and mesh processing, and resources for finding more information.
The document analyzes and designs a multi-cell post-tensioned pre-stressed concrete box girder bridge with a 35m span. Two different duct materials, HDPE and corrugated bright metal, are considered to determine the most economical design. Finite element modeling and analysis of the box girder is performed using CSI Bridge software. The design is done according to Indian code specifications, considering aspects such as section properties, load calculations, stress limits, prestressing calculations and loss estimates, and serviceability checks. Results for bending moments, shear forces, displacements and stresses are obtained and compared for both duct options.
Comparison of an Isolated bidirectional Dc-Dc converter with and without a Fl... (IOSR Journals)
This document compares an isolated bidirectional DC-DC converter with and without a flyback snubber through simulation and hardware implementation. It begins with an introduction to isolated bidirectional converters and the problem of voltage spikes caused by transformer leakage inductance. It then describes the operation and components of the converter both with and without a flyback snubber. Simulation results show that the flyback snubber reduces voltage spikes by 78-80% by clamping the voltage. Hardware results for boost mode operation with a flyback snubber are also presented and agree with simulation.
This document discusses performance analysis of Hadoop applications on heterogeneous systems. It analyzes the throughput and processing rate of Hadoop jobs with variable sized input files on Intel Core 2 Duo and Intel Core i3 systems. The experiments show that throughput increases with larger single input files compared to multiple smaller files of the same total size. Processing records was also faster on the Intel Core i3 system compared to the Intel Core 2 Duo system. Ensuring proper data quality testing and tuning Hadoop parameters can help optimize performance.
This document is a lecture outline for an introduction to computer graphics course. It outlines the course information and administrative details, provides an overview of topics to be covered including graphics systems, techniques, operations and a mathematical review. It also defines computer graphics, discusses image processing and analysis, and explains why computer graphics is an important field due to advances in computing power, visualization, and interaction capabilities.
1. The document introduces computer graphics and its various applications and elements. It discusses how computer graphics involves the display, manipulation, and storage of images and data for visualization using computers.
2. Key elements of computer graphics include raster/bitmap graphics which use a grid of pixels and vector graphics which use mathematical formulas to define shapes. Common display devices include CRT, LCD, and plasma displays.
3. Applications of computer graphics include CAD, presentation graphics, computer art, entertainment, education and training, visualization, image processing, and graphical user interfaces.
This document provides information about a Computer Graphics course taught by Prof. Sonal Badhe. It includes the course syllabus, list of experiments for the lab component, and information about term work requirements. The syllabus covers topics like 2D and 3D graphics, geometric transformations, illumination models, and visible surface detection. The lab experiments include implementing basic graphics primitives, curves, transformations, and clipping algorithms. Students are required to complete 12 experiments along with assignments and a mini-project for their term work evaluation.
This document provides an overview of computer graphics and its applications. It defines computer graphics as the creation and manipulation of geometric objects and images through computer programs. Some key applications discussed include:
- Computer-aided design (CAD) for modeling objects like buildings, vehicles, and products.
- Presentation graphics for illustrating data through charts, graphs, and diagrams.
- Entertainment applications like movies, games, and animation through modeling and rendering techniques.
- Education and training through interactive models of systems.
- Visualization of scientific and business data.
Additional applications covered are computer art, image processing, and graphical user interfaces.
Panorama Technique for 3D Animation movie, Design and EvaluatingIOSRjournaljce
This paper presents an applied approach for Panorama 3D movies enhanced with visual sound effects. The case study that considered is IIPS@UOIT. Many selected S/W have been used to introduce the 3D Movie. 3D Animation is a modern technology in the field of the world of filmmaking and is considered the core of multimedia, where the vast majority of movies such as Hollywood movies that we see today, it was using 3D technology. Where this technique is used in all the magazines, such as medical experiments, engineering, astronomy, planets and stars, to prove scientific theories, history, geography, etc., where they are building models or scenes or characters simulates reality and the movement of the viewer to the heart of the event. A three-dimensional film was made to (IIPS @ UOITC) to give it a future vision and published in the global sites such as YouTube, Facebook and Google earth. By using many specialized 3d software and cinematic tricks, with a focusing on movement, characters, lighting, cameras and final render.
This document describes a Pong-inspired interactive art installation called Pong Sketch Two Project. Users can interact with a digitally projected bouncing ball using their full body motions, which are tracked via video camera. Their silhouettes and the ball are projected on a screen. Users can pass the ball back and forth or trap it in different ways. The installation aims to create an engaging interactive experience through whole-body gestures without restrictions of wires or hardware.
IRJET- 3D Drawing with Augmented RealityIRJET Journal
This document summarizes research on developing an augmented reality application called 3D Drawing with Augmented Reality. The application allows multiple users to draw in 3D space and see each other's drawings in real-time. It treats the real world as a canvas where users can draw lines and doodles in thin air. The application was created using Unity 3D and ARCore to enable 3D drawing on mobile devices. Related work on SLAM, human-computer interaction, game lobbies, and bare-handed 3D drawing techniques were also reviewed to inform the application's design. The methodology section describes how the collaborative AR application was developed to allow real-time 3D drawing between users.
01 introduction image processing analysis (Rumah Belajar)
This document provides an introduction to image processing and analysis. It discusses image acquisition, pre-processing techniques like image transforms and enhancement, and applications of image processing. Image transforms like the discrete Fourier transform and discrete cosine transform are used to represent images in different domains. Image enhancement techniques accentuate features to make images more useful for display and analysis. Common techniques include adjusting histograms, using median filters, and performing operations in transform domains.
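As an illustration of the histogram-adjustment technique mentioned above, histogram equalization can be sketched in a few lines (a minimal NumPy sketch, not code from the document; the function name and the 8-bit grayscale assumption are ours):

```python
import numpy as np

def equalize_histogram(image):
    """Histogram equalization for an 8-bit grayscale image.

    Maps each pixel through the normalized cumulative histogram so
    that intensities spread over the full 0-255 range.
    """
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Classic equalization formula: scale the CDF to [0, 255].
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = lut.clip(0, 255).astype(np.uint8)
    return lut[image]

# A low-contrast image confined to intensities [100, 120] ...
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
# ... is stretched toward the full intensity range.
```

The same lookup-table pattern extends to other point operations such as contrast stretching.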
This document provides an overview of computer graphics programming and OpenGL. It discusses common uses of computer graphics such as games, simulations, and medical imaging. It then explains core graphics concepts like the graphics pipeline, 2D and 3D coordinates, projections, and RGB color space. The document also covers OpenGL specifications, functions, data types, and related APIs.
Computer animation involves creating animation sequences through object definition, path specification, key frames, and in-betweening. There are two main methods for displaying animation sequences: raster animation and color-table animation. Raster animation involves copying frames from memory to the display very quickly, while color-table animation uses a color lookup table to convert logical color numbers in each pixel to physical colors. The document discusses techniques for designing animation sequences like storyboarding, defining objects and paths, specifying key frames, and generating in-between frames. It also covers topics like motion specification using direct motion, goal-directed systems, kinematics, dynamics, and inverse kinematics. Morphing and tweening are introduced as techniques for warping one image into another.
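The in-betweening step described above amounts to interpolating between key frames; a minimal sketch, assuming plain linear interpolation of matching control points (function name and frame shapes are illustrative):

```python
import numpy as np

def inbetween(key_a, key_b, n_frames):
    """Generate in-between frames by linearly interpolating
    corresponding control points of two key frames."""
    key_a, key_b = np.asarray(key_a, float), np.asarray(key_b, float)
    ts = np.linspace(0.0, 1.0, n_frames)
    # Each t blends the two key frames; t=0 gives key_a, t=1 gives key_b.
    return [(1 - t) * key_a + t * key_b for t in ts]

# Two key frames, each holding two control-point positions.
frames = inbetween([[0, 0], [1, 0]], [[4, 4], [5, 4]], 5)
```

Production systems interpolate along specified paths with easing curves rather than plain linear blends, but the keyframe-plus-tween structure is the same.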
From Image Processing To Computer Vision (Joud Khattab)
This document provides an overview of digital image processing and computer vision. It defines digital images and describes different image types including binary, grayscale, and color images. The document outlines common digital image processing steps such as acquisition, enhancement, restoration, compression, segmentation, representation and description. It also discusses applications of computer vision such as scene completion, object detection and recognition tasks. In summary, the document serves as an introduction to digital image processing and computer vision concepts.
Mouse Simulation Using Two Coloured Tapes (ijistjournal)
In this paper, we present a novel approach for Human Computer Interaction (HCI) in which we control cursor movement using a real-time camera. Current methods involve changing mouse parts, such as adding more buttons or changing the position of the tracking ball. Instead, our method uses a camera and computer vision technology, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling), and we show how it can perform everything that current mouse devices can.
The software will be developed in the Java language. Recognition and pose estimation in this system are user-independent and robust, as we use colour tapes on our fingers to perform actions. The software can be used as an intuitive input interface for applications that require multi-dimensional control, e.g. computer games.
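The colour-tape idea above can be sketched as simple channel thresholding followed by a centroid computation (an illustrative Python/NumPy sketch; the threshold values and the centroid-as-cursor mapping are our assumptions, not details from the paper, which targets Java):

```python
import numpy as np

def track_marker(frame_rgb, r_min=150, g_max=80, b_max=80):
    """Locate a red colour tape in an RGB frame by channel
    thresholding and return the centroid of the matching pixels
    (or None when the marker is not visible).

    Thresholds here are illustrative, not taken from the paper.
    """
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    mask = (r >= r_min) & (g <= g_max) & (b <= b_max)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # The marker centroid would drive the cursor position.
    return xs.mean(), ys.mean()

# Synthetic 100x100 frame with an 11x11 red patch centred at (30, 40).
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[35:46, 25:36] = (255, 0, 0)
pos = track_marker(frame)
```

Real systems threshold in HSV rather than RGB to tolerate lighting changes, and smooth the centroid over frames before moving the cursor.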
Computer graphics involves using computers to generate and manipulate visual and spatial data. It has various applications including computer-aided design, presentation graphics, education and training, visualization, image processing, entertainment, medical imaging, and graphical user interfaces. The key advantages of computer graphics are its ability to produce high quality visualizations and animations that can effectively communicate information.
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera (IJERD Editor)
Gesture gaming is a method by which users with a laptop/PC/Xbox play games using natural or bodily gestures. This paper presents a way of playing free flash games on the internet using an ordinary webcam with the help of open-source technologies. Emphasis in human activity recognition is given to pose estimation and the consistency of the player's pose. These are estimated with the help of an ordinary web camera at resolutions from VGA to 20 MP. Our work involved showing a 10-second documentary to the user on how to play a particular game using gestures and what kinds of gestures can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short documentary finishes. The system then opens the concerned game on popular flash game sites like Miniclip, Games Arcade, GameStop, etc., loads it by clicking at various places, and brings it to the state where the user need only perform gestures to start playing. At any point the user can call off the game by hitting the Esc key, and the program will release all of the controls and return to the desktop. The results obtained using an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free flash games on the net. Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
This document contains questions and answers about computer graphics. It begins by defining computer graphics as pictures and movies created using computers, usually referring to image data created with specialized graphics hardware and software. Applications of computer graphics mentioned include computer-aided design, presentation graphics, computer art, entertainment, education and training, visualization, image processing, and graphical user interfaces. Key terms like pixel, resolution, aspect ratio, and persistence are also defined. The document then discusses video display devices and CRTs, and explains raster scan and random scan display systems. Color CRTs using beam penetration and shadow mask techniques are also covered.
Computer graphics refers to creating and manipulating pictures and drawings using a computer. There are two main types: passive graphics which have no interaction and active graphics which allow two-way communication and interaction between the user and hardware. Computer graphics has many applications including user interfaces, scientific visualization, animation, computer aided design, presentation graphics, image processing, and education/training.
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe... (IJERA Editor)
Gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. Gesture is a form of human body language popularly used in our daily life: a communication system consisting of hand movements and facial expressions, communicating through actions and sights. This research mainly focuses on gesture extraction and finger segmentation in gesture recognition. In this paper, we have used image analysis technologies to create an application coded in MATLAB. We use this application to segment and extract the finger from one specific gesture. The paper aims to evaluate gesture recognition in different natural conditions, like dark and glare conditions, different distances, and similar-object conditions, and then collect the results to calculate the successful extraction rate.
The document provides an overview of an introduction to computer graphics course. It discusses topics that will be covered like the history and applications of computer graphics, hardware concepts, 2D and 3D algorithms, modeling curves and 3D objects, animation, and textbooks. It also defines computer graphics and compares image processing versus computer graphics.
This document provides an overview of various types of 3D computer graphics software, including proprietary software like 3ds Max and Cinema 4D, free software like Blender and FreeCAD, and freeware packages like Sculptris and SketchUp. It also discusses renderers, related software for tasks like match moving and mesh processing, and resources for finding more information.
The document analyzes and designs a multi-cell post-tensioned pre-stressed concrete box girder bridge with a 35m span. Two different duct materials, HDPE and corrugated bright metal, are considered to determine the most economical design. Finite element modeling and analysis of the box girder is performed using CSI Bridge software. The design is done according to Indian code specifications, considering aspects such as section properties, load calculations, stress limits, prestressing calculations and loss estimates, and serviceability checks. Results for bending moments, shear forces, displacements and stresses are obtained and compared for both duct options.
Comparison of an Isolated bidirectional Dc-Dc converter with and without a Fl... (IOSR Journals)
This document compares an isolated bidirectional DC-DC converter with and without a flyback snubber through simulation and hardware implementation. It begins with an introduction to isolated bidirectional converters and the problem of voltage spikes caused by transformer leakage inductance. It then describes the operation and components of the converter both with and without a flyback snubber. Simulation results show that the flyback snubber reduces voltage spikes by 78-80% by clamping the voltage. Hardware results for boost mode operation with a flyback snubber are also presented and agree with simulation.
This document discusses performance analysis of Hadoop applications on heterogeneous systems. It analyzes the throughput and processing rate of Hadoop jobs with variable sized input files on Intel Core 2 Duo and Intel Core i3 systems. The experiments show that throughput increases with larger single input files compared to multiple smaller files of the same total size. Processing records was also faster on the Intel Core i3 system compared to the Intel Core 2 Duo system. Ensuring proper data quality testing and tuning Hadoop parameters can help optimize performance.
This document discusses how SNMP protocol is implemented on HP routers with OVPI (OpenView Performance Insight) for network management. It describes the three main components of SNMP - managed devices, agents, and NMS. It then explains how OVPI consists of NNM for network discovery and monitoring, and uses data collected by NNM to generate reports through OVPI. Examples of specific report packages are provided, such as Cisco ping reports and device resource reports, showing how they display performance metrics over time for network troubleshooting and optimization.
This document describes a hybrid system approach to modeling direct torque control of an induction motor coupled to an inverter. It first models the direct torque control as a hybrid system, with a discrete event dynamics defined by the inverter voltage vectors, and continuous dynamics defined by the stator flux vector and electromagnetic torque. It then abstracts the continuous dynamics in terms of discrete events representing the flux and torque entering/exiting a working point region, and the passage of the flux vector between zones. Finally, it uses supervisory control theory to drive the inverter-motor system to a desired working point by controlling the discrete event dynamics.
1) The document discusses techniques for reducing the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals, including repeated clipping and filtering (RCF) and the nonlinear companding transform (NCT).
2) RCF projects clipping noise into the feasible extension area while removing out-of-band interference through filtering, but suffers from problems like in-band distortion, peak re-growth, and out-of-band radiation.
3) NCT aims to reduce PAPR by compressing peak signals and expanding small signals while maintaining constant average power through optimal parameter selection. The proposed NCT technique outperforms RCF in terms of lower clipping ratio,
The researchers used strain gauges to experimentally determine the drag coefficient of a scale model Toyota car. Tests were conducted in a subsonic wind tunnel from 21.17 to 33 m/s. Drag coefficients were obtained ranging from 1.10 to 0.53, decreasing about 50% over the speed range tested. Flow visualization showed recirculating vortices at the rear that influence drag. Measurement errors for velocity, drag force, and drag coefficient decreased with increasing air speed.
This document discusses the magneto static analysis of a magneto rheological (MR) fluid clutch. It begins with background on MR fluids and how their viscosity increases dramatically in the presence of a magnetic field. It then describes the design of the MR fluid clutch, which contains multiple plates, an electromagnet, and housing. Finite element analysis using ANSYS was performed to analyze the magnetic field density generated by the magnetic coil in the clutch armature. The analysis was run for input currents ranging from 0.2 to 2 amps. The results provide the magnetic flux density distribution in the MR fluid, casing, and plates, which can then be used to calculate the yield stress and effective torque of the clutch.
Educational Process Mining - Different Perspectives (IOSR Journals)
Process mining methods have in recent years enabled the development of more sophisticated process models which represent and detect a broader range of student behaviors than was previously possible. This paper summarizes key process mining perspectives that have supported student modeling efforts, discussing also the specific constructs that have been modeled using educational process mining and key upcoming directions needed for educational process mining research to reach its full potential. Process mining aims to discover, monitor, and improve real processes by extracting knowledge from event logs readily available in today's information systems. This paper is designed to give a view of the capabilities of process mining techniques in the context of a higher education system, which involves administrative and academic tasks such as enrolment of students in a particular course, alienation of the traditional classroom teaching model, detection of unfair means used in online examinations, detection of abnormal values in students' result sheets, prediction of students' performance, and identification of dropouts and of students who need special attention, allowing the teacher to provide appropriate advising/counseling, and so on.
This document summarizes a study on liquefaction analysis of the Kakinada region in India using geotechnical borehole data. The study aims to determine the factor of safety against liquefaction for the region using standard penetration test (SPT) data. Deterministic liquefaction analysis is performed using SPT-based methods to calculate the factor of safety, which is the ratio of cyclic resistance ratio to cyclic stress ratio. Reliability analysis is also conducted considering uncertainties in models and parameters. Key findings from selected boreholes include the soil profile period, peak ground acceleration, and ground response spectrum at the surface.
This document describes a system for classifying gender using both face images and voice samples. It analyzes features extracted from preprocessed voice and image data using autocorrelation and Fisherface algorithms, respectively. The results from each modality are then integrated to increase classification accuracy. Experimental results showed the voice-based system was more robust to variations, but combining modalities yielded a system that leveraged the strengths of each approach and improved performance under noisy conditions.
The document describes a study that used response surface methodology to develop a mathematical model for predicting brake specific fuel consumption (BSFC) in a single cylinder diesel engine fueled with soybean biodiesel blends. Experiments were conducted varying injection pressure, compression ratio, load, and percentage of biodiesel. The results were analyzed using ANOVA and central composite design to obtain the optimal operating conditions. The model showed load had the most significant effect on BSFC, with higher loads resulting in lower BSFC. The mathematical model achieved good accuracy in predicting BSFC with an adjusted R-squared value of 96.78%.
This document summarizes research using artificial neural networks to forecast the output power performance of a solar thermal lag Stirling engine. Input parameters like angular velocity, temperature, and tank volumes were used to train neural networks. The best network structure had inputs, hidden, and output layers. It was trained on 572 data points and showed high accuracy in predicting engine performance based on validation metrics. Graphs showed the neural network could successfully predict variables like gas temperature, tank volume, and output power under different operating conditions. The research demonstrated artificial neural networks are a useful tool for simulating Stirling engine performance without complex modeling equations.
Group Search Optimization technique is used to minimize reactive power generation in a power system. The objective is to control generator voltages to reduce reactive power production while meeting system constraints. IEEE 14-bus test system is used with 4 generator buses. Generator voltages are optimized using Group Search Optimization, which finds the minimum reactive power of 5.26 MVAR after 26 iterations. Reactive power and line losses are reduced compared to the base case, showing the effectiveness of the technique in minimizing reactive power generation.
This document discusses using a back propagation neural network (BPNN) to predict carbon monoxide (CO) emissions from a diesel engine. It begins by providing background on BPNN and how it was applied in this study. Experimental data on engine parameters and CO emissions were collected from tests. The data were divided into training and testing sets to train and validate the BPNN. Different combinations of engine parameters were used as inputs to the BPNN in various "strategies" to determine the best parameters for accurately predicting CO emissions. The BPNN architecture and training parameters were optimized to minimize error between predicted and actual CO emissions. The goal was to develop a method for predicting emissions to better control engine parameters and reduce pollution.
This document discusses challenging issues and similarity measures for web document clustering. It begins with an introduction to text mining and document clustering. Some key challenges discussed include ambiguity in natural language, efficiently measuring semantic similarity between words, and cluster validity. Various string-based, term-based, and corpus-based similarity measures are then described that can be used for document clustering, including Jaro-Winkler distance, cosine similarity, latent semantic analysis, and pointwise mutual information. The conclusion states that accurate clustering requires a precise definition of similarity between document pairs.
1. The document analyzes the nutritional composition, physicochemical properties, and short-term toxicological effects of beniseed (Sesamum indicum) oil in albino rats.
2. The beniseed was found to contain moderate levels of protein, fat, and carbohydrates, making it a good source of nutrients. Its oil had properties suggesting potential industrial and dietary uses.
3. Rats fed a diet with 5% beniseed oil showed increased weight gain and feed consumption compared to controls, with no significant differences in organ weights or blood parameters, indicating no apparent toxic effects of short-term consumption.
This document proposes using a Markov chain model and bipartite graphing to efficiently schedule spectrum in cognitive radio networks. It models the cognitive radio network as a k-connected bipartite graph and uses a Markov chain to represent the state transitions of channels between idle and busy. It then applies the Banker's algorithm to the modeled cognitive radio network to allocate spectrum to users while avoiding deadlock. The proposed approach indicates it could improve spectrum scheduling and allocation performance in cognitive radio networks.
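The idle/busy channel model mentioned above can be sketched as a two-state Markov chain whose stationary distribution gives the long-run idle fraction (an illustrative NumPy sketch; the transition probabilities are made up, not taken from the paper):

```python
import numpy as np

# Two-state (idle/busy) channel model: P[i, j] is the probability of
# moving from state i to state j; each row sums to 1.
P = np.array([[0.9, 0.1],    # idle -> idle / busy
              [0.4, 0.6]])   # busy -> idle / busy

def stationary(P):
    """Stationary distribution pi with pi @ P = pi, taken from the
    left eigenvector of P associated with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = vecs[:, np.argmin(np.abs(vals - 1.0))].real
    return v / v.sum()

pi = stationary(P)  # long-run fraction of time the channel is idle/busy
```

A scheduler could prefer channels whose stationary idle probability is high, before applying a deadlock-avoidance step such as the Banker's algorithm described in the abstract.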
This document analyzes the cryptographic security of an efficient unlinkable secret handshakes scheme proposed by Ryu, Yoo, and Ha. It summarizes the scheme and its goals of providing authentication and key agreement between group members while preserving unlinkability. The document then shows that the scheme is vulnerable to a key-compromise impersonation attack, allowing an adversary who learns a user's private key to impersonate other users within the same group. This violates an important security property for key agreement protocols. In conclusion, while the scheme aims to provide strong anonymity, it fails to achieve key-compromise impersonation resilience.
This document discusses soil-structure interaction calculations for rigid hydraulic structures using the finite element method (FEM). It examines two common computational models: the Winkler and Pasternak models. The Winkler model represents soil behavior with independent vertical springs, while the Pasternak model adds a shear layer between the soil and structure. Equations are provided for deriving the stiffness matrix of a beam foundation considering each model. The influence of these models on displacements and developed stresses of rigid structures is evaluated through FEM calculations.
Similar to A Novel approach for Graphical User Interface development and real time Object and Face Tracking using Image Processing and Computer Vision Techniques implemented in MATLAB
This document summarizes a research paper on developing a hand gesture recognition system using computer vision techniques. The system uses a webcam to capture videos of the user's hands with colored markers. It then analyzes the frames to detect the markers and track hand movements in real-time. Gestures are interpreted to control mouse functions like positioning and clicking, as well as keyboard inputs. The system was created with Python and OpenCV to provide an affordable, effective alternative human-computer interface with minimal hardware requirements. It demonstrates how computer vision can be used to translate hand gestures into digital commands.
Color based image processing, tracking and automation using MATLAB (Kamal Pradhan)
Image processing is a form of signal processing in which the input is an image, such as a photograph or video frame. The output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. This project aims at processing real-time images captured by a webcam for motion detection, color recognition, and system automation using MATLAB programming.
In color-based image processing we work with colors instead of objects. Color provides powerful information for object recognition. A simple and effective recognition scheme is to represent and match images on the basis of color histograms.
Tracking refers to detection of the path of the color: once the color-based processing is done, the color becomes the object to be tracked, which can be very helpful for security purposes.
An automated system is any system that does not require human intervention. In this project I have automated the mouse to work with our gestures and perform the desired tasks.
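The colour-histogram matching scheme mentioned in this abstract can be sketched as histogram intersection (a minimal NumPy illustration; the bin count and the Swain-Ballard intersection measure are our assumptions, not details from the project):

```python
import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram of an 8-bit image, normalized to sum to 1."""
    quantized = image // (256 // bins)  # map 0-255 down to 0..bins-1
    idx = (quantized[..., 0].astype(int) * bins
           + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Swain-Ballard intersection: 1.0 for identical histograms."""
    return np.minimum(h1, h2).sum()

red = np.zeros((32, 32, 3), dtype=np.uint8);  red[..., 0] = 200
blue = np.zeros((32, 32, 3), dtype=np.uint8); blue[..., 2] = 200
score_same = histogram_intersection(color_histogram(red), color_histogram(red))
score_diff = histogram_intersection(color_histogram(red), color_histogram(blue))
```

Because the histogram discards spatial layout, matching is cheap and tolerant of object pose, which is why it suits real-time colour tracking.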
This document provides an overview of computer graphics. It discusses interactive graphics where the user has control over the image and passive graphics where the image is produced automatically. Interactive graphics allow for advantages like more efficient communication and understanding of data through dynamic and user-controlled visualization. The document also describes how an interactive graphics display works with components like a frame buffer and display controller that outputs images to a monitor.
IRJET- 3-D Face Image Identification from Video Streaming using Map Reduc... (IRJET Journal)
This document summarizes a research paper that proposes using 3D face recognition techniques and Hadoop for large-scale face identification from video streams. It describes extracting faces from video frames, representing faces as 3D models with 15 distinguishing features, and using Hadoop for parallel processing to enable fast matching of input faces against a large database of faces. The Hadoop implementation includes map and reduce processes to distribute the face matching computations across multiple servers for improved performance on large datasets.
Mobile Programming - 8 Progress Bar, Draggable Music Knob, Timer (AndiNurkholis1)
Material for this slide includes:
1. Description of progress bars and their types
2. Description of draggable music knobs with examples
3. Description of timers with examples
A SOFTWARE TOOL FOR EXPERIMENTAL STUDY LEAP MOTION (ijcsit)
This paper aims to present a computer application that illustrates the Leap Motion controller's abilities. Leap Motion is a peripheral and software for PCs that enables control through a natural user interface based on gestures. The publication also describes how the controller works and its main advantages and disadvantages. Some apps using the Leap Motion controller are also discussed.
Face Recognition Based on Image Processing in an Advanced Robotic System (IRJET Journal)
This document describes a face recognition system used to control a robotic system. The system works in two stages: first, face recognition is used to unlock the system by validating a user's face. Then, different navigation images are used to control the robot's motion. Face recognition is implemented using support vector machine (SVM), histogram of oriented gradients (HOG), and k-nearest neighbors (KNN) algorithms in MATLAB. The process is based on machine learning concepts where the system is trained in a supervised manner to recognize faces and control the robot.
UI2code: A Neural Machine Translator to Bootstrap Mobile GUI Implementation (Chunyang Chen)
Paper accepted at ICSE'18 (International Conference on Software Engineering). A GUI skeleton is the starting point for implementing a UI design image. To obtain a GUI skeleton from a UI design image, developers have to visually understand UI elements and their spatial layout in the image, and then translate this understanding into proper GUI components and their compositions. Automating this visual understanding and translation would be beneficial for bootstrapping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of the GUI skeletons to generate. Existing tools are rigid as they depend on heuristically designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation for translating a UI design image into a GUI skeleton. Our translator learns to extract visual features in UI images, encode these features' spatial layouts, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. For training our translator, we develop an automated GUI exploration method to automatically collect large-scale UI data from real-world applications. We carry out extensive experiments to evaluate the accuracy, generality and usefulness of our approach.
IRJET- A Smart Personal AI Assistant for Visually Impaired People: A Survey (IRJET Journal)
This document summarizes a research paper that surveys potential solutions for developing a smart personal AI assistant to help visually impaired people. It discusses using technologies like artificial intelligence, voice recognition, image recognition and text recognition through an Android application. The application could assist users by recognizing surroundings using images, responding to voice commands, and providing text recognition to read text aloud. The paper reviews related works that used technologies like object detection, neural networks and Google's Vision and Dialogflow APIs. It proposes an application with modules for image recognition, speech recognition, interaction with a chatbot, and text recognition to help visually impaired people interact with their environment and carry out daily tasks.
The document provides instructions on using various computer art applications like Photoshop, Snapseed, and Stencyl to create digital artworks and videos. It describes the tools and features of different programs and includes activities for students to make collages and manipulated photos. The goal is for students to learn digital art skills through hands-on experience with application tools and sharing their creations online.
06108870 analytical study of parallel and distributed image processing 2011 (Kiran Verma)
This document summarizes an analytical study of parallel and distributed image processing. It discusses how parallelism can be applied to image processing through data parallelism, task parallelism, and pipeline parallelism. It also describes common architectures for parallel processing like using parallel hardware or distributed computing. Finally, it analyzes tools and technologies used for parallel image processing like MPI, OpenMP, and CUDA and discusses application domains and ongoing research areas in this field.
IRJET- Full Body Motion Detection and Surveillance System Application (IRJET Journal)
1) The document discusses a system for real-time full-body motion detection and surveillance using computer vision techniques.
2) It involves comparing video frames over time to detect motion by treating videos as stacks of frames and looking for differences between frames.
3) The goal is to track body motion in real time using OpenCV for applications like surveillance systems, pose estimation, and other filters.
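The frame-differencing idea in point 2 can be sketched without OpenCV (an illustrative NumPy sketch; the threshold value and frame sizes are our assumptions):

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Frame differencing: mark pixels whose grayscale intensity
    changed by more than `threshold` between consecutive frames."""
    # Widen to int16 so the subtraction cannot wrap around uint8.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

prev_f = np.full((48, 64), 30, dtype=np.uint8)  # static background
curr_f = prev_f.copy()
curr_f[10:20, 20:30] = 200                      # a bright object appears
mask = motion_mask(prev_f, curr_f)              # True where motion occurred
```

Surveillance pipelines typically follow this mask with morphological filtering and connected-component analysis to turn raw changed pixels into tracked regions.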
The document summarizes the HeartCheck app, which monitors heart rate data from Mi Band fitness trackers. It provides information on the app's beta testing process, graphical user interface, and key features. The UI follows Material Design principles and supports gestures. It allows initialization of heart rate zones and displays real-time heart rate data and statistics in tabs and graphs. The app supports both Mi Band 2 and 1s devices, using different authentication processes and synchronization of heart rate data. Future plans include improving stability, adding additional metrics like steps and sleep, and using Bluetooth sniffing to obtain data without relying on GadgetBridge.
This document summarizes a research project on detecting hand gestures to control computer functions like a mouse. The project uses computer vision and image processing techniques on webcam footage to identify hand movements and assign them mouse input commands. Specifically, it detects hand contours and tracks finger movements to control cursor movement and functions like left/right clicks. The goal is to create an intuitive touchless mouse interface that is cheaper than existing motion sensing hardware. The methodology involves skin detection, contour extraction, hand tracking, and mapping recognized gestures to mouse controls.
This document presents a hand gesture controlled mouse system using machine learning. It has 4 modules: 1) Hand tracking to detect hand landmarks using a webcam, 2) Volume control using the distance between thumb and forefinger, 3) Virtual painting by drawing on screen, 4) Mouse control using index finger movements to move the cursor. The system was able to successfully track hand gestures and use them to control mouse functions and volume. Future applications could include use in education for interactive teaching and by people with disabilities. Some limitations are need for adequate lighting and inability to track multiple hands.
Accessing Operating System using Finger GestureIRJET Journal
This document describes a system for accessing an operating system using finger gestures captured by a webcam. The system aims to reduce costs compared to existing gesture recognition systems that use expensive sensors like Kinect. It uses image processing algorithms to detect hand gestures from webcam input, recognize gestures like number of fingers, and execute corresponding operating system commands. The system architecture first segments hand regions from background, then classifies skin pixels and detects colored tapes on fingers to identify gestures. It can open programs and navigate computer contents contactlessly using natural hand movements. The proposed system aims to provide an affordable alternative for human-computer interaction without external input devices like mice or keyboards.
This document summarizes a research project that aims to develop a virtual mouse system using hand gesture recognition with a webcam. The system analyzes hand contours extracted from the webcam footage to identify fingers and gestures. It then maps different gestures to mouse functions like cursor movement, left/right clicks, and scrolling. The researchers believe this could provide a more natural interface than physical mice and benefit users who have difficulty using mice. It describes the design and implementation of the gesture recognition system, which involves steps like color detection, contour extraction, and identifying fingers from convexity defects.
IRJET- Sixth Sense Technology in Image ProcessingIRJET Journal
This document describes sixth sense technology, which allows users to interact with digital information by using hand gestures that are detected by a camera. The technology was developed by Pranav Mistry to bridge the gap between the digital and physical worlds. It consists of a camera, projector, mirror, mobile phone, and colored markers on the fingers. The camera detects hand gestures and objects, and the projector displays related digital information onto physical surfaces. Pattern matching through image processing is used to recognize hand gestures and colors and trigger the appropriate responses from the sixth sense device. This technology has applications in areas like maps, drawing, calling, and photos.
Similar to A Novel approach for Graphical User Interface development and real time Object and Face Tracking using Image Processing and Computer Vision Techniques implemented in MATLAB (20)
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 15, Issue 5 (Nov. - Dec. 2013), PP 61-68
www.iosrjournals.org
A Novel approach for Graphical User Interface development and real time Object and Face Tracking using Image Processing and Computer Vision Techniques implemented in MATLAB
Satyajit Mahanta¹, Souvik Ghosal², Partha Das³, Aditi Datta⁴, Saurav Debnath⁵, Sauvik Das Gupta⁶
¹²³⁴⁵ West Bengal University of Technology, Kolkata, West Bengal, India
⁶ School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
Abstract: Image Processing is an advancement in the signal processing domain where the input is an image or a video and the output can be an image, a video, or data in some other form. With daily advances in human-computer interaction, image processing, along with computer vision, has become an integral part of any intelligent system. Both are largely based on the workings of the visual perception function of the human brain, and as such most of their algorithms are designed around this central concept. The major application of computer vision is in the field of intelligent machines and robotic vision, where machines are made intelligent by being able to sense their surroundings visually and optimize their behaviour accordingly. Through computer vision, robots are made capable of sensing their surroundings just like a human.
Index Terms: Image Processing, GUI, Object Tracking, Face Tracking, Edge Detection.
I. Introduction
In Image Processing and Computer Vision, the basic idea is to mimic the human visual perception powers and to find the mathematical models behind them [1] [2] [3]. Having achieved this, the next challenge is to implement these models in viable real-world systems and applications. In this paper, we implement object detection and tracking, along with face detection and tracking, in real time. We first start with detecting and tracking objects and gradually expand this system to controlling a real-world object, here the iRobot Create. Though multiple algorithms have been developed for detecting objects, we have decided to base our approach on colour-based segmentation: we detect an object and its centroid and, over the course of multiple frames, track it based on its colour. We then apply these findings to real-world systems (Arduino and iRobot Create), ending up with the ability to control them wirelessly. Finally, we develop the system further to allow complete control of the robot: as the user changes the tracked object's position, the robot moves simultaneously in that direction.
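The colour-based segmentation step described above can be sketched as follows. This is a minimal illustration rather than the paper's actual code; the channel thresholds are placeholder values chosen for a strongly red object, and the file name is hypothetical.

```matlab
% Sketch of colour-based object detection on a single video frame.
% Threshold values below are illustrative, not the paper's parameters.
frame = imread('frame.jpg');             % one frame from the camera
r = frame(:,:,1); g = frame(:,:,2); b = frame(:,:,3);

% Keep pixels that are strongly red and weakly green/blue
mask = (r > 150) & (g < 100) & (b < 100);
mask = bwareaopen(mask, 300);            % discard small noisy blobs

% Centroid of the largest remaining blob
stats = regionprops(mask, 'Centroid', 'Area');
if ~isempty(stats)
    [~, idx] = max([stats.Area]);
    centroid = stats(idx).Centroid;      % [x y], followed frame to frame
end
```

Repeating this on every frame and following the centroid over time yields the colour-based tracker; the centroid's displacement can then be translated into drive commands for the iRobot Create.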
We then moved on to Computer Vision, where we implemented face detection and tracking using the highly efficient Viola-Jones algorithm [4] [5]. For the tracking part, we first detected the face and then, based on that detection and the skin-tone information of the nose, which is the part of the face with the fewest background pixels, we applied the CAMShift algorithm [6] to track the face over time. After the initial findings, we expanded the system to detect and track multiple faces.
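The detect-then-track pipeline can be sketched with the Computer Vision Toolbox objects for Viola-Jones detection and histogram-based (CAMShift) tracking. This is a simplified sketch: the paper initialises tracking on the nose region, whereas for brevity this sketch initialises on the whole detected face box, and the webcam calls assume the MATLAB webcam support package is installed.

```matlab
% Sketch: detect a face with Viola-Jones, then track it with CAMShift.
detector = vision.CascadeObjectDetector();   % Viola-Jones face detector
tracker  = vision.HistogramBasedTracker();   % CAMShift-style tracker

cam   = webcam;                              % needs webcam support package
frame = snapshot(cam);
bbox  = step(detector, frame);               % one [x y w h] row per face

if ~isempty(bbox)
    % Initialise the tracker on the hue channel of the first face
    hsv = rgb2hsv(frame);
    initializeObject(tracker, hsv(:,:,1), bbox(1,:));

    for k = 1:200                            % track over subsequent frames
        frame = snapshot(cam);
        hsv   = rgb2hsv(frame);
        bbox  = step(tracker, hsv(:,:,1));   % CAMShift update
        frame = insertShape(frame, 'Rectangle', bbox);
        imshow(frame); drawnow;
    end
end
clear cam
```

Extending this to multiple faces amounts to keeping one tracker object per detected bounding box.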
Lastly, we have gathered and implemented all our findings and systems in a unified GUI [7], in order to give the user a graphical user interface for easy access to the advanced functions.
II. GUI for Manipulating Images
A GUI (Graphical User Interface) provides popular point-and-click control of a software application; it is an effective design environment because the user does not need to know the programming language in which the software was built in order to use it. Thus, a GUI is an interface that can be handled by practically anybody. In this article, we discuss a GUI which we have created in MATLAB's Graphical User Interface environment, and we also discuss briefly how to make such a GUI in MATLAB. This GUI is built around the editing of pictures. An overview of the GUI is given below, preceded by the procedure for making a GUI, in a nutshell.
2.1 Making a GUI in MATLAB – In MATLAB, there are broadly four ways to make a GUI, namely Basic Graphics, Animation, Handle Graphics Objects, and creation of a GUI using GUIDE. Here we demonstrate GUI creation using the 'guide' command, which opens a window from which a new GUI can be created or an existing GUI opened. Let us give a brief demonstration of making a new GUI.
Figure 1 below charts the various ways of making a GUI in MATLAB:
Figure 1: Basic ways to implement a GUI
Here, we are going to use the last of these, i.e. creating a GUI using GUIDE, to make a new one. There are some typical steps involved in the process of making a GUI. The diagram shown below demarcates the typical steps to be followed:
Figure 2: Basic steps to make a GUI using GUIDE in MATLAB
Now, let us illustrate what these steps really mean. The first step is "Designing the GUI": the user must have a clear mental picture of what he or she is going to design, and must start the design accordingly. Next comes "Laying out the GUI" [8] [9]: in this step the user creates a layout of the preferred design in the Layout Editor. Then comes "Programming the GUI": here the user writes the logic for each control in the MATLAB programming language. The last step is "Saving & Running the GUI". One thing to remember when making a GUI is that the design created in the Layout Editor is shown as the external interface; buttons, sliders, and many other controls can be incorporated into the design, and these are available at the left edge of the Layout Editor. The user of the GUI does not write any code to make a particular button active. Instead, the programming sits under the callback function of that button, so that when the button is pressed, MATLAB runs the program written under that function.
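The callback mechanism described above can be illustrated with a minimal GUIDE-style callback. The function, tag, and field names here are hypothetical, following GUIDE's auto-generated `<Tag>_Callback` naming convention; the body simply rotates whatever image was previously loaded into the GUI.

```matlab
% Hypothetical GUIDE-generated callback for a 'Rotate' push button.
% GUIDE names the function <Tag>_Callback; the code placed in its body
% runs whenever the button is pressed.
function rotateButton_Callback(hObject, eventdata, handles)
% hObject - handle to the button that was pressed
% handles - structure holding handles and data shared across the GUI
img = handles.currentImage;          % image stored earlier by a Load button
img = imrotate(img, 90);             % the operation this button performs
axes(handles.imageAxes);             % draw into the GUI's display axes
imshow(img);
handles.currentImage = img;          % keep the result for later callbacks
guidata(hObject, handles);           % persist the updated handles struct
```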
2.2 Details about the GUI – We have used several functions and operations in our design. Firstly, there are a few buttons (technically known as Push Buttons) at the top left corner of our design, namely 'Resize' (to resize an image to the user's desired resolution), 'Noise', 'Image adjust', 'Rotate', 'Blend', etc. The operations incorporated in our design include morphological operations, image segmentation, etc. A few of them are discussed here to give an idea of what these functions are and what they do. First, a screenshot of our design is shown below. In this design we show the tweaking of the Red, Green and Blue (technically known as RGB) values of a sample image (left), and the resulting image (right).
Figure 3: The GUI during a real-time RGB value tweaking operation on an image
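The RGB tweaking operation amounts to scaling each colour channel by a gain taken from a GUI slider. The following is a minimal sketch; the gain values and file name are placeholders, not taken from the paper.

```matlab
% Sketch of RGB tweaking: each slider supplies a gain that scales one
% colour channel. Gains below are example values, not the GUI's output.
img = im2double(imread('sample.jpg'));
rGain = 1.2; gGain = 0.8; bGain = 1.0;   % would come from the sliders

out = img;
out(:,:,1) = min(out(:,:,1) * rGain, 1); % scale and clip each channel
out(:,:,2) = min(out(:,:,2) * gGain, 1);
out(:,:,3) = min(out(:,:,3) * bGain, 1);

imshowpair(img, out, 'montage');         % original (left), result (right)
```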
3. A Novel approach for Graphical User Interface development and real time Object and Face Tracking
www.iosrjournals.org 63 | Page
Next is the 'Blend' push button. Clicking this button lets us overlay any two pictures, regardless of their
resolutions, so that they are merged together. First the user loads one picture into the GUI main frame; this is
assigned as the first image, onto which a second image will be blended. He/she can then press this button, and a
window will appear to load the second image. We have designed this function such that if the second image
does not match the dimensions of the first, it is first resized to the resolution of the first image and the merging
operation is then performed. Two distinct images and their blended form are shown below.
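The blend logic described above can be sketched as follows. The paper's implementation is in MATLAB; this is an illustrative Python version, and the equal-weight average and nearest-neighbour resize are our assumptions about the specifics:

```python
# Sketch of the Blend button's logic on single-channel images stored as 2-D
# lists: resize the second image to the first image's resolution with
# nearest-neighbour sampling, then average the two pixel by pixel.

def resize_nearest(img, rows, cols):
    """Resize a 2-D list of pixel values to rows x cols (nearest neighbour)."""
    src_rows, src_cols = len(img), len(img[0])
    return [[img[r * src_rows // rows][c * src_cols // cols]
             for c in range(cols)] for r in range(rows)]

def blend(img1, img2, alpha=0.5):
    """Blend img2 onto img1, resizing img2 first if the dimensions differ."""
    rows, cols = len(img1), len(img1[0])
    if (len(img2), len(img2[0])) != (rows, cols):
        img2 = resize_nearest(img2, rows, cols)
    return [[alpha * img1[r][c] + (1 - alpha) * img2[r][c]
             for c in range(cols)] for r in range(rows)]
```

An RGB image would simply run this once per colour channel.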
Figure 4: The Blend Operation
Another very useful image processing feature that has been integrated into our GUI is the ability to find edges
in a given image [10]. This functionality is widely used in many computer vision techniques, and there are
multiple algorithms for finding edges. In our GUI, we have included most of the available algorithms. However,
during our work we found that, since these algorithms operate only on black-and-white images, a considerable
amount of information is lost when finding edges in a real-world colour image, as it must first be converted to
black and white. In order to reduce this information loss, we have implemented a simple new algorithm, based
on the original algorithms, to find edges in colour images, as shown in the flow diagram (Figure 5). As can be
seen from Figure 7, edges of objects that were lost in the general approach are quite visible with the new
approach. The weakest edges are denoted by red, followed by green and blue; the most prominent edges are
marked in white. These can be kept or discarded (Figure 8) as required, by simple thresholding on the
component.
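One way to read Figure 5 is that each colour channel is edge-detected separately and a pixel's strength is the number of channels that flag it. The following is a minimal Python sketch of that idea under our own assumptions (a toy one-directional gradient detector stands in for Canny, and the 0–3 strength count stands in for the red/green/blue/white colouring):

```python
# Per-channel edge detection combined into a colour edge strength map.
# Strength 3 corresponds to the "white" (most prominent) edges in the text;
# strength 1 corresponds to the weakest, single-channel edges.

def edges_1d(channel, thresh):
    """Flag a pixel when its horizontal gradient exceeds thresh."""
    return [[abs(row[c + 1] - row[c]) > thresh for c in range(len(row) - 1)]
            for row in channel]

def colour_edge_strength(r, g, b, thresh=30):
    """Per-pixel edge strength 0..3 = number of channels with an edge there."""
    er, eg, eb = (edges_1d(ch, thresh) for ch in (r, g, b))
    return [[int(er[i][j]) + int(eg[i][j]) + int(eb[i][j])
             for j in range(len(er[0]))] for i in range(len(er))]
```

Discarding the weak edges, as in Figure 8, then amounts to keeping only pixels whose strength equals 3.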
Figure 5: Proposed Algorithm of the new Edge Detection
Figure 6: Original Color Image
Figure 7: (left) Result after normal Edge Detection (Canny); (right) Result after the new Edge Detection
algorithm
Figure 8: Applying thresholding after the new Edge Detection algorithm to discard the weak edges
III. Image Processing
3.1: Object Detection & Tracking - As with any living thing, in order to track an object we first need to identify
and detect that object. There are many approaches to this; we base ours on the colour of the object. For this, we
first initialize the camera, the primary input of our processing system, to start grabbing images and return them
in the RGB colour space. Once this is done, we set up a loop that iterates through each frame of the input
stream, since a video is essentially a stream of still images. With each grabbed frame, we first convert it to grey
scale and then segment the colour channel from it. After this we apply filters to remove noise and hence
minimize errors due to external conditions. Having done this, we find the areas of the remaining objects, which
are now only the objects with the required colours. We thus obtain an array containing the centroid and area of
every object with the desired colour, and use this array to draw a box around the detected object(s). Since we
find the centroids of the objects in every frame, each object is tracked along with its motion over the entire
duration of the stream.
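The per-frame segmentation step above can be sketched as follows. The actual system uses MATLAB toolbox functions (image subtraction, filtering and region properties); this Python version, with the "red" channel and the threshold value as illustrative assumptions, shows only the core logic:

```python
# One frame of colour-based detection: subtract the grayscale value from the
# desired colour channel, threshold, and return the centroid and area of the
# resulting blob.  r, g, b are 2-D lists of 0..255 values.

def track_red(r, g, b, thresh=50):
    rows, cols = len(r), len(r[0])
    xs, ys = [], []
    for i in range(rows):
        for j in range(cols):
            gray = (r[i][j] + g[i][j] + b[i][j]) / 3.0
            if r[i][j] - gray > thresh:        # pixel is strongly red
                ys.append(i)
                xs.append(j)
    if not xs:
        return None                            # no red object in this frame
    area = len(xs)
    centroid = (sum(xs) / area, sum(ys) / area)
    return centroid, area
```

Running this on every frame of the stream gives the sequence of centroids used for tracking.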
Figure 9: Proposed scheme for Object detection & tracking
Figure 10: Objects being detected & tracked
3.2: Object Based Control - Once we are able to detect and track an object, we can easily use the data so
obtained to control real-world systems. For the preliminary stage, we chose a basic Arduino board with an LED.
The idea was to make the LED light up when the object was in a given range on the X-Y axes and turn off when
it went out of that range. We used the difference between the previous and present coordinates of the centroid of
the tracked object. By analysing this difference we can detect whether the object moved to the left or right, or to
the top or bottom. For example, if the difference along the X axis is greater than that along the Y axis, then we
can say with certainty that the object moved along the X axis, and vice versa. Differences within a small
threshold are ignored in order to prevent the system from becoming too sensitive to motion and thus responding
to even slight hand jerks.
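The direction logic above amounts to a few comparisons. A minimal Python sketch (the jitter threshold value is our illustrative choice):

```python
# Classify the motion of a tracked centroid between two frames by its dominant
# axis, ignoring displacements below a jitter threshold (slight hand jerks).

def classify_motion(prev, curr, jitter=5):
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    if max(abs(dx), abs(dy)) <= jitter:
        return "none"                      # below threshold: ignore
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```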
3.3: Object Based Control of iRobot - Once the control algorithm had been tested, we applied the findings to
control an iRobot over a Bluetooth Access Module (BAM), which gives us a large range of wireless operability.
Just as with the Arduino board, we detected motion using the same algorithm (Figure 11); for movement along
the Y axis we made the robot turn clockwise/anti-clockwise, while for the X axis we made it go forward and
backward, thus responding to 2-D movement of the object. We then further expanded the dimensionality of the
iRobot's movement by also tracking the area. If the area increases, the object is coming closer to the camera,
while a decrease signifies the object moving away from the camera along the Z axis. With this in mind, we made
the iRobot back away when the object moved closer and approach when the object moved away from the
camera, thus allowing 3-D motion control of the iRobot.
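The 3-D mapping described above can be sketched as a single decision function. The command names and thresholds here are hypothetical, not the iRobot BAM protocol:

```python
# Map centroid displacement (dx, dy) and area change (d_area) to a robot
# command: X motion drives forward/backward, Y motion turns, and an area
# change drives the Z response (back away when the object approaches).

def irobot_command(dx, dy, d_area, jitter=5, area_jitter=50):
    if abs(d_area) > area_jitter:
        return "backward" if d_area > 0 else "forward"  # Z-axis response
    if max(abs(dx), abs(dy)) <= jitter:
        return "stop"
    if abs(dx) > abs(dy):
        return "forward" if dx > 0 else "backward"      # X axis
    return "turn_cw" if dy > 0 else "turn_ccw"          # Y axis
```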
Figure 11: Proposed scheme for controlling iRobot through Object Detection
(Flowchart steps: grab image and return it in RGB space → convert to grey scale → subtract from the required
colour channel → apply filters → find object area and centroid.)
3.4: Object Motion Mimicking - After controlling the iRobot in three dimensions, we further expanded its
capability by making the robot mimic the object's movement, thus freeing it from rigid straight-line movements.
The robot's movement follows the pattern traced by the movement of the object. The basic idea behind the
algorithm is that each movement in a Cartesian plane can be described with coordinates, and when translated to
the real world it can be described by a change of angle together with forward, backward or sideways
movements. Thus, as in the previous algorithm, we detect which direction the object is moving in, and we also
calculate the angle at which it moved. This is easily found by calculating the slope between the two centroid
positions and normalizing it to a value between –π and π radians. As before, we keep a threshold to ignore
unwanted hand jerks. We then make the iRobot turn through the calculated angle along the direction of
movement, thus mimicking the motion of the object in the real world.
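The angle step above is a standard computation. In Python, `atan2` yields the heading between two centroid positions directly in the (–π, π] range, so no separate slope normalisation is needed:

```python
import math

# Heading of the motion between two consecutive centroid positions, in
# radians in the range (-pi, pi], as used for the turn command.

def heading(prev, curr):
    return math.atan2(curr[1] - prev[1], curr[0] - prev[0])
```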
3.5: Following a Map - From all this, we also observed that we can make the iRobot follow a map or pattern
in the real world based on a map or pattern drawn as an image. The basic idea is that, as in the mimicking
algorithm, the iRobot follows a pattern, but instead of the input coming from the object's movement, it comes
from a pre-drawn pattern in a binary image. Another zero-initialized matrix (the reference), with the same
dimensions as the image, is maintained to track looping. The algorithm works as follows: from the starting
point, the program samples the next pixels, which can be in front or to the sides. If one of those pixels is set to 1
and the corresponding pixel in the reference is not set to 1, the iRobot moves in that direction. When the iRobot
moves, it updates the corresponding pixel in the reference too. The entire process is repeated until the end of the
map is reached (when there are no more pixels to move forward to) or a pixel that has already been traversed
(according to the reference) is met. If the robot encounters a cross-section (an overlapping of paths), it randomly
decides on a direction and updates the reference. The next time it arrives there, because the reference has been
updated, it will have only one way to go. Thus the robot is made to follow a given map/pattern even when it
contains loops. The algorithm was further expanded by making the reference increase the weight it deposits on
a traversed path, like an ant. This can track the number of loops the robot is required to perform. A scale factor
is also kept for map-to-real-world distance scaling; this factor can be tweaked to indicate how much real-world
distance each pixel should designate.
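The traversal above can be sketched compactly. This Python version assumes a binary map as a 2-D list and deterministic neighbour order (the random choice at cross-sections and the real-world scale factor are omitted for brevity); the reference matrix deposits a weight on each visited cell, as in the ant-like extension:

```python
# Follow a path of 1-pixels in a binary grid from a start cell, using a
# reference matrix of visit weights to avoid re-traversing cells, and stop
# when no unvisited set neighbour remains (end of map or loop closed).

def follow_map(grid, start):
    rows, cols = len(grid), len(grid[0])
    ref = [[0] * cols for _ in range(rows)]   # visit weights (the reference)
    path, (r, c) = [], start
    while True:
        path.append((r, c))
        ref[r][c] += 1                        # deposit weight, ant-style
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and ref[nr][nc] == 0):
                r, c = nr, nc
                break
        else:
            return path                       # end of map or loop closed
```

Each step of the returned path would then be scaled and translated into an iRobot movement command.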
IV. Face Detection
4.1: Viola-Jones Algorithm - In 2001, Paul Viola and Michael Jones proposed a framework, known as the
Viola-Jones algorithm, that uses a variant of AdaBoost, Haar features and the integral image to detect faces in
any given image. The algorithm can also be used to detect other parts of the face such as the eyes, nose, upper
body, etc. Although it was motivated by the problem of face detection, it is in fact a general object detector and
can be trained to detect a wide variety of objects.
4.2: Methodology - We have used the Viola-Jones algorithm for face detection to implement a real-time face
detection system. As illustrated in the flow chart in Figure 12, we first initialize the cascade object detector to
detect a face. There are various ways to detect a face using the Viola-Jones algorithm: the detector can be
initialized to find different parts of the face, such as the eyes, nose, upper body, etc. Since the face is
distinguished mainly by skin tone, we base our approach on this and first convert the captured image to the Hue
channel, as this carries the skin-tone information and gives a much better detection rate. Once the face is
detected, our next objective is to track it. We implemented the histogram-based tracker, which tracks an object
based on its pixel density over time, to track the subjects' faces. Again, in order to get better and more consistent
tracking results, we first extract the face region and detect the nose within it using the Viola-Jones algorithm, all
on the Hue channel data. We specifically track the nose rather than the whole face because the nose is the most
prominent part of a human face; in addition, its region contains the fewest background pixels and shows the
least variance due to external conditions such as illumination. This can easily be seen in the skin-tone
information obtained through the Hue channel. Once the nose is detected, we initialize the tracker with it and
extrapolate the tracked position to cover the face. We further expanded the system to detect multiple faces,
using the same principle looped over many times. The system so developed was integrated into the GUI
developed and discussed in the first part of this paper. Figure 13 shows the system detecting and tracking
multiple faces at once.
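The detector and tracker themselves are toolbox components, but the Hue conversion the method relies on is simple to illustrate. As a sketch (in Python, using the standard `colorsys` module): hue depends only on the relative ordering of R, G and B, which is why it is fairly stable under illumination changes and suits skin-tone work:

```python
import colorsys

# Hue of an RGB pixel (components in 0..255), returned as a value in [0, 1).
# Scaling all three components equally (a brightness change) leaves hue
# unchanged, which is the property the skin-tone approach exploits.

def hue(r, g, b):
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]
```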
Figure 12: Proposed method for Face Detection & Tracking
Figure 13: Multiple faces being detected and tracked simultaneously
V. Discussion & Future Prospects
In this paper, we have implemented in the GUI many functions that are widely used in the field of
image processing, and we have also tried to improve some of them. However, not all of the functions could be
shown in a single screenshot, as there are other sub-steps involved in the process.
Image processing opens up a wide variety of human-computer interactions that are limited only by the
human imagination. The work shown here can easily be extended to various real-world situations. For example,
the iRobot map-following function could be embedded in a robot that performs courier-like tasks in a disaster-
stricken area where normal transportation is hindered. The topography of the area can be found and mapped
using conventional methods, and then, using this approach, an automated delivery or even rescue system can be
set up quickly by chalking out a map based on the topography for the robot to follow. The motion-mimicking
part can also be integrated here to make the entire process real time and thus help guide people within, for
example, a collapsed building or area, by drawing the escape path and making a guide robot follow it. This is
just one way of applying the work we have done here in the real world. The algorithms that have been
implemented still have room for improvement, though. For example, in the basic object detection model,
instead of scanning the entire image for the next position of the tracked object, we could set up a probabilistic
boundary within which to search for the next position, scanning the entire image only if the object is not found
there. This can result in an even better, faster and more efficient detection and tracking algorithm.
Along the same lines, we have face detection and tracking. Needless to say, this has an ever-growing
demand in the field of security, but as more powerful hardware gets miniaturized, face detection and tracking
are also creeping into our daily lives. For example, smart TVs and other electronics can become more efficient
by automatically tracking users and regulating themselves for efficient energy management; they could turn
themselves off or go into sleep mode when they cannot see any person in their field of service. Advanced
gesture recognition systems can also be developed using facial feature detection. One example, out of many
such possibilities, is that a user might control robots or other electronics with just the movement of their head,
which the electronics can track. It is only a matter of time before we have electronics that give us personalized
services based on who we are.
Acknowledgements
We would like to thank ESL, www.eschoollearning.net, Kolkata for giving us a platform to work on this project.
References
[1] Richard Szeliski, Computer Vision: Algorithms and Applications, 2010
[2] R. Fisher, K. Dawson-Howe, A. Fitzgibbon, C. Robertson, E. Trucco, Dictionary of Computer Vision and Image Processing,
John Wiley, 2005
[3] F. Tanner, B. Colder, C. Pullen, D. Heagy, C. Oertel, & P. Sallee, Overhead Imagery Research Data Set (OIRDS) – an annotated
data library and tools to aid in the development of computer vision algorithms, June 2009
[4] Liming Wang, Jianbo Shi, Gang Song, I-fan Shen, Object Detection Combining Recognition and Segmentation, Fudan University,
Shanghai, PRC, 200433
[5] P. Viola and M. Jones, Robust Real-time Object Detection, IJCV, 2001
[6] D. Comaniciu, V. Ramesh and P. Meer, Real-time Tracking of Non-rigid Objects Using Mean Shift, in Proc. Conf. Computer
Vision and Pattern Recognition, pages II: 142–149, Hilton Head, SC, June 2000
[7] Scott T. Smith, MATLAB Advanced GUI Development, Dog Ear Publishing, 2006
[8] Yair Moshe, GUI with MATLAB, Signal and Image Processing Laboratory, May 2004
[9] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Digital Image Processing Using MATLAB, 2004
[10] Scott E. Umbaugh, Digital Image Processing and Analysis: Human and Computer Vision Applications with CVIPtools, CRC
Press, November 19, 2010