This report presents a project that implements a prototype to stabilize the Kinect camera and to improve the detection and recognition of body movements such as waving, squatting, pushing, and pulling.
Authors:
Chen-I Chang
Kent Chang
The document discusses haptic suits, which are wearable devices that provide tactile feedback in virtual reality. It provides details on how haptic suits work by using small electrical pulses to stimulate electrodes on the skin and mimic various sensations. The document also outlines the main components of a haptic system, including actuators that provide vibration, sensors to detect user movements, and software to coordinate feedback. Potential applications of haptic suits discussed include training, entertainment, education, and smart clothing. While the technology offers benefits, challenges include high costs, energy consumption, and privacy/security concerns that must still be addressed through ongoing research.
The document discusses various haptic feedback techniques for virtual reality experiences. It describes active haptics, which use computer-controlled devices to provide tactile or force feedback, and passive haptics, which use physical props registered to virtual objects. Haptic retargeting techniques are introduced to allow one physical prop to represent multiple virtual objects, including body warping, world warping, and hybrid warping. Body warping shifts the rendering of the user's body to align with props. World warping manipulates the virtual world's coordinates. Hybrid warping combines both techniques. Examples from Microsoft Research demonstrate haptic retargeting illusions using a single physical cube mapped to different virtual cubes. The document concludes by discussing potential applications of these techniques.
The haptic suit is one of the revolutionary wearable devices in the field of virtual reality.
It is also known as haptic vest, tactile suit or gaming suit.
It allows the user to completely dive into the world of virtual reality beyond visualization.
The world is full of things that intrigue us.
The haptic suit is the future of reality technology and clothing (e-textiles).
It is designed to help people learn about objects and abstract concepts in the world efficiently, although considerable research and cost are still required.
It has the potential to replace conventional clothing and to change standards of learning and living.
Technical presentation of the gesture-based NUI I developed for the Aigaio smart conference room at IIT Demokritos.
Demo (in Greek):
https://www.youtube.com/watch?v=5C_p7MHKA4g
This paper proposes combining data from the Leap Motion and Microsoft Kinect to more accurately recognize hand gestures. New features are extracted from each device and combined into a feature vector, including extended finger detection, fingertip positions and angles, and measurements of the hand shape. These features are used to train a random forest classifier on a dataset of 10 American Sign Language gestures. The results show improved recognition accuracy over using either device alone.
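The fusion step can be sketched in Python. The feature values below are invented for illustration, and a tiny nearest-centroid classifier stands in for the random forest the paper actually trains:

```python
import numpy as np

def fuse_features(leap_feats, kinect_feats):
    """Concatenate per-device features (e.g. fingertip angles from the Leap
    Motion, hand-shape measurements from the Kinect) into one vector."""
    return np.concatenate([np.asarray(leap_feats, dtype=float),
                           np.asarray(kinect_feats, dtype=float)])

class NearestCentroid:
    """Tiny stand-in classifier; the paper itself trains a random forest."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in self.classes_}
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))

# Toy data: two gestures, four "Leap" features plus three "Kinect" features each.
X = [fuse_features([0.9, 0.8, 0.7, 0.9], [1.0, 0.2, 0.1]),   # "open hand"
     fuse_features([0.1, 0.1, 0.2, 0.1], [0.3, 0.9, 0.8])]   # "fist"
y = ["open", "fist"]
clf = NearestCentroid().fit(X, y)
print(clf.predict(fuse_features([0.85, 0.75, 0.8, 0.9], [0.9, 0.25, 0.15])))  # open
```

The point of the fusion is simply that the classifier sees one joint descriptor, so complementary strengths of the two sensors end up in the same decision.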
Computer vision towards an automatic recognition of communicative gestures (Donato Di Pierro)
A quick rework of what has been done during my master's degree thesis work.
Computer vision has many applications; broadly, it is a set of techniques for artificial vision that aim to reproduce the abilities of human vision.
This study focuses on the automatic recognition and classification of human gestures, used to give emphasis to prosodic events.
Virtual Yoga System Using Kinect Sensor (IRJET Journal)
The document describes a virtual yoga system using the Microsoft Kinect sensor. The system aims to make yoga exercises more engaging and motivating for patients by tracking their poses in real-time and providing feedback. It recognizes skeleton joints and yoga postures using the Kinect's depth sensing capabilities. Voice instructions guide users through different poses. The system is intended to address issues with traditional physiotherapy being tedious and repetitive. It allows customizing exercises to individual needs and challenges. Recognizing poses accurately in real-time could help patients perform exercises correctly and consistently at home without direct supervision.
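A pose check of this kind usually reduces to comparing skeleton joint angles against a target posture. A minimal sketch, with invented joint coordinates and an assumed 15-degree tolerance:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by skeleton points a-b-c, each (x, y, z)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against rounding error
    return math.degrees(math.acos(cos_t))

def pose_ok(angle, target, tol=15.0):
    """Accept the pose when the measured angle is within tol degrees of target."""
    return abs(angle - target) <= tol

# A straight arm: shoulder, elbow, wrist roughly collinear -> about 180 degrees.
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.3, 1.4, 0.0), (0.6, 1.4, 0.0)
angle = joint_angle(shoulder, elbow, wrist)
print(round(angle), pose_ok(angle, 180))  # 180 True
```

Per-joint tolerances are what let such a system adapt feedback to individual patients.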
Mouse Simulation Using Two Coloured Tapes (ijistjournal)
In this paper, we present a novel approach to Human Computer Interaction (HCI) in which we control cursor movement using a real-time camera. Current methods involve modifying mouse hardware, such as adding more buttons or changing the position of the tracking ball. Instead, our method uses a camera and computer vision techniques, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling), and we show that it can perform everything that current mouse devices can.
The software will be developed in Java. Recognition and pose estimation in this system are user independent and robust because we use colour tapes on the fingers to perform actions. The software can serve as an intuitive input interface for applications that require multi-dimensional control, e.g. computer games.
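The tape-detection step can be sketched as colour thresholding followed by a centroid computation. The sketch below uses Python and numpy rather than the paper's Java, and the RGB bounds are assumed values:

```python
import numpy as np

def tape_centroid(img, lo, hi):
    """Return the (row, col) centroid of pixels whose RGB values lie in [lo, hi]."""
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # tape not visible in this frame
    return float(ys.mean()), float(xs.mean())

# Toy 6x6 frame with a 2x2 "yellow tape" patch at rows 2-3, cols 3-4.
frame = np.zeros((6, 6, 3), dtype=np.uint8)
frame[2:4, 3:5] = (250, 240, 30)                      # bright yellow pixels
yellow_lo, yellow_hi = (200, 200, 0), (255, 255, 80)  # assumed RGB bounds
print(tape_centroid(frame, yellow_lo, yellow_hi))     # (2.5, 3.5)
```

Tracking the centroid frame-to-frame is what drives the cursor position.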
This document describes an assistive technology (AT) for lower limb rehabilitation of post-stroke patients. The AT uses a Kinect sensor and surface electromyography sensor to provide real-time biofeedback to users during a virtual snowboarding game. The system was evaluated positively by users based on usability, functionality, and goal attainment scales. The AT shows potential for future clinical use in stroke rehabilitation.
Haptics is the science of applying touch sensation and control to interact with computer applications. The Phantom interface and CyberGrasp system are haptic devices that allow users to touch and feel virtual 3D objects. Phantom provides 3D touch and allows users to feel the shape and size of virtual objects. CyberGrasp fits over the hand like an exoskeleton and measures finger movement. Haptics is used in applications like video games, mobile devices, medical training, robotics, and arts/design. While high costs and size/weight limitations exist, haptics increases confidence in fields like medicine and brings interactions with the digital world closer to real world experiences.
This document discusses collision detection in games. It explains that collision detection determines the intersection of two moving objects. Common steps are selecting objects to test for collision and checking if they collided. It then discusses algorithms for detecting collision and describes a simple game called "Hit the Target" that demonstrates collision detection by having the player move a turtle to hit a target. The document concludes by outlining how to code collision detection using Microsoft Small Basic.
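A minimal version of the intersection test such a game needs is an axis-aligned bounding-box check (the box coordinates below are invented):

```python
def aabb_collide(a, b):
    """Axis-aligned bounding boxes as (x, y, width, height); True if they overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# "Hit the Target"-style check: does the turtle's box overlap the target's box?
turtle = (10, 10, 5, 5)
target = (13, 12, 4, 4)
print(aabb_collide(turtle, target))          # True
print(aabb_collide(turtle, (30, 30, 4, 4)))  # False
```

The same two-step structure (select candidate pairs, then test each pair) scales to games with many moving objects.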
Complex Weld Seam Detection Using Computer Vision (glenn_silvers)
This document discusses a project to use computer vision and a Microsoft Kinect sensor to enable real-time gesture control of a welding robot. The project aims to detect and track a user's hand gestures to control robot movement, and to define the weld seam region of interest to allow for seam detection. The plan involves accessing Kinect data, detecting and tracking the hand in 3D space, recognizing gestures for robot movement commands, extracting color values from the hand for skin detection, and using the hand position to define the seam region of interest. The work so far has successfully defined the hand and fingers, tracked hand motion, and extracted the seam region. Further work is needed to finalize the gesture commands and integrate control of the robot.
The document describes a project to develop a tabletop touchscreen interface using a Kinect depth sensor. The researchers designed and built a table setup with a projected screen and mounted Kinect. They tested the Kinect's ability to track finger touches and gestures through programs that allowed for coloring, puzzles, zooming and image swiping. The Kinect was able to accurately detect and follow finger motion. This demonstrated the viability of using a depth sensor for a multi-user touch interface and suggested advantages over other touchscreen technologies.
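Touch detection from a depth sensor can be sketched as comparing each frame against a calibrated depth map of the bare table; the height thresholds below are assumed, not from the document:

```python
import numpy as np

def touch_points(depth, table_depth, min_h=2, max_h=8):
    """Report pixels hovering between min_h and max_h millimetres above the
    calibrated table surface as touches (thresholds are assumed values)."""
    height = table_depth - depth                 # how far above the table
    near = (height > min_h) & (height < max_h)
    ys, xs = np.nonzero(near)
    return list(zip(ys.tolist(), xs.tolist()))

table = np.full((4, 4), 1000)   # flat table calibrated at 1000 mm from the sensor
frame = table.copy()
frame[1, 2] = 995               # a fingertip 5 mm above the surface
print(touch_points(frame, table))  # [(1, 2)]
```

Because every pixel is tested independently, several simultaneous fingers fall out for free, which is what makes the multi-user interface viable.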
This document provides an overview of virtual reality (VR) technology. It discusses the key components of a VR system, including input devices like 3D positional trackers and gesture interfaces that allow user interaction, and output devices like head-mounted displays and haptic feedback interfaces that provide visual and tactile feedback. It also describes computer architectures for VR and the modeling techniques used to create virtual environments. The document is divided into sections covering input devices, output devices, computer architectures, modeling, and VR programming.
Xbox Fitness uses the Kinect sensor to enable interactive exercise programs. The Kinect sensor uses an RGB camera, depth sensor, microphones and infrared camera to track the user's movement, detect emotions, measure heart rate and enable full-body motion control without additional wearables. It creates a 3D depth map of the user to monitor form, intensity and provide adaptive workouts tailored to the individual through integrated AI algorithms. User data can also sync with external health and social media platforms to provide a connected fitness experience.
The document proposes a novel approach to simulate mouse functions using only a webcam and computer vision techniques. Two colored tapes would be worn on the fingers to detect hand gestures for controlling mouse movements and clicks. The yellow tape on the index finger would control cursor position while the distance between the yellow and red tapes would determine click events. Left clicks would occur when the thumb tape nears the index finger tape, right clicks from pausing in position, and double clicks from pausing both tapes in position. This vision-based mouse simulation could revolutionize human-computer interaction by eliminating physical devices.
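The click scheme described above can be sketched as a small state check on the two tape positions (the thresholds are assumed values, not from the document):

```python
import math

CLICK_DIST = 20.0    # assumed pixel threshold for thumb/index tape proximity
DWELL_FRAMES = 30    # assumed number of still frames that triggers a right click

def classify_event(yellow, red, still_frames):
    """Map tape centroids and dwell time to a mouse event, per the scheme above."""
    if math.hypot(yellow[0] - red[0], yellow[1] - red[1]) < CLICK_DIST:
        return "left_click"   # thumb tape brought near the index-finger tape
    if still_frames >= DWELL_FRAMES:
        return "right_click"  # cursor paused in position long enough
    return "move"             # otherwise just track the index-finger tape

print(classify_event((100, 100), (110, 105), 0))   # left_click
print(classify_event((100, 100), (300, 200), 45))  # right_click
print(classify_event((100, 100), (300, 200), 3))   # move
```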
Haptic technology uses tactile feedback to apply forces, vibrations or motions to the user through the sense of touch. It is enabled by actuators that provide mechanical responses to electrical stimuli. There are multiple generations of haptic technology with advancing capabilities like touch-coordinate specific responses and pressure sensitivity. Haptic technology allows creation of virtual objects that can be touched and manipulated. Applications include medical simulation, video games, and virtual reality experiences through haptic devices like Phantom and CyberGrasp systems. Future applications may include holographic interaction and assistive technologies.
Haptic technology uses tactile feedback to apply forces, vibrations or motions to the user through the sense of touch. It is enabled by actuators and controllers that provide mechanical responses to electrical stimuli. Haptic technology allows for the creation of virtual objects that can be touched and manipulated. Virtual reality haptic systems use haptic interfaces and haptic rendering to generate force feedback that simulates the properties of real objects like shape, weight and texture. Commercial applications of haptics include video games, medical simulators, and virtual reality.
This document summarizes a project to control a virtual human using gestures recognized by the Kinect sensor. The Kinect is used to track joint locations and recognize gestures like moving the hands up and down to change the virtual human's heart rate or blood pressure. It can also detect a CPR gesture by measuring the distance between the wrists and shoulder center. The virtual human then displays different animations and reactions based on its mathematically modeled health conditions and whether the user is performing CPR.
This document summarizes a project to control a virtual human using gestures recognized by the Microsoft Kinect sensor. The Kinect tracks users' joint positions and gestures are recognized by comparing relative joint locations. Gestures like moving the hands up and down control the virtual human's heart rate and blood pressure. Performing CPR is recognized by moving the hands in and out towards the torso. The virtual human then displays appropriate animations and reactions based on its physiological state.
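The CPR recognition idea, comparing relative joint positions, can be sketched as follows; the distance thresholds are assumed, not taken from the project:

```python
import math

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_cpr_compression(left_wrist, right_wrist, shoulder_center,
                       wrist_gap=0.15, reach=0.45):
    """Heuristic CPR check: wrists held together and extended away from the
    shoulder center. Thresholds (metres) are assumed, not from the source."""
    wrists_together = dist(left_wrist, right_wrist) < wrist_gap
    arms_extended = dist(left_wrist, shoulder_center) > reach
    return wrists_together and arms_extended

shoulder_center = (0.0, 1.4, 0.0)
lw, rw = (0.02, 0.9, 0.4), (-0.02, 0.9, 0.4)  # hands clasped, pushed out and down
print(is_cpr_compression(lw, rw, shoulder_center))  # True
```

Oscillation of the wrist-to-shoulder distance over successive frames would then distinguish repeated compressions from a single held pose.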
Virtual reality systems consist of four main components:
1. A computer and software known as the reality engine
2. Input sensors like head-mounted displays, gloves, and audio units
3. Output sensors like head-mounted displays and audio units
4. The user, who directs the environment and reacts to it
This document describes a medical hands-free system using gesture recognition to help surgeons keep track of surgical instruments and materials without direct contact. It uses a Leap Motion controller to detect hand movements and recognize customized gestures. Image moments are used to distinguish between gestures by calculating weighted averages of pixel distributions in captured images. The goal is to introduce hands-free control to medical settings where direct contact poses infection risks, helping surgeons prevent accidental retention of foreign objects in patients.
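Image moments of this kind are weighted averages over the pixel grid; a minimal numpy sketch on an invented binary silhouette:

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment M_pq = sum over pixels of x^p * y^q * I(y, x)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return float((xs ** p * ys ** q * img).sum())

def centroid(img):
    """Weighted-average (x, y) position of the pixel mass: (M10/M00, M01/M00)."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# Binary hand silhouette stand-in: a 3x3 blob centred at x=3, y=2.
img = np.zeros((5, 7))
img[1:4, 2:5] = 1
print(centroid(img))  # (3.0, 2.0)
```

Higher-order and central moments, built the same way, capture spread and orientation, which is what lets a classifier tell gesture shapes apart.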
This project develops a natural user interface for interacting with 3D environments using the Microsoft Kinect. Two Kinect devices are placed in a virtual reality space to track a user's full body movements and gestures. The Kinect data is used to create a digital avatar that represents the user's position and allows directly interacting with virtual objects by reaching out. Gesture recognition is also implemented to provide additional controls for navigation and selection. The goal is to make interacting with complex 3D data more intuitive by mirroring natural physical interactions.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used in wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results in terms of power-reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine-parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations. The simulations showed very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
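The PI variant, the simplest of the three controllers compared, can be illustrated with a toy discrete simulation (the plant model, gains, and time constant below are invented, not the paper's DFIG model):

```python
# Discrete PI loop tracking a power reference on a first-order plant model.
# Gains, time constant, and step size are illustrative, not from the paper.
def simulate(p_ref=1.0, kp=2.0, ki=5.0, dt=0.001, steps=5000):
    tau = 0.05                     # plant time constant (s)
    p, integ = 0.0, 0.0
    for _ in range(steps):
        err = p_ref - p            # power-reference tracking error
        integ += err * dt
        u = kp * err + ki * integ  # PI control law
        p += dt * (u - p) / tau    # first-order plant response
    return p

print(round(simulate(), 3))  # settles close to the 1.0 reference
```

The SMC and SOSMC alternatives replace the PI law with switching terms that trade this smooth response for robustness to parameter changes, which is the comparison the paper runs.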
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
3. Motivation
Modern people spend long hours in front of the computer, and the resulting pressure can cause health problems. A Kinect game offers more creative, physically active entertainment, so players are no longer confined to the desk. Kinect games can help relieve pain, support injured patients during rehabilitation, rebuild confidence, and provide enjoyable leisure entertainment.
4. Game Intro
Long, long ago, the Earth was struck by natural disasters and its creatures faced extinction. A group of animals spotted a ship on the other side of the sea and believed it could carry them out of danger, but the ship cannot hold anything more. The player must repel the animals in order to survive.
5. Architecture
Kinect: analyzes the user's joints and passes the skeleton data to the Controller.
Controller: computes the user's aiming point from the joints, detects pushes, and performs gesture recognition.
Unity Engine: loads scenes and drives the animal system, physics, explosion effects, animation system, and so on.
Scene: displays the score, menu, user interface, and time.
8. Basic Theory
The player rotates the body skeleton left and right and tilts it back and forth to move the camera's aiming position. Hand trajectories are tracked and classified as gestures that trigger and launch an attack ball; hitting the designated target earns points.
Person-oriented position: the Right and Left Shoulder joints and the Right and Left Hip joints form the X vector, and the Shoulder Center and Hip Center joints form the Y vector. Their cross product yields a third vector, which gives the direction the body is facing.
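The facing-direction construction above can be sketched as follows. This is an illustrative reconstruction of the described cross-product method, not the project's actual Unity code; the joint coordinates in the example are invented.

```python
import numpy as np

def facing_direction(l_shoulder, r_shoulder, shoulder_center, hip_center):
    """Estimate the direction the torso faces from four Kinect joints.

    A sketch of the method described on the slide: X vector across the
    shoulders, Y vector up the spine, cross product for the facing vector.
    """
    # X vector: across the body, from left shoulder to right shoulder
    x_vec = np.asarray(r_shoulder) - np.asarray(l_shoulder)
    # Y vector: up the spine, from hip center to shoulder center
    y_vec = np.asarray(shoulder_center) - np.asarray(hip_center)
    # Cross product gives the third, forward-facing vector
    forward = np.cross(x_vec, y_vec)
    return forward / np.linalg.norm(forward)

# A user standing upright, square to the sensor (made-up coordinates):
print(facing_direction([-0.2, 1.4, 2.0], [0.2, 1.4, 2.0],
                       [0.0, 1.4, 2.0], [0.0, 0.9, 2.0]))
```

Because the X and Y vectors both lie in the torso plane, their cross product is perpendicular to the chest, so it tracks body rotation even when individual joints move.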
9. Basic Theory
Problem:
Kinect detection is very sensitive, so using the body's instantaneous vectors for targeting causes camera shake and screen jitter.
Solution:
Smooth the input with a weighted average: the last 20 to 30 detected aiming points are combined into a single aiming point, which is recalculated and updated every frame. A weight of 0.3 gave the best results.
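The slide does not state the exact weighting scheme, so the sketch below assumes exponential smoothing, a common weighted-average filter where the 0.3 factor weights the newest sample; the jittery input points are invented.

```python
class AimSmoother:
    """Smooth noisy per-frame aiming points with an exponential weighted average.

    Assumed scheme: smoothed = alpha * new + (1 - alpha) * previous, with
    alpha = 0.3 as the best-performing weight mentioned on the slide.
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.smoothed = None

    def update(self, raw_point):
        x, y = raw_point
        if self.smoothed is None:
            self.smoothed = (x, y)
        else:
            sx, sy = self.smoothed
            # Blend the new raw sample with the running average
            self.smoothed = (self.alpha * x + (1 - self.alpha) * sx,
                             self.alpha * y + (1 - self.alpha) * sy)
        return self.smoothed

smoother = AimSmoother(alpha=0.3)
for jittery in [(100, 100), (104, 98), (99, 103), (101, 100)]:
    aim = smoother.update(jittery)
print(aim)  # stays near (100, 100) despite the per-frame jitter
```

Recomputing the smoothed point every frame, as the slide describes, keeps the crosshair responsive while damping the high-frequency jitter from the raw skeleton data.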
11. Gesture recognition
Problem: recognition accuracy
Kinect's built-in gesture recognition in the Unity Wrapper is not accurate and sometimes misjudges actions, so we implemented our own methods for identifying specific actions.
Solution:
From the joints of the hand and the shoulder we obtain distance and vector information, and by checking for a large change within a short time window we identify a triggered Push or Pull, so that the posture can be recognized more accurately.
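The "large change within a short time window" test might look like the following sketch. The 0.25 m threshold and 10-frame window are assumed values; the slides only say that a massive change in the hand-shoulder distance triggers the gesture.

```python
from collections import deque

def detect_push_pull(samples, threshold=0.25):
    """Classify a Push or Pull from hand-to-shoulder distances over time.

    `samples` holds distances (meters) from a short window of frames.
    The threshold is illustrative, not taken from the project.
    """
    change = samples[-1] - samples[0]
    if change > threshold:
        return "push"   # hand moved away from the shoulder
    if change < -threshold:
        return "pull"   # hand moved back toward the shoulder
    return None         # no large change: not a gesture

# Keep a rolling window of the most recent frames
window = deque(maxlen=10)
for dist in [0.20, 0.25, 0.33, 0.42, 0.51]:  # hand extending forward
    window.append(dist)
print(detect_push_pull(list(window)))  # classified as a push
```

Requiring the change to happen within a bounded window is what separates a deliberate push from the hand slowly drifting forward over many seconds.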
14. Basic Theory
Push and Pull detection:
Detection uses three joints: Shoulder, Elbow, and Hand.
1. The vector between Shoulder and Hand lies within (±0.2, ±0.2, > 0).
2. The Shoulder-to-Hand distance shifts along the z-axis within a limited time.
3. The Hand's y value approaches the Elbow's y value.
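Conditions 1 and 3 are static pose checks and can be sketched directly; the y tolerance of 0.1 and the sign convention for z are assumptions, and condition 2 (the z-axis shift over time) needs frame history, so it is not repeated here.

```python
def arm_extended_forward(shoulder, elbow, hand, xy_tol=0.2, y_tol=0.1):
    """Check the static conditions for a Push/Pull candidate pose.

    Sketch of conditions 1 and 3 from the slide: the shoulder-to-hand
    vector must stay within (+-0.2, +-0.2, > 0), and the hand's y value
    must be close to the elbow's y value. The 0.1 y-tolerance is assumed.
    """
    dx = hand[0] - shoulder[0]
    dy = hand[1] - shoulder[1]
    dz = hand[2] - shoulder[2]
    # Condition 1: hand roughly in front of the shoulder, inside a narrow cone
    within_cone = abs(dx) <= xy_tol and abs(dy) <= xy_tol and dz > 0
    # Condition 3: hand and elbow at about the same height (arm straightened)
    hand_level_with_elbow = abs(hand[1] - elbow[1]) <= y_tol
    return within_cone and hand_level_with_elbow

# Arm extended straight forward (invented coordinates; +z taken as "forward")
print(arm_extended_forward(shoulder=(0.0, 1.4, 0.0),
                           elbow=(0.05, 1.38, 0.3),
                           hand=(0.1, 1.37, 0.6)))
```

Gating the temporal z-shift test behind this pose check is what keeps casual hand movements from being misjudged as a Push or Pull.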
15. Basic Theory
Posture recognition:
1. The vector formed by TopTorso and ButtonTorso determines the left and right tilt, and from it the squat can be detected.
2. The relative positions of the Right and Left Hand, Elbow, and Shoulder joints determine the remaining postures.
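A minimal sketch of point 1, using the slide's joint names. The slide does not give the exact rule, so the tilt is measured here as the torso vector's angle from vertical, and the squat test via a height threshold of 0.9 m is this sketch's own heuristic.

```python
import math

def torso_tilt_and_squat(top_torso, bottom_torso, squat_height=0.9):
    """Derive lean angle and a squat flag from the two torso joints.

    Joint names follow the slide (TopTorso / ButtonTorso). The squat
    threshold height is an assumed value, not the project's.
    """
    dx = top_torso[0] - bottom_torso[0]
    dy = top_torso[1] - bottom_torso[1]
    # Left/right lean: angle of the torso vector away from straight up
    tilt_deg = math.degrees(math.atan2(dx, dy))
    # Squat heuristic: upper torso has dropped below the threshold (meters)
    squatting = top_torso[1] < squat_height
    return tilt_deg, squatting

# Leaning slightly right while standing (invented coordinates)
print(torso_tilt_and_squat((0.2, 1.3, 2.0), (0.0, 0.9, 2.0)))
```

Point 2, distinguishing the remaining postures from the relative hand, elbow, and shoulder positions, follows the same pattern of comparing joint coordinates against simple thresholds.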
17. Future
We hope to make the game more interactive: support a two-player mode, add battle combinations, and integrate a cell-phone connection so that one player controls launching the bullets while the other controls the walking and appearance of the animals, identifying the actions shown in the picture below.