Technical presentation of the gesture-based NUI I developed for the Aigaio smart conference room at IIT Demokritos.
Demo (in Greek): https://www.youtube.com/watch?v=5C_p7MHKA4g
Smart Room Gesture Control
1. smart room gesture control using kinect skeletal data
Diploma Thesis Presentation
Giorgos Paraskevopoulos
10 July 2016
School of Electrical and Computer Engineering
National Technical University of Athens
Institute of Informatics and Telecommunications
NCSR Demokritos
2. goals of this work
• Build a simple and intuitive Natural User Interface for a smart
conference room.
• Use the Kinect skeletal information to devise some novel and
accurate techniques for pose and gesture recognition focused
on real-time systems.
• Create efficient implementations of the devised algorithms as
microservices.
• Demonstrate how Machine to Machine communication can be
used to compose these microservices into non-trivial
applications for the Internet of Things ecosystem.
• Integrate our work into the Synaisthisi platform.
2
6. problem definition
• A pose is the temporary suspension of the animation of the
body joints in a discrete configuration.
• Pose detection is the problem of automatically classifying a
pose.
• Poses can be used as control signals
6
7. assumptions
• Sufficient room lighting conditions for the operation of the
Kinect sensor
• User is positioned inside the operating area of the Kinect
sensor and is facing the sensor
• Predefined set of target poses
7
8. design prerequisites
Detection is
• real-time
• invariant to the user position and distance from the camera
• invariant to the user physical characteristics (height, weight etc.)
• able to support multiple users (a limit of 2 is imposed by the
Kinect sensor)
8
9. the proposed algorithm
• Template matching to a set of geometric features
• Poses are predefined templates
• We compare the current feature vector to the template, and if it
is within a predefined error margin the pose is recognized
Simple, fast and performs well in real world scenarios.
9
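The template-matching idea above can be sketched in a few lines of Python. This is a minimal illustration, not the thesis implementation: the pose name, the template angles and the error margin are all hypothetical placeholders.

```python
# Hypothetical sketch of template matching for pose detection: a pose
# template is a vector of joint angles (degrees), and a pose is
# recognized when every current angle lies within an error margin of
# the corresponding template value.
POSE_TEMPLATES = {
    "arms-raised": [90.0, 90.0, 170.0, 170.0],  # example template
}

def matches(template, features, margin=15.0):
    """True if every feature is within `margin` of the template value."""
    return all(abs(t - f) <= margin for t, f in zip(template, features))

def detect_pose(features, templates=POSE_TEMPLATES, margin=15.0):
    """Return the name of the first matching pose template, or None."""
    for name, template in templates.items():
        if matches(template, features, margin):
            return name
    return None
```

Because each check is a constant number of comparisons, detection stays cheap enough for real-time use.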
12. skeleton joints
• A total of 20 inferred joints
• Organized in a hierarchical structure (we make use of this later)
[Figure: the 20 skeletal joints (Head, Shoulder Center, Shoulder Left/Right, Elbow Left/Right, Wrist Left/Right, Hand Left/Right, Spine, Hip Center, Hip Left/Right, Knee Left/Right, Ankle Left/Right, Foot Left/Right) and their arrangement in a tree structure rooted at Hip Center]
12
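The hierarchical structure can be represented as a simple parent map, which is how the parent-child angle features later become easy to enumerate. The dictionary below is a sketch for illustration; joint names follow the Kinect SDK joint types.

```python
# Sketch of the Kinect skeleton hierarchy as a child -> parent map,
# rooted at HipCenter (which therefore has no entry).
PARENT = {
    "Spine": "HipCenter", "ShoulderCenter": "Spine", "Head": "ShoulderCenter",
    "ShoulderLeft": "ShoulderCenter", "ElbowLeft": "ShoulderLeft",
    "WristLeft": "ElbowLeft", "HandLeft": "WristLeft",
    "ShoulderRight": "ShoulderCenter", "ElbowRight": "ShoulderRight",
    "WristRight": "ElbowRight", "HandRight": "WristRight",
    "HipLeft": "HipCenter", "KneeLeft": "HipLeft",
    "AnkleLeft": "KneeLeft", "FootLeft": "AnkleLeft",
    "HipRight": "HipCenter", "KneeRight": "HipRight",
    "AnkleRight": "KneeRight", "FootRight": "AnkleRight",
}

def ancestors(joint):
    """Walk up the tree from a joint to the HipCenter root."""
    chain = []
    while joint in PARENT:
        joint = PARENT[joint]
        chain.append(joint)
    return chain
```

With this map, iterating over `PARENT.items()` yields exactly the parent-child pairs used for the angle features.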
14. geometric features
• Make use of the joints hierarchical structure to compute the
angles between each parent and child joint.
• This single feature provides position and depth invariance by default
and is enough to classify a large set of poses.
14
15. feature calculation
[Figure: triangle with vertices A, B, C and opposite sides a, b, c]
The triangle formed between the parent joint A, the child joint B and
the point C = (A.x, 0).
Using the law of cosines, we calculate ∠ACB as

C = cos⁻¹( (a² + b² − c²) / (2ab) )   (1)
15
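The calculation on this slide can be sketched directly in Python. This is an illustrative sketch, assuming joints are given as 2D `(x, y)` tuples in sensor coordinates; the function name and interface are not from the thesis.

```python
import math

def joint_angle(parent, child):
    """Angle at C of the triangle formed by the parent joint A = (ax, ay),
    the child joint B, and the point C = (ax, 0), via the law of cosines."""
    ax, ay = parent
    bx, by = child
    cx, cy = ax, 0.0
    a = math.dist((bx, by), (cx, cy))  # side a, opposite vertex A
    b = math.dist((ax, ay), (cx, cy))  # side b, opposite vertex B
    c = math.dist((ax, ay), (bx, by))  # side c, opposite vertex C
    # Law of cosines: C = arccos((a^2 + b^2 - c^2) / (2ab))
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
```

For example, a parent at (0, 1) with its child at (1, 1) gives a 45° angle at C = (0, 0).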
21. the proposed approach
• Extract a set of low level geometric features
• Evaluate the performance (classification accuracy and time) of a
set of well known machine learning techniques on this set of
features
21
23. feature extraction
• We are interested in gestures; use only elbows, wrists and hands
(LeftElbow, RightElbow, LeftWrist, RightWrist, LeftHand,
RightHand)
• For each we calculate a set of features, for the set of N frames
that correspond to the gesture and based on their 3D
coordinates
23
24. feature properties
Our features satisfy the following
• They require constant time O(1) in each frame to compute
• Small gesture changes are reflected to small feature changes
(angle and displacement are able to discriminate between
small movements)
• They are aggregate features that summarize the gesture over all
the frames
• They are small in size (a feature vector in minified JSON format
is about 3KB)
24
25. the features
Features are extracted
• from each skeletal joint
• from a set of the frames depicting the gesture
• using the 3D coordinates of the joints

Feature name                  | Frames involved | Equation
Spatial angle                 | F_2, F_1        | ∠(v_2^(J) − v_1^(J))
Spatial angle                 | F_n, F_{n−1}    | ∠(v_n^(J) − v_{n−1}^(J))
Spatial angle                 | F_n, F_1        | ∠(v_n^(J) − v_1^(J))
Total vector angle            | F_1, …, F_n     | Σ_{i=1}^{n} arccos( v_i^(J) · v_{i−1}^(J) / (‖v_i^(J)‖ ‖v_{i−1}^(J)‖) )
Squared total vector angle    | F_1, …, F_n     | Σ_{i=1}^{n} arccos( v_i^(J) · v_{i−1}^(J) / (‖v_i^(J)‖ ‖v_{i−1}^(J)‖) )²
Total vector displacement     | F_n, F_1        | ‖v_n^(J) − v_1^(J)‖
Total displacement            | F_1, …, F_n     | Σ_{i=1}^{n} ‖v_i^(J) − v_{i−1}^(J)‖
Maximum displacement          | F_1, …, F_n     | max_i ‖v_i^(J) − v_{i−1}^(J)‖
Bounding box diagonal length* | F_1, …, F_n     | √( a_{B(V^(J))}² + b_{B(V^(J))}² )
Bounding box angle*           | F_1, …, F_n     | arctan( b_{B(V^(J))} / a_{B(V^(J))} )
25
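The displacement-based rows of the table translate almost directly to code. The sketch below illustrates three of them for one joint's trajectory, assuming the trajectory is a list of `(x, y, z)` tuples, one per frame; function names are mine, not the thesis's.

```python
import math

def total_displacement(traj):
    """Sum of frame-to-frame distances over the whole gesture."""
    return sum(math.dist(traj[i], traj[i - 1]) for i in range(1, len(traj)))

def maximum_displacement(traj):
    """Largest single frame-to-frame distance."""
    return max(math.dist(traj[i], traj[i - 1]) for i in range(1, len(traj)))

def total_vector_displacement(traj):
    """Distance between the last and the first position."""
    return math.dist(traj[-1], traj[0])
```

Each function needs one pass over the frames, consistent with the O(1)-per-frame property claimed on the previous slide.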
26. the features (cont’d)
We also extract the initial and final (i.e., at F_1 and F_N, respectively),
mean and maximum angle (i.e., for F_1, …, F_N) between any pair of joints
(pc = parent-child):

θ_pc = cos⁻¹( (a_pc² + b_pc² − c_pc²) / (2 a_pc b_pc) )   (2)

where

a_pc = √( (v_x^(J) − v_x^(Jc))² + (v_y^(J) − v_y^(Jc))² ),   (3)

b_pc = v_x^(J) and   (4)

c_pc = √( (v_x^(Jp))² + (v_y^(J) − v_y^(Jp))² ).   (5)
26
27. the features (cont’d)
Finally, between HandLeft and HandRight and within the gesture we extract
• the maximum distance

d_max = max_{i,j} { d( v_i^(HR), v_j^(HL) ) },   (6)

• and the mean distance

d_mean = (1 / F^(J)) Σ_{i,j} d( v_i^(HR), v_j^(HL) ).   (7)
27
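Equations (6) and (7) can be sketched as one pairwise pass over the two hand trajectories. In this illustration the mean is taken over the number of pairs, which is one reading of the normalization term; the interface is hypothetical.

```python
import math

def hand_distances(right, left):
    """Max and mean distance over all pairs of right-hand / left-hand
    samples within the gesture (each sample an (x, y, z) tuple)."""
    dists = [math.dist(r, l) for r in right for l in left]
    return max(dists), sum(dists) / len(dists)
```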
29. machine learning approaches
For gesture recognition based on the aforementioned features, we
investigated the following techniques:
• SVM Linear/RBF kernel (LSVM/RBFSVM)
• Linear Discriminant Analysis (LDA)
• Quadratic Discriminant Analysis (QDA)
• Naive Bayes (NB)
• K-nearest neighbors (KNN)
• Decision Trees (DT)
• Random Forests (RF)
• Extra Trees (ET)
• Adaboost with Decision Trees / Extra Trees (ABDT/ABET)
29
30. experiments preparation - dataset construction
• We constructed a real-life data set of 10 users (7M/3F), ages:
22–36
• We selected a set of “swipes” (in/out/up/down) for both hands
(8 gestures)
• Each user performed a gesture at least 10 times
• We manually “cleaned” the dataset from obviously “wrongly”
performed gestures
• Each user was equipped with a push button, used to signify the
beginning/ending of a gesture
30
31. experiments preparation - optimal parameters
Find the optimal parameter set for each algorithm using Exhaustive
Grid Search
Symbols: α learning rate, e number of estimators, d maximum depth,
f maximum number of features, n number of neighbors, s search algorithm,
m metric between points p and q, r regularization parameter.

Classifier | Parameters
ABDT       | e = 103, α = 621.6
ABET       | e = 82, α = 241.6
DT         | d = 48, f = 49
ET         | d = 17, f = 70, e = 70
KNN        | n = 22, s = kd_tree, m = Σ_{i=1}^{n} |p_i − q_i| (Manhattan)
LSVM       | C = 0.0091
QDA        | r = 0.88889
RBFSVM     | C = 44.445, γ = 0.0001
RF         | d = 27, f = 20, e = 75
31
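Exhaustive grid search itself is a simple idea: evaluate every combination in the parameter grid and keep the best. The sketch below is a stdlib-only illustration of that loop (in practice a library such as scikit-learn's GridSearchCV does this with cross-validated scoring); the `evaluate` callback stands in for a cross-validation score and is hypothetical.

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Try every parameter combination; return (best_params, best_score).
    `param_grid` maps parameter names to lists of candidate values and
    `evaluate` maps a params dict to a score (higher is better)."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```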
32. experiment 1 - performance for a known set of users
TARGET
• Find the optimal machine learning approach to split our feature
space
METHOD
• Train using gestures from all users
• Evaluate using different gestures from all users
• Use standard Stratified K-fold Cross Validation
32
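Stratified K-fold keeps each gesture class represented in roughly equal proportions in every fold. A minimal sketch of the splitting step (libraries like scikit-learn provide this ready-made; this version only illustrates the idea by dealing each class's samples round-robin):

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Return k folds of sample indices, each with roughly the same
    per-class proportions as the full label list."""
    per_class = defaultdict(list)
    for idx, label in enumerate(labels):
        per_class[label].append(idx)
    folds = defaultdict(list)
    for indices in per_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return [sorted(folds[i]) for i in range(k)]
```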
34. experiment 2 - performance against unknown users
TARGET
What is
• the performance for new / unknown users?
• the minimum number of users that can be used for training and
get an adequate classifier?
• the effect of “bad users”?
METHOD
• Split the data per user
• Use a varying number of users for training and the rest for
testing
34
35. experiment 2 - results per user (f1 score)
| Gesture      | User 1 | User 2 | User 3 | User 4 | User 5 | User 6 | User 7 | User 8 | User 9 | User 10 | Mean w/o User 1 |
| LH-SwipeDown | 0.76 | 0.83 | 1.00 | 0.82 | 1.00 | 0.80 | 1.00 | 1.00 | 1.00 | 0.96 | 0.934 |
| LH-SwipeIn   | 0.38 | 0.92 | 0.84 | 1.00 | 1.00 | 0.92 | 1.00 | 1.00 | 1.00 | 1.00 | 0.964 |
| LH-SwipeOut  | 0.61 | 0.93 | 0.86 | 1.00 | 1.00 | 0.89 | 1.00 | 1.00 | 0.97 | 1.00 | 0.961 |
| LH-SwipeUp   | 0.69 | 0.90 | 1.00 | 0.84 | 1.00 | 0.83 | 1.00 | 1.00 | 0.97 | 0.96 | 0.944 |
| RH-SwipeDown | 0.78 | 1.00 | 0.95 | -    | 1.00 | 1.00 | 0.92 | 1.00 | 0.87 | 1.00 | 0.968 |
| RH-SwipeIn   | 0.64 | 1.00 | 0.67 | -    | 1.00 | 1.00 | 1.00 | 1.00 | 0.89 | 0.96 | 0.940 |
| RH-SwipeOut  | 0.61 | 1.00 | 0.80 | -    | 1.00 | 1.00 | 0.95 | 1.00 | 1.00 | 0.95 | 0.963 |
| RH-SwipeUp   | 0.40 | 1.00 | 0.95 | -    | 1.00 | 1.00 | 1.00 | 1.00 | 0.96 | 1.00 | 0.989 |
| Average      | 0.62 | 0.94 | 0.88 | 0.92 | 1.00 | 0.92 | 0.99 | 1.00 | 0.96 | 0.97 |       |
35
37. experiment 3 - benchmarks
TARGET
• Test the prediction time in different architectures
METHOD
Construct a custom benchmark where we:
• Split the data randomly in train/test sets (80/20 split)
• Classify every sample in the test data 1000 times
• Calculate the average prediction time for each classifier
• Evaluate in different architectures
37
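The benchmark loop itself can be sketched in a few lines. This is an illustration of the method described above, not the thesis's harness; `predict` stands in for any classifier's prediction callable.

```python
import time

def mean_prediction_time(predict, test_samples, repeats=1000):
    """Classify every test sample `repeats` times and return the
    average wall-clock time per single prediction, in seconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        for sample in test_samples:
            predict(sample)
    elapsed = time.perf_counter() - start
    return elapsed / (repeats * len(test_samples))
```

Running the same loop on each target architecture gives directly comparable per-prediction timings.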
40. use case
The described algorithms are combined into an IoT application
implementing a NUI system that is integrated into the SYNAISTHISI
platform.
This system is used to control the devices inside the Aigaio meeting
room, located in IIT Demokritos.
40
43. iot definition
The Internet of Things is a generalization of the World Wide Web to
incorporate “things” that exist in the physical world.
43
44. iot definition
The Internet of Things is the convergence of all virtual and physical
entities that are able to produce, consume and act upon data under
a common communications infrastructure.
44
45. synaisthisi platform
The Synaisthisi platform is a project developed in NCSR Demokritos
that aims to facilitate the development of Internet of Things
applications.
It provides
• A Service Oriented Architecture where applications can be
developed as the composition of small functional building
blocks (services)
• A categorization of services in Sensing, Processing and Actuating
(SPA services)
• A Message Oriented Middleware that provides a pub/sub
message passing infrastructure based on MQTT for inter-service
communication
• Mechanisms for logging, security, persistent storage, data
replication and administration of the applications and the
system
45
48. publish/subscribe protocols
Pros:
• Asynchronous communication
• Decoupling of the communicating entities
• Many to many communication
• Scalable
Cons:
• Semantic coupling (message type needs to be statically defined)
• Loose message delivery guarantees
48
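The decoupling that publish/subscribe buys can be shown with a toy in-process broker. The real system uses an MQTT broker over the network; this sketch (class and topic names are mine) only illustrates that publishers and subscribers share nothing but a topic string.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Subscribers register a callback per topic; they never see
        # who publishes.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Publishers never see who is subscribed (many-to-many).
        for callback in self.subscribers[topic]:
            callback(message)
```

Note the "semantic coupling" con above: both sides must still agree on the topic name and message format out of band.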
49. mqtt
All of the above plus
• Lightweight
• Small code footprint
• Small header overhead
• QoS Levels
49
57. conclusions
In this project we have
• Implemented a simple and efficient pose detection algorithm
• Constructed a real-time machine learning approach to gesture
recognition
• Integrated these techniques into a real-life IoT application for a
Natural User Interface
57
58. publications
• G. Paraskevopoulos, E. Spyrou and D. Sgouropoulos, A Real-Time
Approach for Gesture Recognition using the Kinect Sensor. In
Proc. of Hellenic Conference on Artificial Intelligence (SETN),
2016
• 2 more upcoming
58
59. future work
• Apply the aforementioned techniques to new / more
challenging problem domains
• Construct a larger data set with more users performing a bigger
variety of gestures
• Reduce the size/cost of the needed computing devices
• Implement a dynamic time segmentation method for gesture
detection
59
64. why not a stream of poses?
2 main approaches
• FSM of intermediate poses: every new gesture needs to be hard-coded
• HMM with intermediate poses as observations: Generalizes but
couples recognition accuracy with frame rate / network stability
64
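The FSM alternative mentioned above can be sketched as a small state machine: a gesture is hard-coded as an ordered sequence of intermediate poses, and each detected pose either advances, restarts, or resets the machine. Class and pose names here are hypothetical.

```python
class GestureFSM:
    """Recognize one gesture as an ordered sequence of intermediate poses."""

    def __init__(self, pose_sequence):
        self.sequence = pose_sequence
        self.state = 0  # index of the next expected pose

    def feed(self, pose):
        """Feed a detected pose; return True when the gesture completes."""
        if pose == self.sequence[self.state]:
            self.state += 1
        elif pose == self.sequence[0]:
            self.state = 1  # restart from a fresh first pose
        else:
            self.state = 0  # unexpected pose: reset
        if self.state == len(self.sequence):
            self.state = 0
            return True
        return False
```

This makes the drawback concrete: every new gesture means writing (and tuning) another pose sequence by hand.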
65. kinect on raspberry pi
APPROACH
Use Windows 10 IoT Edition
PROBLEMS
• Does not support Kinect drivers and Kinect SDK
• Not enough voltage provided through USB hub to sustain the
Kinect sensor
65
66. kinect on raspberry pi
APPROACH
• Use USB over IP to stream the low level Kinect data from a
Raspberry Pi to a Windows Server so that it appears the Kinect
sensor is connected to the server
• Cross-Compile the Linux kernel with USB-IP modules support
PROBLEMS
• USB-IP modules do not support USB hubs
• Bandwidth limitations
66
67. why c#? why python? why js?
C#
• Kinect SDK
Python
• scikit-learn
JS
• async.parallel(
    // List of pose detectors to run concurrently
    Object.keys(poseDetectors)
      .reduce((prev, pose) => {
        prev[pose] = poseDetectors[pose](features)
        return prev
      }, {}),
    (err, result) => {
      /* handle list of results... */
    })
67