This document describes a cloud computing framework that uses image recognition and remote assistance to help blind individuals identify products while grocery shopping. A blind person takes a photo of a product with their smartphone, which is sent to a cloud server for image matching. The top 5 matches are then sent to a sighted assistant for verification. If a match is incorrect, the assistant provides the right name either by voice or text. The system was tested in the lab and a supermarket, showing it can help blind people identify products independently with help from remote sighted assistants.
A revolution in computer interface design is changing the way we think about computers. Rather than typing on a keyboard and watching a television monitor, Augmented Reality lets people use familiar, everyday objects in ordinary ways. This paper surveys the field of Augmented Reality, in which 3-D virtual objects are integrated into a 3-D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment and military applications that have been explored, and it characterizes Augmented Reality systems. Registration and sensing errors are two of the biggest problems in building effective Augmented Reality systems, so this paper examines those problems in detail. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using Augmented Reality.
Teleassistance in Accessible Shopping for the Blind - Vladimir Kulyukin
In this paper, we present TeleShop, the teleassistance
module of ShopMobile 2, our mobile accessible shopping system for visually impaired (VI) and blind individuals
that we have been developing for the past several
years. TeleShop enables its users to obtain help from remote
sighted guides by transmitting images and voice from their
smartphones to the guides’ computers or phones. We have
successfully tested TeleShop in a laboratory study in which
a married couple (a blind husband and a sighted wife) used
it to retrieve grocery products and read nutrition facts from
product packages.
ANALYSING THE POTENTIAL OF BLE TO SUPPORT DYNAMIC BROADCASTING SCENARIOS - ijasuc
In this paper, we present a novel approach for broadcasting information based on Bluetooth Low Energy (BLE) iBeacon technology. We propose a dynamic method that uses a combination of Wi-Fi and BLE technology, where each technology plays a part in the user discovery and broadcasting process. In such a system, a specific iBeacon device broadcasts information when a user is in proximity. In our experiments, we run a scenario where the system discovers users and disseminates information, and we later use the collected data to examine the system's performance and capability. The results show that our proposed approach has promising potential to become a powerful tool for discovery and broadcasting that can be easily implemented and used in business environments.
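As a rough illustration of the proximity trigger described above: iBeacon deployments commonly estimate distance from received signal strength using a log-distance path-loss model. The calibrated 1 m power, the path-loss exponent, and the 2 m threshold below are illustrative assumptions, not values from the paper.

```python
def estimate_distance(rssi, measured_power=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss estimate of beacon distance in meters.

    measured_power: calibrated RSSI (dBm) at 1 m that the beacon
    advertises; path_loss_exponent is ~2 in free space and higher
    indoors. Both defaults are illustrative assumptions.
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

def in_proximity(rssi, threshold_m=2.0):
    """Decide whether to trigger broadcasting: True when the
    estimated distance falls within threshold_m of the beacon."""
    return estimate_distance(rssi) <= threshold_m
```

For example, an RSSI equal to the calibrated 1 m power yields a 1 m estimate, while a signal 20 dB weaker yields roughly 10 m with exponent 2.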
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Face Recognition and Increased Reality System for Mobile Devices - ijtsrd
The objective of this article is to explain the problems of using facial recognition functions in current mobile devices, as well as to give a possible solution based on a client-server design. Sirojiddin Tavboev | Tavboev Islom, "Face Recognition and Increased Reality System for Mobile Devices", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd31384.pdf | Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/31384/face-recognition-and-increased-reality-system-for-mobile-devices/sirojiddin-tavboev
In a new article for Pharmaceutical Manufacturing and Packing Sourcer (PMPS), our Director of Design, Paul Greenhalgh discusses how design solutions can bridge the gap between a device’s inner workings and the end user.
Augmented reality, the new age technology, has widespread applications in every field imaginable. This technology has proven to be an inflection point in numerous verticals, improving lives and performance. In this paper, we explore the various possible applications of Augmented Reality (AR) in the field of Medicine. The objective of using AR in medicine, or generally in any field, is that AR helps motivate the user, makes sessions interactive, and assists in faster learning. In particular, we discuss the applicability of AR in the field of medical diagnosis. Augmented reality technology reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. Additionally, we believe that a much more pronounced effect can be achieved by bringing together the cutting-edge technology of AR and the lifesaving field of medical sciences. AR is a mechanism that could be applied in the learning process too. Similarly, virtual reality could be used in fields where more practical experience is needed, such as driving, sports, and neonatal care training.
Many problems in information retrieval and related fields depend on a reliable measure of the distance or similarity between objects that, most frequently, are represented as vectors. This paper considers vectors of bits. Such data structures implement entities as diverse as bitmaps that indicate the occurrences of terms and bitstrings indicating the presence of edges in images. For such applications, a popular distance measure is the Hamming distance. The value of the Hamming distance for information retrieval applications is limited by the fact that it counts only exact matches, whereas in information retrieval, corresponding bits that are close by can still be considered to be almost identical. We define a "Generalized Hamming distance" that extends the Hamming concept to give partial credit for near misses, and suggest a dynamic programming algorithm that permits it to be computed efficiently. We envision many uses for such a measure. In this paper we define and prove some basic properties of the "Generalized Hamming distance," and illustrate its use in the area of object recognition. We evaluate our implementation in a series of experiments, using autonomous robots to test the measure's effectiveness in relating similar bitstrings.
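The abstract does not give the exact cost model, but the dynamic program can be sketched as an edit-distance-style alignment over the positions of 1-bits: matching two 1-bits costs their positional displacement, and an unmatched 1-bit pays a fixed penalty. Both costs here are illustrative assumptions, not the paper's parameterization.

```python
def generalized_hamming(a, b, shift_cost=1.0, indel_cost=2.0):
    """Edit-distance-style DP over the positions of 1-bits.

    a, b: bit strings (e.g. "10110"). Matching a 1-bit at
    position i with one at position j costs shift_cost * |i - j|;
    an unmatched 1-bit costs indel_cost. This cost model is an
    illustrative assumption, not the paper's exact definition.
    """
    pa = [i for i, ch in enumerate(a) if ch == "1"]
    pb = [j for j, ch in enumerate(b) if ch == "1"]
    n, m = len(pa), len(pb)
    # dp[i][j] = min cost to align the first i 1-bits of a
    # with the first j 1-bits of b
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * indel_cost
    for j in range(1, m + 1):
        dp[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + shift_cost * abs(pa[i - 1] - pb[j - 1]),
                dp[i - 1][j] + indel_cost,   # unmatched 1-bit in a
                dp[i][j - 1] + indel_cost,   # unmatched 1-bit in b
            )
    return dp[n][m]
```

Note the partial credit: for "10010" versus "01010" the plain Hamming distance is 2, but the aligned distance is only 1 because the mismatched 1-bits are adjacent.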
Robot-Assisted Shopping for the Blind: Issues in Spatial Cognition and Produc... - Vladimir Kulyukin
Research on spatial cognition and blind
navigation suggests that a device aimed at helping blind people
to shop independently should provide the shopper with
effective interfaces to the locomotor and haptic spaces of the
supermarket. In this article, we argue that robots can act as
effective interfaces to haptic and locomotor spaces in modern
supermarkets. We also present the design and evaluation
of three product selection modalities—browsing, typing and
speech, which allow the blind shopper to select the desired
product from a repository of thousands of products.
Ergonomics-for-One in a Robotic Shopping Cart for the Blind - Vladimir Kulyukin
Assessment and design frameworks for human-robot teams
attempt to maximize generality by covering a broad range of
potential applications. In this paper, we argue that, in assistive
robotics, the other side of generality is limited applicability: it is
oftentimes more feasible to custom-design and evolve an
application that alleviates a specific disability than to spend
resources on figuring out how to customize an existing generic
framework. We present a case study that shows how we used a
pure bottom-up learn-through-deployment approach inspired by
the principles of ergonomics-for-one to design, deploy and
iteratively re-design a proof-of-concept robotic shopping cart for
the blind.
A privacy learning objects identity system for smartphones based on a virtu... - ijcsit
Smartphones are widely used today, with many features such as GPS map navigation, photo capture with camera equipment comparable to a digital camera, and internet connection via Wi-Fi or 3G; these devices function as computers. They are being used for various purposes including online learning, where learners can study from anywhere and anytime, for example in the street, at home, in the office, or at school. However, identifying a method by which teachers in these virtual environments can remember their learners' "faces" in the classroom, or manage the "Student Identification Number" (student ID or user), is not reliable when the teacher cannot see all of the learners in the class or know who is online from a particular account. In this paper, we propose a system, Android Virtual Learner Identify (AVLI), which collects images of the learning object's face captured directly from the camera, the location of the learner by identifying where the learner is studying, and configuration information including time, MAC and IP addresses, IMEI number, and location via GPS. The system then saves learner profiles to help the teacher or education managers on the Virtual Learning Environment (VLE) identify learning objects. We used the VLE that we built on the mobile.ona.vn domain. We implemented the AVLI prototype on an Android phone with password encryption and images taken directly from the camera to ensure that the information is transmitted and stored securely in the Virtual Learning Environment System Database (VLE Data) of learning objects, while preserving the ability to identify learning objects by a teacher or education manager.
Eye(I) Still Know! – An App for the Blind Built using Web and AI - Dr. Amarjeet Singh
This paper proposes eye(I) still know!, a voice-control solution for visually impaired people. The main purpose is that even though the blind cannot see, they can still know where to go and what to do! Nearly 60% of the world's total blind population lives in India. In a time when no one likes to rely on anyone, this is a small effort to make the blind independent individuals. This can be achieved using wireless communication, voice recognition, and image scanning. Using object identification, the application informs the user in advance about barriers in the path.
The software uses the camera of the device and scans all obstacles along with their distances from the user. This is followed by audio instructions through the audio output of the device, efficiently directing the user along his or her way.
Advanced Fuzzy Logic Based Image Watermarking Technique for Medical Images - IJARIIT
Segmentation algorithms vary with the type of medical image, such as MRI, CT, or US. The current work can be further extended to develop a GUI-tool-based approach for separating the ROI. Additionally, a new technique for separating the ROI from the original image that is applicable to all types of medical images can be developed. The separated ROI can be stored with its xmin, xmax, ymin, and ymax values so that, at the end of the embedding process and before transmitting the watermarked image, the segmented ROI can be attached to the watermarked image. Any medical image watermarking approach will be suitable if we segment the ROI from the medical image with these four values; the watermark can then be embedded in the whole medical image. In this paper we work with different scans, such as CT scans and brain scans, and our results are significantly better than others.
Digitizing Buzzing Signals into A440 Piano Note Sequences and Estimating Forager Traffic Levels from Images in Solar-Powered, Electronic Beehive Monitoring - Vladimir Kulyukin
Adapting Measures of Clumping Strength to Assess Term-Term Similarity - Vladimir Kulyukin
Automated information retrieval relies heavily on statistical regularities that emerge as terms are deposited to produce text. This paper examines statistical patterns expected of a pair of
terms that are semantically related to each other. Guided by a conceptualization of the text generation process, we derive measures of how tightly two terms are semantically associated.
Our main objective is to probe whether such measures yield reasonable results. Specifically, we examine how the tendency of a content-bearing term to clump, as quantified by previously
developed measures of term clumping, is influenced by the presence of other terms. This
approach allows us to present a toolkit from which a range of measures can be constructed.
As an illustration, one of several suggested measures is evaluated on a large text corpus built from an on-line encyclopedia.
Vision-Based Localization and Scanning of 1D UPC and EAN Barcodes with Relaxe... - Vladimir Kulyukin
V. Kulyukin & T. Zaman. "Vision-Based Localization and Scanning of 1D UPC and EAN Barcodes with Relaxed Pitch, Roll, and Yaw Camera Alignment Constraints." International Journal of Image Processing (IJIP), Volume (8), Issue (5), 2014, pp. 355-383.
Toward Blind Travel Support through Verbal Route Directions: A Path Inference... - Vladimir Kulyukin
The work presented in this article continues our investigation of such assisted navigation solutions where the
main emphasis is placed not on sensor sets or sensor fusion algorithms but on the ability of the travelers to interpret and
contextualize verbal route directions en route. This work contributes to our investigation of the research hypothesis that
we have formulated and partially validated in our previous studies: if a route is verbally described in sufficient and
appropriate amount of detail, independent VI travelers can use their O&M and problem solving skills to successfully
follow the route without any wearable sensors or sensors embedded in the environment.
In this investigation, we temporarily put aside the issue of how VI and blind travelers successfully interpret route
directions en route and tackle the question of how those route directions can be created, generated, and maintained by
online communities. In particular, we focus on the automation of path inference and present an algorithm that may be used
as part of the background computation of VGI sites to find new paths in the previous route directions written by online
community members, generate new route descriptions from them, and post them for subsequent community editing.
Wireless Indoor Localization with Dempster-Shafer Simple Support Functions - Vladimir Kulyukin
A mobile robot is localized in an indoor environment
using IEEE 802.11b wireless signals. Simple support
functions of the Dempster-Shafer theory are used to combine evidence
from multiple localization algorithms. Empirical results
are presented and discussed. Conclusions are drawn regarding
when the proposed sensor fusion methods may improve performance
and when they may not.
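For concreteness, a simple support function assigns mass s to a single focal hypothesis and 1 - s to the whole frame Θ. A minimal sketch of how Dempster's rule combines two such functions (a textbook illustration, not the paper's exact fusion pipeline):

```python
def combine_same_focus(s1, s2):
    """Dempster's rule for two simple support functions with the
    SAME focal hypothesis A, i.e. m(A) = s and m(Theta) = 1 - s.
    No conflict arises, so combined support is 1 - (1-s1)(1-s2)."""
    return 1.0 - (1.0 - s1) * (1.0 - s2)

def combine_disjoint_foci(s1, s2):
    """Two simple support functions with DISJOINT focal hypotheses
    A and B. The conflicting mass k = s1*s2 is normalized away."""
    k = s1 * s2
    if k >= 1.0:
        raise ValueError("total conflict; combination undefined")
    m_a = s1 * (1.0 - s2) / (1.0 - k)
    m_b = s2 * (1.0 - s1) / (1.0 - k)
    m_theta = (1.0 - s1) * (1.0 - s2) / (1.0 - k)
    return m_a, m_b, m_theta
```

Two localization algorithms that agree on the same landmark thus reinforce each other (0.6 and 0.5 combine to 0.8), while disagreeing algorithms have their conflict discounted.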
RFID in Robot-Assisted Indoor Navigation for the Visually Impaired - Vladimir Kulyukin
We describe how Radio Frequency Identification
(RFID) can be used in robot-assisted indoor navigation for
the visually impaired. We present a robotic guide for the
visually impaired that was deployed and tested both with
and without visually impaired participants in two indoor
environments. We describe how we modified the standard
potential fields algorithms to achieve navigation at moderate
walking speeds and to avoid oscillation in narrow spaces.
The experiments illustrate that passive RFID tags deployed
in the environment can act as reliable stimuli that trigger local
navigation behaviors to achieve global navigation objectives.
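For context, the textbook attractive/repulsive potential-field update that the paper modifies can be sketched as a single gradient step. The gains, influence radius, and step size below are arbitrary illustrative values; the paper's actual modifications for moderate walking speeds and narrow spaces are not reproduced here.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=1.0, step=0.1):
    """One normalized gradient step on the classic attractive plus
    repulsive potential field (a textbook sketch, not the paper's
    modified variant). pos, goal, obstacles are (x, y) tuples."""
    # Attractive force pulls linearly toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive forces push away from obstacles inside the
    # influence radius, growing sharply near each obstacle.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy) or 1.0  # avoid division by zero at the goal
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

The oscillation in narrow spaces mentioned above is a known weakness of this basic formulation: opposing repulsive forces from two close walls can alternate in dominance from step to step.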
RoboCart: Toward Robot-Assisted Navigation of Grocery Stores by the Visually ... - Vladimir Kulyukin
This paper presents RoboCart, a proof-of-concept
prototype of a robotic shopping assistant for the visually
impaired. The purpose of RoboCart is to help visually impaired
customers navigate a typical grocery store and carry purchased
items. The hardware and software components of the system
are presented. For localization, RoboCart relies on RFID tags
deployed at various locations in the store. For navigation, RoboCart relies on laser range finding. Experiences with deploying
RoboCart in a real grocery store are described. The current
status of the system and its limitations are outlined.
This paper examines the appropriateness of natural language dialogue (NLD) with assistive robots. Assistive robots are defined in terms of an existing human-robot interaction taxonomy. A
decision support procedure is outlined for assistive technology
researchers and practitioners to evaluate the appropriateness of
NLD in assistive robots. Several conjectures are made on when
NLD may be appropriate as a human-robot interaction mode.
A Wearable Two-Sensor O&M Device for Blind College Students - Vladimir Kulyukin
A major problem for visually impaired college students is independent campus
navigation. Many universities, such as Utah State University (USU), have no Orientation
and Mobility (O&M) instructors. Thus, visually impaired undergraduates must rely on
their friends, siblings, and even parents to learn their way around a large campus, which
reduces their sense of independence. This paper describes a wearable two-sensor O&M
device for visually impaired USU undergraduates and presents a single subject feasibility
test that estimates how a visually impaired navigator can use the device to learn new
routes on the USU campus.
On the Impact of Data Collection on the Quality of Signal Strength in Wi-Fi I... - Vladimir Kulyukin
Wi-Fi signals can be used to localize navigators at topological landmarks in indoor and
outdoor environments. A major issue with Wi-Fi topological localization is calibration.
This paper describes the impact of data collection on the quality of signal strength
signatures.
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind - Vladimir Kulyukin
In 2004, the Computer Science Assistive Technology Laboratory (CSATL) of Utah State University (USU) started a project whose objective is to develop RoboCart, a robotic shopping assistant for the
visually impaired. RoboCart is a continuation of our previous work on RG, a robotic guide for the visually impaired in structured indoor environments. The determinism provided by exteroception of passive RFID-enabled surfaces is desirable when dealing with dynamic and uncertain
environments where probabilistic approaches like Monte Carlo Markov
localization (MCL) may fail. We present the results of a pilot feasibility study with two visually impaired shoppers in Lee’s MarketPlace, a
supermarket in Logan, Utah.
Surface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s Needle - Vladimir Kulyukin
Surface-embedded passive radio frequency (PRF) exteroception is a
method whereby an action to be executed by a mobile unit is selected through a
signal received from a surface-embedded external passive RFID transponder. This
paper describes how Kepler’s hexagonal packing pattern is used to embed passive
RFID transponders into a carpet to create PRF surfaces. Proof-of-concept
experiments are presented that show how such surfaces enable mobile robots to
reliably accomplish point-to-point navigation indoors and outdoors. Two greedy
algorithms are presented for automated design of PRF surfaces. A theoretical extension
of the classic Buffon’s Needle problem from computational geometry is
presented as a possible way to optimize the packing of RF transponders on a
surface.
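For reference, the classic Buffon's Needle result that the paper extends: a needle of length L ≤ d dropped on a floor ruled with parallel lines a distance d apart crosses a line with probability 2L/(πd). A minimal sketch with a Monte Carlo check (the paper's own extension for transponder packing is not reproduced here):

```python
import math
import random

def buffon_crossing_probability(length, spacing):
    """Closed-form crossing probability for the classic Buffon's
    Needle problem, valid when needle length <= line spacing."""
    assert length <= spacing
    return 2.0 * length / (math.pi * spacing)

def simulate(length, spacing, trials=200_000, seed=42):
    """Monte Carlo check: drop needles at random positions and
    angles, and count how often they cross one of the lines."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0.0, spacing / 2.0)      # center distance to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)  # acute angle with the lines
        if y <= (length / 2.0) * math.sin(theta):
            hits += 1
    return hits / trials
```

For a needle of length 1 on lines spaced 2 apart, the crossing probability is 1/π ≈ 0.318, and the simulation converges to that value.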
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Eyesight Sharing in Blind Grocery Shopping: Remote P2P Caregiving through Cloud Computing
Vladimir Kulyukin, Tanwir Zaman, Abhishek Andhavarapu,
and Aliasgar Kutiyanawala
Department of Computer Science
Utah State University
Logan, UT, USA
{vladimir.kulyukin}@usu.edu
Abstract. Product recognition continues to be a major access barrier for visually impaired (VI) and blind individuals in modern supermarkets. R&D approaches to this problem in the assistive technology (AT) literature vary from automated vision-based solutions to crowdsourcing applications where VI clients send image identification requests to web services. The former struggle with run-time failures and scalability, while the latter must cope with concerns about trust, privacy, and quality of service. In this paper, we investigate a mobile cloud computing framework for remote caregiving that may help VI and blind clients with product recognition in supermarkets. This framework emphasizes remote teleassistance and assumes that clients work with dedicated caregivers (helpers). Clients tap on their smartphones' touchscreens to send images of products they examine to the cloud, where the SURF algorithm matches incoming images against its image database. Images, along with the names of the top 5 matches, are sent to remote sighted helpers via push notification services. A helper confirms the product's name if it is in the top 5 matches, or speaks or types the product's name if it is not. Basic quality of service is ensured through human eyesight sharing even when image matching does not work well. We implemented this framework in a module called EyeShare on two Android 2.3.3/2.3.6 smartphones. EyeShare was tested in three experiments with one blindfolded subject: one lab study and two experiments in Fresh Market, a supermarket in Logan, Utah. The results of our experiments show that the proposed framework may be used as a product identification solution in supermarkets.
1 Introduction
The term teleassistance covers a wide range of technologies that enable VI and blind individuals to transmit video and audio data to remote caregivers and receive audio assistance [1]. Research evidence suggests that the availability of remote caregiving reduces the psychological stress on VI and blind individuals when they perform various tasks in different environments [2].
A typical example of how teleassistance is used for blind navigation is the system developed by Bujacz et al. [1]. The system consists of two notebook computers: one is carried by the VI traveler in a backpack and the other is used by the remote sighted caregiver. The traveler transmits video through a chest-mounted USB camera and wears a headset (an earphone and a microphone) to communicate with the caregiver. Several indoor navigation experiments showed that VI travelers walked faster, at a steadier pace, and were able to navigate more easily when assisted by remote guides than when they navigated the same routes by themselves.
Our research group has applied teleassistance to blind shopping in ShopMobile, a mobile shopping system for VI and blind individuals [3]. Our end objective is to enable VI and blind individuals to shop independently using only their smartphones. ShopMobile is our most recent system for accessible blind shopping, following RoboCart and ShopTalk [4]. The system has three software modules: an eyes-free barcode scanner, an OCR engine, and a teleassistance module called TeleShop. The eyes-free barcode scanner allows VI shoppers to scan UPC barcodes on products and MSI barcodes on shelves. The OCR engine is being developed to extract nutrition facts from nutrition tables available on many product packages. TeleShop provides a teleassistance backup in situations when the barcode scanner or the OCR engine malfunctions.
The current implementation of TeleShop consists of a server running on the VI shopper's smartphone (Google Nexus One with Android 2.3.3/2.3.6) and a client GUI module running on the remote caregiver's computer. All client-server communication occurs over UDP. Images from the phone camera are continuously transmitted to the client GUI. The caregiver can start, stop, and pause the incoming image stream and change the image resolution and quality. Images of high resolution and quality provide more reliable detail but may cause the video stream to become choppy. Lower resolution images result in smoother video streams but provide less detail. The pause option holds the current image on the screen.
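The continuous UDP image stream can be illustrated with a minimal loopback sketch. The length-prefix framing below is our assumption for illustration; the paper does not specify TeleShop's actual wire format.

```python
import socket

def send_frame(sock, addr, frame, chunk=1024):
    """Send one image frame: a 4-byte size header, then fixed-size chunks."""
    sock.sendto(len(frame).to_bytes(4, "big"), addr)
    for i in range(0, len(frame), chunk):
        sock.sendto(frame[i:i + chunk], addr)

def recv_frame(sock):
    """Reassemble one frame; real code must handle UDP loss and reordering."""
    size = int.from_bytes(sock.recvfrom(4)[0], "big")
    buf = b""
    while len(buf) < size:
        data, _ = sock.recvfrom(2048)
        buf += data
    return buf

# Loopback demo: one socket plays the shopper's phone, the other the GUI.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = bytes(range(256)) * 10          # stand-in for one JPEG frame
send_frame(tx, rx.getsockname(), frame)
received = recv_frame(rx)
```

Smaller chunks per datagram trade throughput for a smoother stream, which mirrors the resolution/quality trade-off described above.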
TeleShop has so far been evaluated in two laboratory studies with Wi-Fi and 3G [3]. The first study was done with two sighted students, Alice and Bob. The second study was done with a married couple: a completely blind person (Carl) and his sighted wife (Diana). For both studies, we assembled four plastic shelves in our laboratory and stocked them with empty boxes, cans, and bottles to simulate an aisle in a grocery store. The shopper and the caregiver were in separate rooms. In the first study, we blindfolded Bob to act as a VI shopper. The studies were done on two separate days. The caregivers were given a list of nine products and were asked to help the shoppers find the products and read the nutrition facts on the products' packages or bottles. A voice connection was established between the shopper and the caregiver via a regular phone call. Alice and Bob took an average of 57.22 seconds to retrieve a product from the shelf and 86.5 seconds to read its nutrition facts. The corresponding times for Carl and Diana were 19.33 and 74.8 seconds, respectively [3].
In this paper, we present an extension of TeleShop, called EyeShare, that leverages cloud computing to assist VI and blind shoppers (clients) with product recognition in supermarkets. The client takes a still image of the product that he or she is currently examining and sends it to the cloud. The image is processed by an open source object recognition application that runs on a cloud server and returns the top 5 matches from its product database. The number 5 was chosen because a 5-item list easily fits on one Google Nexus One screen. The matches, in the form of a list of product names, are sent to the helper along with the original image through a push notification service. The helper uses his or her smartphone to select the correct product name from the list or, if the product's name is not found among the matches, to speak it into the smartphone. If speech recognition (SR) does not work, the helper types in the product's name. This framework is flexible in that various image recognition algorithms can be tested in the cloud. It is also possible to use no image recognition at all, in which case all product recognition is done by the sighted caregiver.
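The helper-side fallback chain just described (confirm from the top 5, else speak, else type, else ask for a new image) can be sketched as a small decision function. The function and argument names are illustrative, not part of EyeShare's API.

```python
def helper_response(top5, choice=None, spoken=None, typed=None):
    """Pick the product name a helper sends back to the client.

    Fallback order mirrors the text: a selection from the 5-item list,
    then a speech-recognition result, then typed text; None means the
    helper must ask the client to resend the image."""
    if choice is not None:
        return top5[choice]        # name was among the top 5 matches
    if spoken is not None:
        return spoken              # SR succeeded
    if typed is not None:
        return typed               # SR failed: typed fallback
    return None                    # unreadable image: request a resend

# Hypothetical product names for illustration.
top5 = ["Corn Flakes", "Oat Bran", "Tomato Soup", "Green Beans", "Cola"]
confirmed = helper_response(top5, choice=2)
dictated = helper_response(top5, spoken="Chicken Broth")
```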
The remainder of our paper is organized as follows. In Section 2, we present our cloud computing framework for remote caregiving, with which mobile devices form ad hoc peer-to-peer (P2P) communication networks. In Section 3, we describe three experiments in two different environments, a laboratory and a local supermarket, where a blindfolded individual and a remote sighted caregiver evaluated the system on different products. In Section 4, we present the results of our experiments. In Section 5, we discuss our investigation.
Fig. 1. Cloud Computing Framework for Remote Caregiving.
2 A Cloud Computing Framework for Remote P2P Caregiving
The cloud computing framework we have implemented consists of mobile devices that communicate with each other in an ad hoc P2P network. The devices have Google accounts for authentication and are registered with Google's C2DM (cloud to device messaging) service (http://code.google.com/android/c2dm/), a push notification service that allocates unique IDs to registered devices. Our framework assumes that the cloud computing services run on Amazon's Elastic Compute Cloud (EC2) (http://aws.amazon.com/ec2/). Other cloud computing services may be employed. We configured an Amazon EC2 Linux server with a 1 GHz processor and 512 MB RAM. The server runs an OpenCV 2.3.3 (http://opencv.willowgarage.com/wiki/) image matching application. Product images are saved in a MySQL database. The use of this framework requires that clients and helpers download the client and caregiver applications on their smartphones. The clients and helpers subsequently find each other and form an ad hoc P2P network via C2DM registration IDs.
Figure 1 shows this framework in action. A client sends a help request (Step 1). In EyeShare, this request consists of a product image. However, in principle, this request can be anything transmittable over available wireless channels such as Wi-Fi, 3G, 4G, Bluetooth, etc. The image is received by the Amazon EC2 Linux server, where it is matched against the images in the MySQL database.
Our image matching application uses the SURF algorithm [5]. The matching operation returns the top 5 matches and sends the names of the corresponding products, along with the URL that contains the client's original image, to the C2DM service (Step 2). Thus, the image is transmitted only once, in the help request. C2DM forwards the message to the caregiver's smartphone (Step 3). The helper confirms the product's name by selecting it from the list of the top 5 matches. If the top matches are incorrect, the helper uses SR to speak the product's name or, if SR does not work or is not available, types it in on the touchscreen. If the helper cannot determine the product's name from the image, the helper sends a resend request to the client. The helper's message goes back to the C2DM service (Step 4) and then on to the client's smartphone (Step 5). The helper application is designed in such a way that the helper does not have to interrupt his or her smartphone activities for too long to render assistance.
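The five steps of Fig. 1 can be walked through in a toy end-to-end simulation. The byte-overlap scorer below merely stands in for SURF descriptor matching, and all product names and function names are our illustration.

```python
def cloud_match(image, database, k=5):
    """Step 2 stand-in: rank database images against the incoming one by
    shared byte values (the real system uses SURF on the EC2 server)."""
    score = lambda name: len(set(image) & set(database[name]))
    return sorted(database, key=score, reverse=True)[:k]

def helper_verify(top5, ground_truth):
    """Steps 3-4: the helper confirms from the pushed 5-item list, or
    supplies the name by speech/typing when the matches are wrong."""
    if ground_truth in top5:
        return ("confirmed", ground_truth)
    return ("spoken", ground_truth)

# Tiny stand-in "image database": name -> raw image bytes.
database = {
    "Tomato Soup": b"red tomato soup can",
    "Corn Flakes": b"yellow corn flakes box",
    "Oat Bran":    b"oat bran cereal box",
}
top5 = cloud_match(b"tomato soup can", database)   # Steps 1-2
verdict = helper_verify(top5, "Tomato Soup")       # Steps 3-4
# Step 5: the verdict's name would be spoken on the client phone via TTS.
```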
2.1 Android Cloud to Device Messaging (C2DM) Framework
C2DM (http://code.google.com/android/c2dm/) takes care of message queuing and delivery. Push notifications ensure that the application does not need to keep polling the cloud server for new incoming requests. C2DM wakes up the Android application when messages are received through intent broadcasts. However, the application must be set up with the proper C2DM broadcast receiver permissions. In EyeShare, C2DM is used in two separate activities. First, C2DM forwards the message from the server to the helper application. This message consists of a formatted string of the client registration ID, the names of the top 5 product matches, and the URL containing the client's image. Clients' images are temporarily saved on the cloud-based Linux server and removed as soon as the corresponding help requests are processed. Second, C2DM is used when helper messages are sent back to clients.
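A C2DM push was, historically, an HTTP POST of form-encoded fields in which custom payload entries traveled as data.<key> pairs. The sketch below only builds such a request body without sending it; C2DM has since been retired in favor of GCM/FCM, and the payload field names are our illustration, not EyeShare's actual message format.

```python
from urllib.parse import urlencode, parse_qs

# Historical C2DM endpoint; real requests also carried an auth header.
C2DM_URL = "https://android.apis.google.com/c2dm/send"

def c2dm_body(registration_id, payload, collapse_key="eyeshare"):
    """Form-encode one push: routing fields plus data.<key> payload pairs."""
    fields = {"registration_id": registration_id, "collapse_key": collapse_key}
    for key, value in payload.items():
        fields["data." + key] = value
    return urlencode(fields)

body = c2dm_body(
    "APA91-example-registration-id",                    # hypothetical ID
    {"top5": "Corn Flakes|Oat Bran|Tomato Soup|Green Beans|Cola",
     "image_url": "http://example.com/req42.jpg"})      # hypothetical URL
decoded = parse_qs(body)
```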
2.2 Image Matching
We have used SURF (Speeded Up Robust Features) [5] as a black box image matching algorithm on our cloud server. SURF extracts unique key points and descriptors from images and later uses them to match indexed images against an incoming image. SURF uses an intermediate image representation called the Integral Image, which is computed from the input image. This intermediate representation speeds up calculations in rectangular areas. It is formed by summing up the pixel values of the x,y coordinates from the origin to the ends of the image. This makes computation time invariant to changes in size, which is useful when matching large images. The SURF detector is based on the determinant of the Hessian matrix. The SURF descriptor describes how pixel intensities are distributed within a scale-dependent neighborhood of each interest point detected by the Fast Hessian detector. Object detection using SURF is scale and rotation invariant and does not require long training. The fact that SURF is rotation invariant makes the algorithm useful in situations where image matching works with object images taken at different orientations than the images of the same objects used in training.
3 Experiments
We evaluated EyeShare in product recognition experiments at two locations. The first
study was conducted in our laboratory. The second and third studies were conducted
at Fresh Market, a local supermarket in Logan, Utah.
3.1 A Laboratory Study
We assembled four shelves in our laboratory and placed on them 20 products: bottles, boxes, and cans. The same setup was successfully used in our previous experiments on accessible blind shopping [3, 4]. We created a database of 100 images. Each of the 20 products on the shelves had 5 images taken at different orientations. The SURF algorithm was trained on these 100 images. A blindfolded individual was given a Google Nexus One smartphone (Android 2.3.3) with the EyeShare client application installed on it. A sighted helper was given another Google Nexus One (Android 2.3.3) with the EyeShare helper application installed on it.
The blindfolded client was asked to take each product from the assembled shelves and recognize it. The client took a picture of the product by tapping the touchscreen. The image was sent to the cloud Linux server, where it was processed by the SURF algorithm. The names of the top 5 matched products were sent to the helper for verification, along with the URL of the original image, through C2DM. The helper, located in a different room in the same building, selected the product's name from the list of the top matches and sent the product's name back to the client. If the product's name was not in the list, the helper spoke the name of the product or, if speech was not recognized after three attempts, typed in the product's name on the virtual touchscreen keyboard. The run for an individual product was considered complete when the product's name was spoken on the client's smartphone through TTS. Thus, the total run time (in seconds) for each run included all five steps given in Fig. 1.
3.2 Store Experiments
The next two experiments were executed in Fresh Market, a local supermarket in Logan, Utah. Prior to the experiments, we added 270 images to the image database used in the laboratory study. We selected 45 products from 9 aisles (5 products per aisle) in the supermarket and took 6 images at different rotations for every product. The products included boxes, bottles, cans, and bags. We biased our selection toward products that an individual can hold in one hand. SURF was retrained on these 370 images (100 images from the lab study and 270 new ones).
The same blindfolded subject who participated in the laboratory study was given a Samsung Galaxy S2 smartphone (Android 2.3.6) with the EyeShare client application installed on it. The client used a 4G data plan. The same helper who participated in the laboratory study was given a Google Nexus One (Android 2.3.6) with the EyeShare helper application installed on it. The helper was located in a building approximately one mile away from the supermarket. The helper used a Wi-Fi connection.
The first set of experiments was confined to the first three aisles of the supermarket and lasted for 30 minutes. In each aisle, three products from the database and three products not from the database were chosen by a research assistant who went to the supermarket with the blindfolded subject. The assistant gave each product to the subject, who was asked to use the EyeShare client application to recognize the product. There was no training involved because it was the same blindfolded subject who did the laboratory study. The subject was given 16 products, one product at a time, by the assistant. One experimental run began at the time when the subject was given a product and went on until the time when the subject's smartphone received the product's name and read it out to the subject through TTS.
The second set of experiments was conducted in the same supermarket on a different day with the same subject and helper. The experiments lasted 30 minutes. Since, as explained in the discussion section, image matching did not perform as well as we had hoped in the first supermarket study, we did not do any image matching in the second set of experiments. All product recognition was done by the remote sighted helper. The subject was given 17 products, one product at a time, taken from the next three aisles of the supermarket by the assistant. The experimental run times were computed in the same way as they were in the first supermarket study.
4 Results
The results of the experiments are summarized in Table 1. Column 1 gives the environments where the experiments were executed. Column 2 gives the number of products used in the experiments in the corresponding environments. Column 3 gives the mean time (in seconds) of the experimental runs. Column 4 gives the standard deviations of the corresponding mean time values. Column 5 gives the number of times the correct product was found in the top 5 matches. Column 6 gives the mean number of SR attempts. Column 7 gives the number of SR failures, when the helper had to type the product names on the touchscreen keyboard after attempting to use SR three times. In all experiments, all products were successfully recognized by the blindfolded subject. As can be seen in Table 1, in supermarket study 1, after our image database had grown in size, there were no correct product names in the top 5 matches. Consequently, we decided not to use SURF in supermarket study 2. In supermarket study 1, there were three cases when the helper requested the client to send another image of a product because he could not identify the product's name from the original image. In supermarket study 1, there was also one brief (several seconds) loss of Wi-Fi connection on the helper's smartphone.
Table 1. Experimental results.

Environment  # Products  Mean Time (s)  STD     Top 5  Mean SR  SR Failures
Lab          16          40             .00021  8      1.1      0
Store 1      16          60             .00033  0      1.2      2
Store 2      17          60             .00081  0      1.1      3
5 Discussion
Our study contributes to the recent body of research that addresses various aspects of independent blind shopping through mobile and cloud computing (e.g., [6, 7, 8]). Our approach differs from these studies in its emphasis on dedicated remote caregiving. It addresses, at least to some extent, both the image recognition failures of fully automated solutions and the concerns about trust, privacy, and basic quality of service of pure crowdsourcing approaches. Dedicated caregivers alleviate image recognition failures through human eyesight sharing. Since dedicated caregiving is more personal and trustworthy, clients are not required to post image recognition requests on open web forums, which allows them to preserve more privacy. Interested readers may watch our research videos at www.youtube.com/csatlusu for more information on our accessible shopping experiments and projects.
The experiments show that the average product recognition time is within one minute. The results demonstrate that SR is a viable option for product naming. We attribute the poor performance of SURF in the first supermarket study to our failure to properly parameterize the algorithm. As we gain more experience with SURF, we may be able to improve the performance of automated image matching. However, database maintenance may be a more serious long-term concern for automated image matching unless there is direct access to the supermarket's inventory control system.
Our findings should be interpreted with caution, because we used only one blindfolded subject in the experiments. Nonetheless, they may serve as a basis for future research on remote teleassisted caregiving in accessible blind shopping. Our experience with the framework suggests that teleassistance may be a feasible option for VI individuals in modern supermarkets. Dedicated remote caregiving can be applied not only to product recognition but also to assistance with cash payments and supermarket navigation. It is a relatively inexpensive solution, because the only required hardware device is a smartphone with a data plan.
As the second supermarket study suggests, cloud-based image matching may not be necessary. The use of mobile phones as the means of caregiving allows caregivers to provide assistance from the comfort of their homes or offices or on the go. As data plans move toward 4G network speeds, we can expect faster response times and better quality of service. Faster network connections may, in time, make it feasible to communicate via streaming video.
References
1. Bujacz, M., Baranski, P., Moranski, M., Strumillo, P., and Materka, A. "Remote Guidance for the Blind - A Proposed Teleassistance System and Navigation Trials." In Proceedings of the Conference on Human System Interactions, pp. 888-892, IEEE, Krakow, Poland, 2008.
2. Peake, P. and Leonard, J. "The Use of Heart-Rate as an Index of Stress in Blind Pedestrians." Ergonomics, 1971.
3. Kutiyanawala, A., Kulyukin, V., and Nicholson, J. "Teleassistance in Accessible Shopping for the Blind." In Proceedings of the 2011 International Conference on Internet Computing, ICOMP Press, pp. 190-193, July 18-21, 2011, Las Vegas, USA.
4. Kulyukin, V. and Kutiyanawala, A. "Accessible Shopping Systems for Blind and Visually Impaired Individuals: Design Requirements and the State of the Art." The Open Rehabilitation Journal, ISSN: 1874-9437, Volume 2, 2010, pp. 158-168, DOI: 10.2174/1874943701003010158.
5. Bay, H., Tuytelaars, T., and Van Gool, L. "SURF: Speeded Up Robust Features." Computer Vision-ECCV, pp. 404-417, Springer-Verlag, 2006.
6. Tsai, S.S., Chen, D., Chandrasekhar, V., Takacs, G., Ngai-Man, C., Vedantham, R., Grzeszczuk, R., and Girod, B. "Mobile Product Recognition." In Proceedings of the International Conference on Multimedia (MM '10), ACM, New York, NY, USA, pp. 1587-1590. DOI: 10.1145/1873951.1874293, http://doi.acm.org/10.1145/1873951.1874293
7. Girod, B., Chandrasekhar, V., Chen, D.M., Ngai-Man, C., Grzeszczuk, R., Reznik, Y., Takacs, G., Tsai, S.S., and Vedantham, R. "Mobile Visual Search." Signal Processing Magazine, IEEE, vol. 28, no. 4, pp. 61-76, July 2011. DOI: 10.1109/MSP.2011.940881.
8. Von Reischach, F., Michahelles, F., Guinard, D., Adelmann, R., Fleisch, E., and Schmidt, A. "An Evaluation of Product Identification Techniques for Mobile Phones." In Proceedings of the 2nd IFIP TC13 Conference in Human-Computer Interaction (Interact 2009), Uppsala, Sweden.