1. Elevator Controllers
The clearest example is the sign languages of deaf communities handed down from
generation to generation (e.g. ). Kendon notes that ‘the more generalized circumstances are,
the more complex systems become’ (, p. 292). Thus, systems restricted to a specific type of
interaction—say, operating heavy machinery—do not face pressures to adopt greater
complexity because they are not used frequently enough, or in enough different scenarios,
to require significant modification. When human gesture systems are used frequently in a
variety of situations—as in the case of the sign languages of the Plains Indians and
Australian Aborigines—they begin to take on the complexities of spoken language. Users can
thus control the elevator from their mobile phones and avoid using its buttons, which in
normal operation are touched several times a day by virtually every employee and visitor to
the building. "We are developing this new technology in cooperation with major elevator
suppliers and based on the requirements of our customers all over the world," adds Ondřej
Langr.
Specifically, as shown, sub-action 1 is an upward and outward rotating motion starting from a
completely down position and ending at a halfway point where the user's arm is perpendicular
to the ground. Sub-action 2 is a second movement in which the arm continues upward while
the user's hand rotates in toward the user. As shown, the vector for each sub-action is a
collection of sub-vectors, one for each frame the movement passes through. These
sub-action vectors can then be concatenated into an overall vector for the gesture.
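The frame-wise composition described above can be sketched in Python. This is a minimal illustration only; the vector layout and the sample motion values are assumptions, not taken from the source.

```python
# Minimal sketch: a sub-action is a sequence of per-frame motion vectors
# (dx, dy, dz); the overall gesture vector is the concatenation of its
# sub-action vectors. All sample values below are illustrative only.

def sub_action_vector(frames):
    """Flatten per-frame motion vectors into one sub-action vector."""
    return [component for frame in frames for component in frame]

def gesture_vector(sub_actions):
    """Concatenate sub-action vectors into an overall gesture vector."""
    return [c for sa in sub_actions for c in sub_action_vector(sa)]

# Sub-action 1 (up and outward) and sub-action 2 (up and inward),
# each sampled over three frames of (dx, dy, dz) motion.
sub1 = [(0.1, 0.5, 0.0), (0.2, 0.5, 0.0), (0.2, 0.4, 0.0)]
sub2 = [(-0.1, 0.4, 0.0), (-0.2, 0.4, 0.0), (-0.2, 0.3, 0.0)]
g = gesture_vector([sub1, sub2])
print(len(g))  # 2 sub-actions x 3 frames x 3 components = 18
```

Keeping each sub-action as its own flat vector before concatenation makes it straightforward to match sub-actions independently, as the classifier-per-sub-action approach discussed later requires.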
The technical effects and benefits of the detection system 200 include gesture-based door
operations that improve the passenger experience, along with gesture-based hall call
capabilities. They also include eliminating stuck-door problems caused by improper
passenger use of the tactile sensors, which are responsible for major service
turn-backs. Although shown and described with a
roping system including tension member 107, elevator systems that employ other methods
and mechanisms of moving an elevator car within an elevator shaft may employ
embodiments of the present disclosure. For example, embodiments may be employed in
ropeless elevator systems using a linear motor to impart motion to an elevator car.
Embodiments may also be employed in ropeless elevator systems using a hydraulic lift to
impart motion to an elevator car. FIG. 1 is merely a non-limiting example presented for illustrative
and explanatory purposes. The controller 115 is located, as shown, in a controller room 121
of the elevator shaft 117 and is configured to control the operation of the elevator system
101, and particularly the elevator car 103.
An interface which requires a variety of input modalities would likely require a hand tracking
system, where the user could touch a floating virtual keyboard with their fingers. Another
consideration here is that if you are displaying icons on a screen that you do not expect the
user to actually touch, you will need to design the system accordingly. At CES 2015, Intel
RealSense Technology displayed a proof of concept demo where a ‘holographic’ floating
piano keyboard could be played by a user without touching a screen or physical keys. It used
a combination of mirrors and lenses to project a floating keyboard. For example, a touchless
ATM interface could be expected to be accessed by large numbers of people. While the
modern ATM relies on a limited number of physical buttons that perform a number of tasks
based on the on-screen dialog, replacing such a system with a touchless one is not trivial. An
ATM that used voice commands would not be ideal, since privacy is a concern here.
As shown, a system starts in state 0, where no action has been recognized, started, or even
partially recognized. If no sub-action is detected, the system remains in state 0, as
indicated by the “no sub-action” loop. When sub-action 1 is detected, the system moves into
state 1, in which the system has partially recognized a gesture and is actively searching for
another sub-action. If no further sub-action is detected within a set time, the system
returns to state 0.
However, additional features could be incorporated in the future to improve ease of use
and intuitiveness. The usability of gesture control for a robot can be enhanced when
complemented by a graphical interface with conventional elements that gives the user
visual feedback. Visual feedback can be achieved by using augmented reality or virtual
reality technology to provide real-time information and instructions. Contactless tactile
feedback can be provided by integrating ultrasonic force-field devices around the control
area. The evaluation of the contactless robot control system is described in this section.
Thus, logic is included in the memory 108 that allows identification of remote control device
commands that operate the controlled device. Alternatively, the electronic device 400 may be
configured to control a specific controlled device. Thus, such an embodiment would have
specific, and predefined, remote control device commands residing in the RCD command
database 124. Also, the position of the user's hand at the ending location 306 of the second
hand gesture was the closed hand or fist position. A similar closed hand or fist position at
the ending location 306 was used to define the fast-forward play hand gestures illustrated in
FIG. 3I. Here, the position of the user's hand in the first gesture, followed by the second hand
gesture ending in the closed hand or fist position, defined the intended command to select a
single channel-up change. In some embodiments, the position of the user's hand has a
predefined, unique meaning.
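One way to model this kind of command definition, where a command is determined by the gesture sequence together with the final hand position, is a simple lookup table. The gesture names and table entries below are hypothetical, chosen only to mirror the channel-up and fast-forward examples above.

```python
# Hypothetical command table: the intended command is looked up from the
# pair (hand-gesture sequence, final hand position), matching the idea
# that an ending fist position helps define the command. All names and
# entries here are illustrative assumptions, not from the source.

COMMANDS = {
    (("swipe_up", "swipe_up"), "fist"): "channel_up",
    (("swipe_right", "swipe_right"), "fist"): "fast_forward",
}

def decode_command(gestures, end_position):
    """Return the command for this gesture sequence and end pose, or None."""
    return COMMANDS.get((tuple(gestures), end_position))

print(decode_command(["swipe_up", "swipe_up"], "fist"))  # channel_up
```

Because the end pose is part of the key, the same swipe sequence ending in an open hand would decode to no command, which is one way to realize the "predefined, unique meaning" of the final hand position.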
In contrast, if all six sub-vectors are recognized, then a strong detection was made. Losing
true positives is not critical, because the user can see when the elevator has actually been
called or when the gesture has not been detected, as explained above (see FIG. 4). This way,
the user can repeat the gesture if it is not detected the first time. Furthermore, we can
build a second state machine that accumulates evidence of the gesture detection over time.
FIG. 8 includes an example of a state machine that can be used to detect a complex gesture
as consecutive sub-actions (see FIG. 7), where each sub-action is detected by a specialized
classifier.
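One way to accumulate this per-frame evidence can be sketched as follows. The cosine-similarity matching and the 0.9 threshold are assumptions for illustration; the source only states that all six sub-vectors must be recognized for a strong detection.

```python
import math

# Sketch: each observed per-frame sub-vector is matched against the
# corresponding template sub-vector, and the detection is "strong" only
# when all six match. Cosine similarity and the 0.9 threshold are
# illustrative assumptions, not taken from the source.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def detect(observed, template, threshold=0.9):
    """Return (matched_count, strong) over the template's sub-vectors."""
    matches = sum(1 for o, t in zip(observed, template)
                  if cosine(o, t) >= threshold)
    return matches, matches == len(template)
```

A weaker detection (some but not all sub-vectors matched) could still be surfaced to the user, who, as noted above, can simply repeat the gesture when the elevator is visibly not called.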
It was also suggested that, because gesturing is a natural, automatic behaviour, the system
must be adjusted to avoid false responses to movements that were not intended as system
inputs. This is particularly important when industrial robots could cause damage to people
and surrounding objects through false triggers. At block 541, the gesture detection unit
stores the pattern as a non-valid gesture. At block 570, the gesture detection unit stores the
pattern as a valid gesture. At block 580, the gesture detection unit sends a command to
execute the door-open operation. The process flow 400 begins at block 410, where the
gesture detection unit reads an elevator door status. At decision block 420, the gesture
detection unit determines whether the elevator door status is open or closed.
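The blocks of process flow 400 can be sketched as follows. The exact branch structure between the blocks is an assumption, since the source lists the blocks but not every transition, and the callables stand in for the elevator interfaces.

```python
# Hedged sketch of process flow 400. The branching is assumed: the flow
# only proceeds when the door is closed, non-valid patterns are stored
# at block 541, and valid patterns are stored at block 570 before the
# door-open command is sent at block 580.

def process_flow_400(read_door_status, read_pattern, is_valid_gesture,
                     store, execute_door_open):
    door_status = read_door_status()       # block 410: read door status
    if door_status == "open":              # decision block 420
        return                             # nothing to do while open
    pattern = read_pattern()
    if not is_valid_gesture(pattern):
        store(pattern, valid=False)        # block 541: non-valid gesture
    else:
        store(pattern, valid=True)         # block 570: valid gesture
        execute_door_open()                # block 580: door-open command
```

Storing non-valid patterns as well as valid ones is consistent with the earlier point about false triggers: a record of rejected movements gives the system material for tuning out unintended inputs.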
The control system is designed to ensure that the user interface is comfortable and intuitive
to use without significant training, and that the user can produce input with simple hand
movements. The BS ISO/IEC and BS ISO provide guidelines for the design of gesture-based
interfaces, and a number of these recommendations were considered during the design of
the developed system. The system consists of a small industrial robot with six degrees of
freedom, a control system with software written in C#, and a hand-tracking sensor. A Leap
Motion sensor is used to capture the user's hand positions and orientations, and the control
software processes the input and sends signals to the robot controller. A decoder
programme embedded in the robot controller receives signals from the control software
through a TCP server and actuates the robot to perform actions.
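The control-software side of this link could look roughly like the following sketch. The newline-terminated "x,y,z,roll,pitch,yaw" message format is an assumption, since the source does not specify the wire protocol the decoder programme expects.

```python
import socket

# Hypothetical sketch of the control-software side: a hand pose from the
# tracking sensor is packed into a small newline-terminated text message
# and sent to the robot controller's TCP server. The field layout
# (x,y,z,roll,pitch,yaw) is an illustrative assumption.

def encode_pose(position, orientation):
    """Pack a hand pose into one line the decoder programme can parse."""
    fields = list(position) + list(orientation)
    return (",".join(f"{v:.3f}" for v in fields) + "\n").encode("ascii")

def send_pose(host, port, position, orientation):
    """Open a TCP connection to the robot controller and send one pose."""
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(encode_pose(position, orientation))
```

A simple line-oriented text format like this keeps the C# decoder trivial to implement; a production system would more likely use a framed binary protocol with sequence numbers so dropped or reordered poses can be detected.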
However, if the system does recognize sub-action 2, it will transition to state 2, in which the
system has detected the gesture and will act in accordance with it. In the example of
FIGS. 6 and 8, the system, which can be an elevator system, would call an elevator car to
take the user upward in the building. According to another embodiment, a user may move
along the path 270 only to decide to no longer take the elevator.
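The transitions described above (state 0 to state 1 on sub-action 1, a timeout back to state 0, and state 1 to state 2 on sub-action 2) can be sketched as a small state machine. The timeout value and the class interface are illustrative assumptions; the source only says "a set time".

```python
import time

# Minimal sketch of the three-state gesture recognizer described above.
# State 0: nothing recognized; state 1: sub-action 1 seen, waiting for
# sub-action 2; state 2: full gesture detected (e.g. call the elevator).
# TIMEOUT and the injectable clock are assumed details for illustration.

TIMEOUT = 2.0  # seconds to wait in state 1 before falling back to state 0

class GestureStateMachine:
    def __init__(self, timeout=TIMEOUT, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.state = 0
        self.entered_state1 = None

    def feed(self, sub_action):
        """sub_action is 1, 2, or None (no sub-action this frame)."""
        now = self.clock()
        if self.state == 1 and now - self.entered_state1 > self.timeout:
            self.state = 0  # timed out waiting for sub-action 2
        if self.state == 0 and sub_action == 1:
            self.state = 1  # gesture partially recognized
            self.entered_state1 = now
        elif self.state == 1 and sub_action == 2:
            self.state = 2  # gesture complete: take the detected action
        return self.state
```

Injecting the clock keeps the timeout behaviour testable; in deployment the default monotonic clock would be used, and reaching state 2 would trigger the hall call.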
Participants were asked to score the system against three ergonomic criteria: physical
workload, intuitiveness, and enjoyableness. These criteria are similar to those described by
Bhuiyan and Picking, but the questionnaire used in this test was simplified.
In a safety critical environment like an elevator though, having a physical interface as back
up in case of emergency or fire would also be something to consider. In safety critical
applications, edge case scenarios are as important to design for as the expected usages.
Now more than ever there is a substantially reduced appetite for touching doors, screens,
kiosks and other commonplace control systems in the public domain. Rising to meet this
demand, Terabee has created the Evo Swipe sensor for simple gesture control. Simply swipe
your hand through the air right to left, left to right, or up or down, to control machines, doors,
information displays and many other systems.
A similar problem exists in moving these same security design practices to an even more
constrained platform in a wearable. Using the non-traditional output and input methods of
smartphones and wearables as the framework for delivering the security interface can either
increase or decrease the usability. This study uses a metric for security usability based on
the energy or effort required from the user to successfully navigate the security interface of
wearable devices. Understanding and measuring the demands on the user from both a
cognitive and physical perspective is key to establishing and retaining security usability for
the largest number of target users. In block 302, a library 207 or database of reference
gestures is established. Library 207 of reference gestures may be provided by a
manufacturer of an elevator system. The library of reference gestures may be provided by an
elevator operator, a building owner, or any other person or entity.
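A minimal sketch of such a reference library and a nearest-template lookup follows. The gesture names and template vectors are illustrative placeholders, not values from the source.

```python
import math

# Sketch of a reference-gesture library (block 302 / library 207): a
# mapping from gesture name to a template vector, plus nearest-template
# lookup by Euclidean distance. Names and vectors are assumed examples.

library = {
    "call_up":   (0.2, 0.9, 0.0),
    "call_down": (0.2, -0.9, 0.0),
    "door_open": (0.9, 0.0, 0.0),
}

def classify(observed, refs=library):
    """Return the name of the reference gesture closest to `observed`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(refs, key=lambda name: dist(observed, refs[name]))

print(classify((0.15, 0.8, 0.05)))  # closest template: call_up
```

A dictionary keyed by gesture name also makes it easy for a manufacturer, elevator operator, or building owner to ship or extend the library, as the text allows.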
We can leverage the existing network of PIR sensors to detect gestures made by users. The
PIR sensors detect movement, and we can ask the user to move a hand in a characteristic
way in front of the sensor. FIG. 8 depicts a state diagram for a gesture being processed in
accordance with one or more embodiments. In order to account for a dynamic gesture, which
produces different motion vectors at consecutive times, one can break the gesture down into
a sequence of sub-actions, as illustrated in FIG. 7. Based on this, one can follow a number of
approaches, such as the two described herewith, or others not described.