This work presents WonderLens, a system of optical lenses and mirrors that enables tangible interactions on printed paper. When users perform spatial operations on the optical components, they deform the visual content printed on the paper, thereby providing dynamic visual feedback on user interactions without any display device. A magnetic unit embedded in each lens and mirror allows the unit to be identified and tracked by an analog Hall-sensor grid placed behind the paper, so the system can also provide auditory and visual feedback at different levels of embodiment, further enhancing the interactivity of the printed content on the physical paper.
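The abstract does not publish the tracking algorithm; as a minimal sketch of the idea, a magnet's 2D position over a Hall-sensor grid can be estimated as the reading-weighted centroid of the sensor cells. The grid values and the 10 mm sensor pitch below are hypothetical:

```python
def estimate_position(grid, pitch_mm=10.0):
    """Estimate a magnet's 2D position over a Hall-sensor grid as the
    reading-weighted centroid of the sensor cell coordinates."""
    total = sum(sum(row) for row in grid)
    if total == 0:
        return None  # no magnet near the grid
    x = sum(v * c * pitch_mm for row in grid for c, v in enumerate(row)) / total
    y = sum(v * r * pitch_mm for r, row in enumerate(grid) for v in row) / total
    return x, y

# A magnet centred over cell (1, 1) of a 3x3 grid:
grid = [[0, 1, 0],
        [1, 4, 1],
        [0, 1, 0]]
print(estimate_position(grid))  # (10.0, 10.0)
```

A real system would additionally use the bipolar field pattern of each magnetic unit to tell tangibles apart, which this sketch omits.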
[UIST 2015] FlexiBend: Enabling Interactivity of Multi-Part, Deformable Fabrications (Rong-Hao Liang)
Rong-Hao from National Taiwan University and Keio University introduces FlexiBend, a shape-sensing strip that enables interactivity for deformable and multi-part fabrications. FlexiBend uses a single strain gauge array embedded in a flexible strip to reliably track user interactions by sensing deformations in the strip. It supports fabrications with multiple movable parts like buttons, sliders, and dials. The presenter demonstrates how FlexiBend can turn physical objects like a toy pistol into computer input devices to control a game. FlexiBend provides an easy way to add interactivity to fabrications through its simple installation in 3D printed objects.
The document discusses different types of sensors used for 3D digitization, including passive and active vision techniques. It describes synchronization circuit-based dual photocells that improve measurement stability and repeatability. Position sensitive detectors are discussed that can measure the position of a light spot in one or two dimensions on a sensor surface to acquire high-resolution 3D images. A proposed sensor architecture combines color and range sensing for applications like hand-held 3D cameras.
Fingerprint scanners work by capturing an image of a fingerprint's unique ridge pattern using either optical or capacitive sensing. Optical scanners use light to generate a digital image, while capacitive scanners detect ridges and valleys through their effect on electrical current. Both generate images of the ridges and valleys that are then analyzed and compared to stored fingerprints by examining distinctive minutiae points rather than attempting to directly overlay entire fingerprint images. Fingerprint scanning provides identification based on "who you are" by verifying unique physical characteristics rather than what users know or possess.
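Minutiae matching as described above can be sketched in a toy form: align two minutiae sets on their centroids and count how many points in one set have a counterpart nearby in the other. Real matchers also compare ridge angles and handle rotation; the coordinates and tolerance here are illustrative only:

```python
import math

def match_score(minutiae_a, minutiae_b, tol=5.0):
    """Fraction of minutiae in A with a counterpart in B within `tol`
    units, after aligning both sets on their centroids."""
    def centered(pts):
        cx = sum(x for x, y in pts) / len(pts)
        cy = sum(y for x, y in pts) / len(pts)
        return [(x - cx, y - cy) for x, y in pts]
    a, b = centered(minutiae_a), centered(minutiae_b)
    hits = sum(1 for p in a if any(math.dist(p, q) <= tol for q in b))
    return hits / len(a)

stored = [(10, 10), (40, 12), (25, 30)]
probe  = [(12, 11), (42, 13), (27, 31)]   # same pattern, slightly shifted
print(match_score(stored, probe))  # 1.0
```

Matching on relative minutiae positions rather than raw images is what makes the comparison robust to translation of the finger on the sensor.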
Biometrics are automated methods of recognizing a person based on a physiological or behavioral characteristic. Among the features measured are face, fingerprints, hand geometry, handwriting, iris, retinal, vein, and voice. Biometric data are separate and distinct from personal information.
The mouse uses an LED and CMOS sensor to track its movement across a surface, sending coordinates to the computer hundreds of times per second to smoothly move the cursor. The keyboard uses a processor and circuitry to detect which keys are pressed by completing circuits and comparing the locations to a character map. A touchpad senses finger pressure and movement across electrode grids to determine where the pointer should move on screen.
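The optical-mouse principle in the paragraph above, estimating motion by comparing successive sensor frames, can be sketched with brute-force block matching over small integer shifts. The tiny frames and search radius are invented for illustration; real sensors do this in dedicated hardware:

```python
def shift_between(prev, curr, max_shift=2):
    """Estimate the (dx, dy) motion between two small grayscale frames
    by brute-force block matching, preferring the smallest shift on
    ties (the principle behind optical mouse motion estimation)."""
    h, w = len(prev), len(prev[0])
    shifts = sorted(
        ((dx, dy) for dy in range(-max_shift, max_shift + 1)
                  for dx in range(-max_shift, max_shift + 1)),
        key=lambda s: s[0] ** 2 + s[1] ** 2)
    best, best_err = (0, 0), float("inf")
    for dx, dy in shifts:
        err = n = 0
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    err += (prev[y][x] - curr[ny][nx]) ** 2
                    n += 1
        if n and err / n < best_err:
            best, best_err = (dx, dy), err / n
    return best

frame1 = [[0, 0, 0, 0],
          [0, 9, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
frame2 = [[0, 0, 0, 0],
          [0, 0, 9, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(shift_between(frame1, frame2))  # (1, 0): surface moved one pixel right
```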
Sensors on 3D Digitization seminar report (Vishnu Prasad)
The document discusses sensors for 3D digitization. It describes two main strategies for 3D vision - passive vision which analyzes ambient light, and active vision which structures light using techniques like laser range cameras. It then discusses an auto-synchronized scanner that can provide registered 3D surface maps and color data by scanning a laser spot across a scene and detecting the reflected light with a linear sensor, producing registered images with spatial and color information.
Digital 3D imaging can be accelerated using advances in VLSI technology. High-resolution 3D images can be captured using laser-based vision systems, which produce 3D information insensitive to background illumination and surface texture. Complete images of featureless surfaces invisible to the human eye can be generated. Sensors for 3D digitization include position sensitive detectors and laser sensors. Continuous response position sensitive detectors provide precise centroid measurement while discrete response detectors are slower but more accurate. An integrated sensor architecture is proposed using a combination of these sensors to simultaneously measure color and 3D.
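The centroid measurement mentioned above feeds directly into laser triangulation: the detector locates the spot by an intensity-weighted centroid, and the spot's offset converts to range via the standard relation z = f·b/x. The detector readings, focal length, baseline, and pitch below are illustrative values, not figures from the document:

```python
def range_from_spot(readings, focal_mm, baseline_mm, pitch_mm):
    """Laser triangulation sketch: locate the laser spot on a 1-D
    position sensitive detector by intensity-weighted centroid, then
    convert its offset into a range using z = f * b / x."""
    total = sum(readings)
    # Sub-element centroid index, scaled to millimetres on the detector.
    x_mm = pitch_mm * sum(i * v for i, v in enumerate(readings)) / total
    return focal_mm * baseline_mm / x_mm

# Spot centred at element 2.5 on a 0.1 mm pitch detector:
print(range_from_spot([0, 1, 8, 8, 1, 0],
                      focal_mm=10, baseline_mm=50, pitch_mm=0.1))  # ~2000.0 mm
```

The sub-element (sub-pixel) centroid is what gives continuous-response detectors their precision compared with simply taking the brightest element.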
The document discusses different types of mobile displays, including CSTN, resistive, capacitive, TFD, TFT, OLED, AMOLED, Retina Display, Gorilla Glass, and IPS. It also covers various screen resolutions for mobile phones such as QVGA, wVGA, HVGA, nHD, WVGA, FWVGA, qHD, DVGA, XGA, and HD. The types of displays discussed are differentiated based on technology used, viewing quality parameters like viewing angles, and factors such as cost, power consumption and durability. Common screen resolutions for mobile phones ranging from lower to higher are also outlined.
The document discusses various types of output devices for virtual environments, including graphics displays, 3D audio hardware, haptics interfaces, and potential future interfaces for smell. It focuses on graphics displays, describing stereo viewing, personal displays like head-mounted displays (HMDs), and large volume displays. Key aspects of HMDs discussed include field of view, resolution, weight, and price points needed for a good user experience. Specific HMD models, including organic LED models, are highlighted. Floor-supported and auto-stereoscopic displays are also summarized.
Mobile phone displays come in several types, but the most common are LCD, OLED, AMOLED, and SUPER AMOLED. LCD screens use liquid crystals to block or allow light to pass through and create images, but do not emit their own light. They tend to have poorer viewing angles compared to other types. OLED screens can emit their own light without needing a backlight, allowing for thinner, sharper, and more power efficient displays compared to LCD. AMOLED is a type of OLED screen that uses an active matrix to control individual pixels for improved image quality and refresh rates.
Graphics display interfaces are a type of output device in virtual reality, which is a growing sector. They include HMDs, HSDs, and CAVE simulations, which are now used in many applications.
Fingerprint scanners work by capturing an image of a fingerprint using either an optical sensor like in digital cameras or a capacitive sensor using electrical current. The scanner then compares the captured fingerprint pattern to stored fingerprint images to determine if there is a match. Capacitive scanners have an advantage over optical scanners in that they require the actual fingerprint ridges and valleys rather than just a light/dark pattern, making them harder to fool.
3D films and TVs provide depth perception by showing two slightly different perspectives that are interpreted by the brain as a 3D image. There are several technologies for producing and displaying 3D content, including anaglyph, polarization, and interference filtering systems. 3D TVs use technologies like eclipse filtering glasses or lenticular displays to show different images to each eye and create the 3D effect without glasses in some cases. Broadcasting 3D content involves generating, compressing, transmitting, and displaying the left and right perspectives in an alternating sequence.
This document describes a major project to create a persistence of vision (POV) display using an array of LEDs rotated at high speed by a motor. The project will use a microcontroller to control an LED driver chip and synchronize the LEDs to display 2D or 3D messages. Hardware components include an ATmega644 microcontroller, motor/driver system, sensor circuit, LED driver chip, and LED array. Software includes Keil for programming and Proteus for simulation. The POV display can be used for advertising, education, entertainment and animation by taking advantage of the visual persistence of the human eye.
Fingerprint scanners have become more common for security and access. They work by capturing an image of a fingerprint's unique ridge and valley patterns using either optical or capacitive sensors. The scanner then analyzes and compares these minutiae points to fingerprints stored in its database to authenticate identity. While fingerprint authentication has advantages over traditional passwords, there are also privacy concerns since fingerprints cannot be changed if compromised.
Three key technologies for 3D TV displays include glasses-based methods like anaglyph glasses using red-blue lenses or polarized glasses, autostereoscopic displays without glasses using lenticular lenses or a parallax barrier to direct images to each eye, and active shutter glasses that alternate frames. The architecture of a 3D TV involves transmitting left and right eye views through technologies like gigabit Ethernet and displaying them using one of these 3D presentation methods. Applications include video games, TV and other media while advantages are a richer experience over 2D TV and disadvantages include the need for special glasses with some methods.
Hand Gesture Recognition Based on Shape Parameters (Nithinkumar P)
Hi guys,
I am sharing a new link for the code & project report. Hope it helps you in your academics. Contact me if you need any help.
https://drive.google.com/drive/folders/1H0p852jfoyQuFig_IoMyVVK-U5o18Mxh?usp=sharing
A real-time system for hand gesture recognition based on detecting meaningful shape-based features such as orientation, centre of mass (centroid), the raised or folded status of the fingers and thumb, and their respective locations in the image.
The algorithm is implemented in MATLAB v7.10.
These hand gestures are used for:
1. Sign language recognition
2. Human-machine interaction.
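The shape features listed above can be illustrated with a toy extractor on a binary hand mask: the centroid is the mean of the white pixels, and a crude raised-finger count is the number of white runs in the topmost non-empty row. The mask and the run-counting heuristic are invented for illustration; the report's actual feature extraction is more elaborate:

```python
def hand_features(mask):
    """Toy feature extraction from a binary hand mask: centroid
    (centre of mass) and a crude raised-finger count, taken as the
    number of white runs in the topmost non-empty row."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    cx = sum(x for x, y in pts) / len(pts)
    cy = sum(y for x, y in pts) / len(pts)
    top = min(y for x, y in pts)          # topmost row containing the hand
    row = mask[top]
    # Count transitions from background to foreground along that row.
    runs = sum(1 for x, v in enumerate(row) if v and (x == 0 or not row[x - 1]))
    return (cx, cy), runs

mask = [[0, 1, 0, 1, 0],   # two fingertip runs
        [0, 1, 1, 1, 0],
        [1, 1, 1, 1, 1]]
print(hand_features(mask))  # ((2.0, 1.3), 2)
```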
ATmega32 Controlled "Persistence of Vision" Display (Uday Wankar)
This paper describes the design and fabrication of a display based on persistence of vision (POV), with the objective of creating a virtual display in the air. A POV display composes an image by showing one spatial portion at a time in rapid succession (for example, one column of pixels every few milliseconds). A two-dimensional POV display is often realized by rapidly moving a single row of LEDs along a linear or circular path; the viewer perceives the image as a whole as long as the entire path is completed within the visual persistence time of the human eye, which often also creates the illusion of the image floating in mid-air. Building the project requires only a small 40-pin microcontroller, a position encoder, and SMD LEDs.
Easy presentation style, always a new and interesting topic. There are many types of gesture recognition. According to the Markets and Markets analysis, the growth of gesture recognition is going to be huge, so we have a big opportunity to play in this technology.
Fingerprint scanners work by obtaining an image of a fingerprint and analyzing the unique pattern of ridges and valleys. There are two main types of scanners - optical scanners use cameras to take a picture of the print, while capacitive scanners use electrical current to sense the print's shape. The scanner software then analyzes features called minutiae (ridge endings and bifurcations) to match fingerprints by measuring the relative positions of multiple minutiae. Fingerprint analysis is useful for security and identification because each print is unique, cannot be forgotten or easily stolen like passwords, and is very difficult to forge.
Modern displays have evolved significantly over time. Early displays included CRTs (cathode ray tubes), which were large, power-intensive, and had limited resolution. TFT (thin film transistor) LCD displays then became popular, using thin film transistors to control each pixel for faster refresh rates. LCDs are now ubiquitous but have limited viewing angles. LED displays then emerged, using arrays of light-emitting diodes for backlighting to provide brighter images with better color and contrast than LCDs. The latest technology is OLED (organic light-emitting diode) displays, where each pixel internally emits its own light for perfect black levels and wider viewing angles compared to LCDs. Display technologies continue advancing toward thinner and more efficient designs.
IRJET: Mouse on Finger Tips using ML and AI (IRJET Journal)
This document describes a system that uses computer vision and machine learning to allow users to control a computer mouse using only their fingertips. The system tracks colored fingertips using a webcam and processes the video frames in real-time to detect and track the fingertips. It then maps the fingertip movements to mouse movements and gestures to control clicking, scrolling, and other mouse functions without any physical contact with the computer. The system was created using Python and aims to provide a more natural and cost-effective way for human-computer interaction through a virtual mouse controlled by hand gestures.
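Once a fingertip has been detected in the webcam frame, mapping it to a cursor position is a coordinate transform. A minimal sketch, with the mirroring behaviour and frame/screen sizes assumed rather than taken from the paper:

```python
def to_screen(cam_xy, cam_size, screen_size, mirror=True):
    """Map a detected fingertip position in camera coordinates to
    screen coordinates, optionally mirroring x so that moving the
    hand right moves the cursor right in a front-facing webcam view."""
    (cx, cy), (cw, ch), (sw, sh) = cam_xy, cam_size, screen_size
    x = (cw - 1 - cx) if mirror else cx
    return round(x * sw / cw), round(cy * sh / ch)

# Fingertip at the centre of a 640x480 frame on a 1920x1080 screen:
print(to_screen((320, 240), (640, 480), (1920, 1080)))  # (957, 540)
```

In practice the raw fingertip positions are noisy, so systems like this usually smooth them (e.g. with an exponential moving average) before moving the cursor.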
Detection Hand Motion on Virtual Reality Mathematics Game with Accelerometer ... (TELKOMNIKA JOURNAL)
The Montessori method is a learning method that uses props. One development of such props is to use games as a medium of learning, for example through Virtual Reality (VR) technology. Using VR, players are brought into a virtual world as if they were in the real world. A weakness of VR games is their limited interaction with the outside world: interaction uses only buttons and joysticks. In this paper, a flex sensor and an accelerometer sensor are used to detect hand movements for a VR mathematics game. The result is that VR games become more interactive and interesting with hand motion.
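A simple way an accelerometer can distinguish a deliberate hand movement from a resting hand is to check how far the acceleration magnitude deviates from gravity. The threshold and sample values below are illustrative, not taken from the paper:

```python
import math

def is_motion(ax, ay, az, threshold=2.0, gravity=9.81):
    """Flag a hand movement when the accelerometer magnitude (m/s^2)
    deviates from gravity by more than `threshold`."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - gravity) > threshold

print(is_motion(0.0, 0.0, 9.8))    # False: hand at rest
print(is_motion(5.0, 3.0, 12.0))   # True: a swing
```

Real gesture detection would look at the signal over a window of samples rather than a single reading, but the magnitude-vs-gravity test is the usual starting point.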
The kinetic installation is located on the staircase of Arenberg Castle in Belgium. It uses origami tessellations made from paper that are attached to motorized units. When a person moves on the stairs or interacts with the installation, the origami will open and close through passive and active interactions. This creates a game between the user and the installation. The hardware consists of motorized frames that control the movement of the origami through connections to sensors and microcontrollers.
The document summarizes a voice-controlled robot called Home Butler that is designed to assist handicapped individuals. The robot takes voice commands, locates requested objects using image processing and a database, navigates to the object using SLAM and LIDAR sensors, identifies the object with camera vision and image matching, grabs the object using sensors and motors, and returns it to the user by retracing its path. The robot integrates LabVIEW for voice decoding, MATLAB for image processing, and a Raspberry Pi operating system to run the integrated software and databases.
This document provides an overview of a computer graphics and visualization course. It includes links to two textbooks, definitions of key graphics concepts like raster, pixel, resolution and depth. It also covers different types of displays like CRT, flat panel displays, and emissive vs non-emissive displays. Specific display technologies like plasma panels, LCDs and graphics workstations are described. The document also discusses graphics input devices, graphics software, OpenGL and using graphics over networks.
Gestures are an important form of non-verbal communication that involve visible bodily motions. The document discusses the history and development of gesture recognition technologies, describing early data gloves and videoplace systems as well as current technologies like Cepal and ADITI that help people with disabilities control devices with gestures. It also outlines the key components of a gesture recognition system including modeling, analysis, and recognition of gestures and discusses classification methods like HMMs and MLPs. Applications discussed include virtual keyboards, navigaze, and Sixth Sense technology.
[CHI2016] GaussMarbles: Spherical Magnetic Tangibles for Interacting with Portable Physical Constraints (Rong-Hao Liang)
This work develops a system of spherical magnetic tangibles, GaussMarbles, that exploits the unique affordances of spherical tangibles for interacting with portable physical constraints. The proposed design of each magnetic sphere includes a magnetic polyhedron in the center. The magnetic polyhedron provides bi-polar magnetic fields, which are expanded in equal dihedral angles as robust features for tracking, allowing an analog Hall-sensor grid to resolve the near-surface 3D position accurately in real-time. Possible interactions between the magnetic spheres and portable physical constraints in various levels of embodiment were explored using several example applications.
[CHI2016] GaussRFID: Reinventing Physical Toys Using Magnetic RFID Development Kits (Rong-Hao Liang)
We present GaussRFID, a hybrid RFID and magnetic-field tag sensing system that supports interactivity when embedded in retrofitted or new physical objects. The system consists of two major components — GaussTag, a magnetic-RFID tag that is combined with a magnetic unit and an RFID tag, and GaussStage, which is a tag reader that is combined with an analog Hall-sensor grid and an RFID reader. A GaussStage recognizes the ID, 3D position, and partial 3D orientation of a GaussTag near the sensing platform, and provides simple interfaces for involving physical constraints, displays and actuators in tangible interaction designs. The results of a two-day toy-hacking workshop reveal that all six groups of 31 participants successfully modified physical toys to interact with computers using the GaussRFID system.
The document discusses various types of output devices for virtual environments, including graphics displays, 3D audio hardware, haptics interfaces, and potential future interfaces for smell. It focuses on graphics displays, describing stereo viewing, personal displays like head-mounted displays (HMDs), and large volume displays. Key aspects of HMDs discussed include field of view, resolution, weight, and price points needed for a good user experience. Specific HMD models, including organic LED models, are highlighted. Floor-supported and auto-stereoscopic displays are also summarized.
Mobile phone displays come in several types, but the most common are LCD, OLED, AMOLED, and SUPER AMOLED. LCD screens use liquid crystals to block or allow light to pass through and create images, but do not emit their own light. They tend to have poorer viewing angles compared to other types. OLED screens can emit their own light without needing a backlight, allowing for thinner, sharper, and more power efficient displays compared to LCD. AMOLED is a type of OLED screen that uses an active matrix to control individual pixels for improved image quality and refresh rates.
Graphics display interfaces are a type of output devices in virtual reality, which is a growing sector now.It includes HMD's, HSD's and CAVE Simulations, that are generally used in many applications now.
Fingerprint scanners work by capturing an image of a fingerprint using either an optical sensor like in digital cameras or a capacitive sensor using electrical current. The scanner then compares the captured fingerprint pattern to stored fingerprint images to determine if there is a match. Capacitive scanners have an advantage over optical scanners in that they require the actual fingerprint ridges and valleys rather than just a light/dark pattern, making them harder to fool.
3D films and TVs provide depth perception by showing two slightly different perspectives that are interpreted by the brain as a 3D image. There are several technologies for producing and displaying 3D content, including anaglyph, polarization, and interference filtering systems. 3D TVs use technologies like eclipse filtering glasses or lenticular displays to show different images to each eye and create the 3D effect without glasses in some cases. Broadcasting 3D content involves generating, compressing, transmitting, and displaying the left and right perspectives in an alternating sequence.
This document describes a major project to create a persistence of vision (POV) display using an array of LEDs rotated at high speed by a motor. The project will use a microcontroller to control an LED driver chip and synchronize the LEDs to display 2D or 3D messages. Hardware components include an ATmega644 microcontroller, motor/driver system, sensor circuit, LED driver chip, and LED array. Software includes Keil for programming and Proteus for simulation. The POV display can be used for advertising, education, entertainment and animation by taking advantage of the visual persistence of the human eye.
Fingerprint scanners have become more common for security and access. They work by capturing an image of a fingerprint's unique ridge and valley patterns using either optical or capacitive sensors. The scanner then analyzes and compares these minutiae points to fingerprints stored in its database to authenticate identity. While fingerprint authentication has advantages over traditional passwords, there are also privacy concerns since fingerprints cannot be changed if compromised.
Three key technologies for 3D TV displays include glasses-based methods like anaglyph glasses using red-blue lenses or polarized glasses, autostereoscopic displays without glasses using lenticular lenses or a parallax barrier to direct images to each eye, and active shutter glasses that alternate frames. The architecture of a 3D TV involves transmitting left and right eye views through technologies like gigabit Ethernet and displaying them using one of these 3D presentation methods. Applications include video games, TV and other media while advantages are a richer experience over 2D TV and disadvantages include the need for special glasses with some methods.
Hand Gesture Recognition Based on Shape ParametersNithinkumar P
Hi guys,
I am sharing a new link for code & project report. Hope it help you in your academics. Contact me if you need any help.
https://drive.google.com/drive/folders/1H0p852jfoyQuFig_IoMyVVK-U5o18Mxh?usp=sharing
A real time system for hand gesture recognition on the basis of detection of some meaningful shape based features like orientation, centre of mass (centroid), status of fingers and thumb in terms of raised or folded and their respective location in image.
Algorithm is implemented in Matlab v7.10
We use this hand gestures for
1. Sign Language Recognition
2. Human Machine Interaction.
ATmega32 Controlled “Persistence of Vision” DisplayUday Wankar
This paper explains the project which includes design and fabrication of a display based on Persistence of
vision. The objective of the project is to create virtual display in air. A class of display device described as
"POV" is one that composes an image by displaying one spatial portion at a time in rapid succession (for
example, one column of pixels every few milliseconds). A two-dimensional POV display is often accomplished
by means of rapidly moving a single row of LEDs along a linear or circular path. The effect is that the image is
perceived as a whole by the viewer as long as the entire path is completed during the visual persistence time of
the human eye. A further effect is often to give the illusion of the image floating in mid-air. For building this
project, requirement is just a small 40 pin microcontroller, a position encoder, and SMD LEDs.
Easy Presentation style. always new interesting topic . Gesture Recognition many types. According to the Markets and Markets analysis, the growth of gesture recognition is going to be huge. So, we have a huge opportunity to play in this technology.
Fingerprint scanners work by obtaining an image of a fingerprint and analyzing the unique pattern of ridges and valleys. There are two main types of scanners - optical scanners use cameras to take a picture of the print, while capacitive scanners use electrical current to sense the print's shape. The scanner software then analyzes features called minutiae (ridge endings and bifurcations) to match fingerprints by measuring the relative positions of multiple minutiae. Fingerprint analysis is useful for security and identification because each print is unique, cannot be forgotten or easily stolen like passwords, and is very difficult to forge.
Modern displays have evolved significantly over time. Early displays included CRTs (cathode ray tubes), which were large, power-intensive, and had limited resolution. TFT (thin film transistor) LCD displays then became popular, using thin film transistors to control each pixel for faster refresh rates. LCDs are now ubiquitous but have limited viewing angles. LED displays then emerged, using arrays of light-emitting diodes for backlighting to provide brighter images with better color and contrast than LCDs. The latest technology is OLED (organic light-emitting diode) displays, where each pixel internally emits its own light for perfect black levels and wider viewing angles compared to LCDs. Display technologies continue advancing toward thinner,
IRJET- Mouse on Finger Tips using ML and AIIRJET Journal
This document describes a system that uses computer vision and machine learning to allow users to control a computer mouse using only their fingertips. The system tracks colored fingertips using a webcam and processes the video frames in real-time to detect and track the fingertips. It then maps the fingertip movements to mouse movements and gestures to control clicking, scrolling, and other mouse functions without any physical contact with the computer. The system was created using Python and aims to provide a more natural and cost-effective way for human-computer interaction through a virtual mouse controlled by hand gestures.
Detection Hand Motion on Virtual Reality Mathematics Game with Accelerometer ...TELKOMNIKA JOURNAL
Montessori method is a learning method using props. One of the developments props is to use the game as a medium of learning. The examples Game media as learning is the use of Virtual Reality or VR Technology. By using the VR, players will be brought into the virtual world as if the player is in the real world. The weakness of the VR game is the limited interaction with the outside world. Interaction uses only buttons and joysticks. In this paper we use Flex sensor and accelerometer sensor to detect hand movements for VR mathematic game. The result is VR games are more interactive and interesting with hand motion.
The kinetic installation is located on the staircase of Arenberg Castle in Belgium. It uses origami tessellations made from paper that are attached to motorized units. When a person moves on the stairs or interacts with the installation, the origami will open and close through passive and active interactions. This creates a game between the user and the installation. The hardware consists of motorized frames that control the movement of the origami through connections to sensors and microcontrollers.
The document summarizes a voice-controlled robot called Home Butler that is designed to assist handicapped individuals. The robot takes voice commands, locates requested objects using image processing and a database, navigates to the object using SLAM and LIDAR sensors, identifies the object with camera vision and image matching, grabs the object using sensors and motors, and returns it to the user by retracing its path. The robot integrates LabVIEW for voice decoding, MATLAB for image processing, and a Raspberry Pi operating system to run the integrated software and databases.
This document provides an overview of a computer graphics and visualization course. It includes links to two textbooks, definitions of key graphics concepts like raster, pixel, resolution and depth. It also covers different types of displays like CRT, flat panel displays, and emissive vs non-emissive displays. Specific display technologies like plasma panels, LCDs and graphics workstations are described. The document also discusses graphics input devices, graphics software, OpenGL and using graphics over networks.
Gestures are an important form of non-verbal communication that involve visible bodily motions. The document discusses the history and development of gesture recognition technologies, describing early data gloves and videoplace systems as well as current technologies like Cepal and ADITI that help people with disabilities control devices with gestures. It also outlines the key components of a gesture recognition system including modeling, analysis, and recognition of gestures and discusses classification methods like HMMs and MLPs. Applications discussed include virtual keyboards, navigaze, and Sixth Sense technology.
[CHI2016] GaussMarbles: Spherical Magnetic Tangibles for Interacting with Por... (Rong-Hao Liang)
This work develops a system of spherical magnetic tangibles, GaussMarbles, that exploits the unique affordances of spherical tangibles for interacting with portable physical constraints. The proposed design of each magnetic sphere includes a magnetic polyhedron in the center. The magnetic polyhedron provides bi-polar magnetic fields, which are expanded in equal dihedral angles as robust features for tracking, allowing an analog Hall-sensor grid to resolve the near-surface 3D position accurately in real-time. Possible interactions between the magnetic spheres and portable physical constraints in various levels of embodiment were explored using several example applications.
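One plausible building block of such Hall-sensor-grid tracking (a sketch only, not the GaussMarbles tracker, which resolves full 3D position from the bi-polar field) is locating a magnet in the plane of the grid: find the strongest sensor response, then refine it to sub-cell resolution with an intensity-weighted centroid. Grid layout and units are assumptions.

```python
def locate(grid):
    """grid: 2D list of field magnitudes, one per Hall sensor.
    Returns an interpolated (row, col) position as floats."""
    rows, cols = len(grid), len(grid[0])
    # Coarse search: the cell with the strongest response.
    pr, pc = max(((r, c) for r in range(rows) for c in range(cols)),
                 key=lambda rc: grid[rc[0]][rc[1]])
    # Refine: intensity-weighted centroid over the 3x3 neighborhood.
    wsum = rsum = csum = 0.0
    for r in range(max(0, pr - 1), min(rows, pr + 2)):
        for c in range(max(0, pc - 1), min(cols, pc + 2)):
            w = grid[r][c]
            wsum += w
            rsum += w * r
            csum += w * c
    return rsum / wsum, csum / wsum
```

When the field leaks equally into two adjacent cells, the centroid lands halfway between them, which is exactly the sub-cell behavior the refinement is for.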
[CHI2016] GaussRFID: Reinventing Physical Toys Using Magnetic RFID Developmen... (Rong-Hao Liang)
We present GaussRFID, a hybrid RFID and magnetic-field tag sensing system that supports interactivity when embedded in retrofitted or new physical objects. The system consists of two major components: GaussTag, a magnetic-RFID tag that combines a magnetic unit with an RFID tag, and GaussStage, a tag reader that combines an analog Hall-sensor grid with an RFID reader. A GaussStage recognizes the ID, 3D position, and partial 3D orientation of a GaussTag near the sensing platform, and provides simple interfaces for involving physical constraints, displays, and actuators in tangible interaction designs. The results of a two-day toy-hacking workshop reveal that all six groups of 31 participants successfully modified physical toys to interact with computers using the GaussRFID system.
This document describes a measurement project to determine flow rates using different shaped weirs and notches. The project uses a basic model of a tank and pipes to release water samples through notches of varying shapes. Flow rates are calculated for each shape by collecting and measuring the output water over a set period of time. Comparing the flow rates helps understand how notch shape impacts discharge. The goal is to gain knowledge about weirs and notches that is important for applications like flood prediction and water resource management.
ACM UIST 2014: GaussStones: Shielded Magnetic Tangibles for Multi-Token Inter... (Rong-Hao Liang)
GaussStones: Shielded Magnetic Tangibles for Multi-Token Interactions on Portable Displays
This work presents GaussStones, a system of shielded magnetic tangibles designed to support multi-token interactions on portable displays. Unlike prior work on sensing magnetic tangibles on portable displays, the proposed tangible design applies magnetic shielding using an inexpensive galvanized steel case, which eliminates interference between magnetic tangibles. An analog Hall-sensor grid can recognize the identity of each shielded magnetic unit, since each unit generates a magnetic field with a specific intensity distribution and/or polarization. Combining multiple units into a knob further allows additional identities and their orientations to be resolved. These features improve support for applications involving multiple tokens, so prevalent portable displays can serve as generic platforms for tangible interaction design.
Project Page of GaussStones:
http://www.cmlab.csie.ntu.edu.tw/~howieliang/GaussStones.html
ACM CHI 2014 - GaussBricks: Magnetic Building Blocks for Constructive Tangibl... (Rong-Hao Liang)
Rong-Hao Liang, Liwei Chan, Hung-Yu Tseng, Han-Chih Kuo, Da-Yuan Huang, De-Nian Yang, and Bing-Yu Chen.
"GaussBricks: Magnetic Building Blocks for Constructive Tangible Interactions on Portable Displays", In Proceedings of ACM CHI 2014, pp.3153--3162.
[CHI 2014 Best Paper Honorable Mention Award]
[CHI 2014 People's Choice Best Talk Award]
Project page: http://graphics.csie.ntu.edu.tw/~howieliang/GaussBricks.html
--
This work describes a novel building block system for tangible interaction design, GaussBricks, which enables real-time constructive tangible interactions on portable displays. Given its simplicity, the mechanical design of the magnetic building blocks facilitates the construction of configurable forms. The forms constructed from the magnetic building blocks, which are connected by magnetic joints, allow users to manipulate them stably with various elastic force-feedback mechanisms. With an analog Hall-sensor grid mounted on its back, a portable display determines the geometrical configuration and detects various user interactions in real time. This work also introduces several methods to enable shape changing, multi-touch input, and display capabilities in the construction. The proposed building block system enriches how individuals physically interact with portable displays.
--
Project Gauss: Portable and Occlusion-Free Magnetic Object Tracking Using Analog Hall-Sensor Grid http://www.cmlab.csie.ntu.edu.tw/~howieliang/HCIProjects/projectGauss.html
ACM CHI 2013 - GaussBits: Magnetic Tangible Bits for Portable and Occlusion-F... (Rong-Hao Liang)
We present GaussBits, a system of passive magnetic tangible designs that enables 3D tangible interactions in the near-surface space of portable displays. When a thin magnetic sensor grid is attached to the back of the display, the 3D position and partial 3D orientation of the GaussBits can be resolved by the proposed bi-polar magnetic field tracking technique. This portable platform can therefore enrich tangible interactions by extending the design space to the near-surface space. Since non-ferrous materials, such as the user's hand, do not occlude the magnetic field, interaction designers can freely incorporate a magnetic unit into an appropriately shaped non-ferrous object to exploit metaphors of real-world tasks, and users can freely manipulate the GaussBits by hand or with other non-ferrous tools without causing interference. The presented example applications and the feedback collected from an explorative workshop revealed that this new approach is widely applicable.
Project page: http://graphics.csie.ntu.edu.tw/~howieliang/GaussBits.html
Rong-Hao Liang, Kai-Yin Cheng, Liwei Chan, Chuan-Xhyuan Peng, Mike Y. Chen, Rung-Huei Liang, De-Nian Yang, and Bing-Yu Chen,
"GaussBits: Magnetic Tangible Bits for Portable and Occlusion-Free Near-Surface Interactions", In Proceedings of ACM CHI 2013, pp.1391--1400.
Project pages:
GaussBits: http://graphics.csie.ntu.edu.tw/~howieliang/GaussBits.html
Project Gauss: Portable Object Tracking Using Magnetic Sensor Grid http://www.cmlab.csie.ntu.edu.tw/~howieliang/HCIProjects/projectGauss.html
Invision Biomedical's ophthalmic lens design and prototyping facility uses cryogenic lathing and milling technology to produce intraocular and contact lenses in a 5-step process. The process begins with circular acrylic blanks that are frozen to mandrels before being cut into lenses using a precision lathe and mill. The blanks have their optics and haptics shaped in sub-zero temperatures and are then polished and packaged as finished intraocular lenses.
Mirrors and lenses interact with light in predictable ways. Plane mirrors reflect light at an equal angle, forming a virtual upright image. Concave mirrors either form a real upside-down image behind the focal point or a virtual upright image in front. Convex mirrors always form a virtual smaller image. Refraction bends light when it passes through different materials, with more bending at higher indexes of refraction. Concave lenses diverge light rays, while convex lenses converge light to a focal point, forming either real or virtual images.
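The image behavior summarized above is governed by the Gaussian mirror/thin-lens relation, 1/f = 1/d_o + 1/d_i. A small helper makes the cases easy to verify (a generic optics sketch; sign conventions follow the usual introductory treatment, where positive d_i means a real image and negative d_i a virtual one):

```python
def image_distance(f, d_o):
    """Image distance d_i from focal length f and object distance d_o,
    via 1/d_i = 1/f - 1/d_o."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    """Lateral magnification m = -d_i / d_o (negative means inverted)."""
    return -d_i / d_o

# A convex lens (f = 10 cm) with an object at 30 cm forms a real,
# inverted, half-size image at about 15 cm:
d_i = image_distance(10.0, 30.0)   # about 15.0
m = magnification(30.0, d_i)       # about -0.5

# An object inside the focal point (5 cm) gives a virtual, upright,
# magnified image (d_i about -10.0, m about 2.0), as with a magnifier.
```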
Various laser lenses have been introduced following the Goldmann 3-mirror and Goldmann fundus contact lenses for retinal photocoagulation.
Described below are some of the time-tested lenses in widespread use. Precise knowledge of these lenses is necessary for safe retinal photocoagulation.
This document provides an overview of lasers and their uses in ophthalmology. It begins with definitions of laser terminology and physics. It then discusses different types of lasers classified by medium (solid state, gas, etc.) and wavelength used. Applications of lasers described include refractive surgery, glaucoma treatment, retinal photocoagulation, and ocular oncology. Specific laser procedures and their parameters are outlined. Complications of laser treatment and safety considerations are also reviewed.
Hypertensive retinopathy is caused by high blood pressure and damages the small blood vessels in the retina. It is diagnosed through an eye exam where signs include narrowed retinal arteries, arteriovenous nicking, and cotton wool spots. Left untreated, it can progress to vision loss from hemorrhages, fluid buildup, or optic nerve damage. Treatment involves controlling the underlying hypertension through medication to prevent further eye and health issues.
In the simplest terms, eye tracking is the measurement of eye activity. Where do we look? When do we blink? How does the pupil react to different stimuli? The concept is basic, but the process and interpretation can be quite complex. There are many different methods of exploring eye data. The most common is to analyze the visual path of one or more participants across an interface such as a computer screen. Each eye data observation is translated into a set of pixel coordinates.
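That translation step can be sketched in a few lines. The screen resolution, the normalized 0..1 gaze format (which many eye trackers report), and the clamping of out-of-range estimates are assumptions for illustration:

```python
def gaze_to_pixels(gx, gy, width=1920, height=1080):
    """Map a normalized gaze estimate (gx, gy in 0..1) to integer
    pixel coordinates on the interface, clamping out-of-range values."""
    px = min(max(gx, 0.0), 1.0) * (width - 1)
    py = min(max(gy, 0.0), 1.0) * (height - 1)
    return round(px), round(py)

# A gaze estimate at the middle of both axes maps to the screen center:
assert gaze_to_pixels(0.5, 0.5) == (960, 540)
```

The resulting pixel sequence is what downstream analyses (scan paths, heatmaps, areas of interest) operate on.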
IRJET - Smart Blind Stick using Image Processing (IRJET Journal)
1. The document describes a proposed smart blind stick system that uses ultrasonic sensors, a camera, and a Raspberry Pi to help visually impaired people avoid obstacles and navigate independently.
2. The system would use ultrasonic sensors to detect obstacles and a camera to capture images of obstacles. The images would be processed using CNN and RNN models to generate captions, which would then be converted to speech for the user.
3. The proposed system aims to help the blind community travel independently by detecting obstacles using sensors, identifying obstacles using image processing and captioning, and informing users of obstacles and the environment through audio.
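The ranging arithmetic behind the ultrasonic step above is simple: a sensor of the HC-SR04 style (a common choice; the paper's exact sensor is not specified here) reports the round-trip time of an ultrasonic pulse, and distance follows from the speed of sound. The warning threshold below is an assumption:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_to_distance(echo_time_s):
    """Distance in meters; the pulse travels out and back, so halve it."""
    return echo_time_s * SPEED_OF_SOUND / 2.0

def obstacle_alert(echo_time_s, threshold_m=1.0):
    """True when the detected obstacle is within the warning distance."""
    return echo_to_distance(echo_time_s) < threshold_m

# A 2 ms echo corresponds to an obstacle about 0.343 m away:
d = echo_to_distance(0.002)
```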
Augmented reality is a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.
This document discusses recent advances in augmented reality applications. It provides an overview of augmented reality, including definitions and how it differs from virtual reality. The document then discusses several types of augmented reality applications including education, medical, gaming, navigation, construction, and military. It also summarizes the results of a survey conducted by NASA on the impact of instructional medium on task completion times. Finally, it covers limitations of augmented reality technology and barriers to widespread adoption, as well as the future potential of augmented reality.
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil... (Kalle)
Intuitive user interfaces for PCs and PDAs, such as pen displays and touch panels, have become widely used in recent years. In this study, we developed an eye-tracking pen display based on the stereo bright pupil technique. First, the bright pupil camera was developed by examining the arrangement of cameras and LEDs for the pen display. Next, a gaze estimation method was proposed for the stereo bright pupil camera, which enables one-point calibration. Then, a prototype of the eye-tracking pen display was developed. The accuracy of the system was approximately 0.7° on average, which is sufficient for human interaction support. We also developed an eye-tracking tabletop as an application of the proposed stereo bright pupil technique.
A presentation on augmented reality. It consists of an introduction, working principles, components of AR, applications, limitations, recent developments, and a conclusion. All the best for your presentation.
This document discusses a thesis on using augmented reality for full body immersion. It describes using a motion capture suit and video see-through technologies with the Oculus Rift head mounted display. The thesis aims to achieve realistic interactions between the user and virtual characters through body movements and gestures controlled in the augmented reality scene. It plans to evaluate the system using surveys after subjects interact with a virtual agent scenario while wearing the motion capture suit and Oculus Rift.
Eye tracking technology allows users to control devices with their eyes. It works by tracking the movement of the user's eyes using infrared light and cameras. The technology measures the point of gaze and motion of the eyes. It is being used in applications like assistive technologies, video games, and marketing research. In the future, eye tracking may allow new methods of human-computer interaction and be integrated into more devices.
IRJET- Object Recognization from Structural Information using Perception (IRJET Journal)
This document summarizes research on object recognition from images using machine perception techniques. It discusses how machine perception works by using sensor input like cameras to deduce aspects of the world. It then describes various methods researchers are using for object recognition, including extracting features from images, training classifiers to recognize objects, and reconstructing 3D models of objects and scenes from 2D images. Key techniques discussed include using histograms of oriented gradients to recognize pedestrians, exploiting motion parallax and binocular stereopsis to derive depth from multiple images, and analyzing texture, shading and other visual cues to help reconstruct 3D worlds from 2D inputs.
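For the binocular-stereopsis cue mentioned above, depth recovery from a calibrated, rectified stereo pair reduces to Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of a matched feature. A minimal sketch with illustrative values (not taken from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a scene point from its disparity in a rectified
    stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# f = 700 px, baseline 0.1 m, disparity 35 px: the point is about 2 m away.
z = depth_from_disparity(700.0, 0.1, 35.0)
```

Nearby objects produce large disparities and distant ones small disparities, which is why depth resolution degrades with range.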
The document discusses screenless display technology, including its origins, types (such as Google Glass and SixthSense), components, advantages, and effects. Screenless displays transmit visual information without the use of a screen through methods like retinal projection, projected air interfaces, and direct brain interfaces. This technology has benefits like higher resolution and portability compared to traditional screens, and could improve access to information for visually impaired individuals. However, challenges remain regarding device costs, dependence on hardware, and potential issues from component failures.
A seminar presentation on Sixth Sense Technology, submitted in partial fulfillment of the award of the degree of Bachelor of Technology in Electronics & Communication Engineering.
IRJET- Hand Movement Recognition for a Speech Impaired Person (IRJET Journal)
This document describes a system to recognize hand gestures from a speech-impaired person and convert them to speech using a flex sensor glove and microcontroller. The system uses flex sensors attached to a glove to detect hand movements and gestures. The microcontroller matches the gestures to a database of templates and outputs the corresponding speech signal through a speaker. This allows speech-impaired individuals to communicate through natural hand gestures that are translated to audio speech in real-time. The system aims to help overcome communication barriers for those unable to speak.
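The template-matching step can be sketched as nearest-neighbor matching over flex-sensor vectors, one reading per finger. The gesture labels, five-sensor layout, and distance threshold below are invented for illustration; the paper's actual template database is not reproduced here.

```python
import math

# Hypothetical templates: one flex reading (0 = fully bent, 1 = straight)
# per finger, thumb first.
TEMPLATES = {
    "hello": [0.9, 0.9, 0.9, 0.9, 0.9],   # open hand
    "yes":   [0.1, 0.1, 0.1, 0.1, 0.1],   # closed fist
    "water": [0.9, 0.1, 0.1, 0.1, 0.1],   # thumb extended
}

def match_gesture(reading, max_dist=0.5):
    """Return the label of the nearest template, or None when no
    template is within max_dist of the live reading."""
    best, best_d = None, float("inf")
    for label, tpl in TEMPLATES.items():
        d = math.dist(reading, tpl)
        if d < best_d:
            best, best_d = label, d
    return best if best_d <= max_dist else None
```

On a match, the microcontroller would trigger the stored speech clip for that label; the threshold rejects ambiguous hand poses.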
The document describes a smart cap system to help blind people navigate independently. The cap uses a camera to capture the user's surroundings, detects objects using TensorFlow and describes the scene to the user via earphones. It analyzes frames using CNN models and a text-to-speech synthesizer. The system aims to boost confidence for blind users to move freely and identify objects like fruits and vegetables. It provides real-time navigation and notifications of obstacles while converting text to speech. The researchers believe this could help the 285 million visually impaired people live independently.
1) The document proposes a method for gesture detection using a virtual surface detected by a webcam or front-facing camera on a laptop or smartphone.
2) By tracking the number and position of pixels representing an object's shape at different distances from the camera, the computer can detect movements of the object maintaining a constant distance as gestures on a virtual surface or plane.
3) This technique aims to enable gesture control of computer functions like mouse movement or app launching without requiring specialized cameras, as a cheaper and more portable alternative to physical input devices.
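The constant-distance criterion in point 2 can be sketched as a check that the tracked object's apparent pixel area stays within a tolerance of its initial value, since apparent area falls off with distance from the camera. The tolerance value is an assumption:

```python
def on_virtual_surface(observations, tolerance=0.15):
    """observations: list of (x, y, area) samples of the tracked object.
    Returns True when the area stays within +/- tolerance (fractional)
    of the first sample, i.e. the object kept a roughly constant
    distance from the camera and can be treated as stroking a virtual
    plane."""
    if not observations:
        return False
    ref = observations[0][2]
    return all(abs(a - ref) <= tolerance * ref for _, _, a in observations)
```

When the check holds, the (x, y) trajectory of the samples can be interpreted as a gesture stroke on the virtual surface.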
This document proposes an eye tracking interpretation system to help paralyzed or physically disabled people communicate. It uses a Raspberry Pi 2 with a CMOS camera and IR sensor to track eye movements via the corneal reflection and pupil center method. The system analyzes eye-related measurements and potentially other physiological signals to interpret the user's intentions and provide a natural interface without need for physical input devices. It was designed to enable communication through eye movements alone.
Paulin Hansen. 2011. Gaze interaction from bed (mrgazer)
This document describes a low-cost gaze tracking system designed for bedbound people. The system uses off-the-shelf hardware including a video camera, projector, and computer running open-source gaze tracking software. A large image is projected on a wall in front of the bed to allow visibility for others in the room while freeing up space around the bedridden user. An experiment tested the system's accuracy and precision with 12 subjects in both seated and lying positions. Gaze tracking was found to be most accurate and precise when subjects were seated versus lying down, and in the bottom half of the projected image versus the top half. The system achieved sufficient accuracy for basic gaze-based interaction applications.
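The accuracy and precision measures tested in such experiments are commonly computed as, respectively, the mean offset of gaze samples from a known target and the RMS of sample-to-sample jitter. A generic sketch in pixel units (not the paper's code):

```python
import math

def accuracy(samples, target):
    """Mean Euclidean distance from each gaze sample to the known
    target position (lower is more accurate)."""
    return sum(math.dist(s, target) for s in samples) / len(samples)

def precision_rms(samples):
    """RMS of successive inter-sample distances, a measure of spatial
    jitter (lower is more precise)."""
    diffs = [math.dist(a, b) for a, b in zip(samples, samples[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Dividing these pixel values by the viewing distance (in pixels-per-degree terms) converts them to the degrees-of-visual-angle figures usually reported.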
Eye Gaze Tracking With a Web Camera in a Desktop Environment (1crore projects)
The document discusses the Blue Eyes technology, which aims to develop computers that can understand users' emotions, identity, and presence through techniques like facial recognition and speech recognition. The technology uses non-obtrusive sensing methods to gather physiological data from users to determine their emotional states. This would allow computers to interact more naturally with humans. Experimental results showed that measures of skin conductivity, heart rate, finger temperature, and mouse movements can reliably predict a user's emotional state. Future work aims to improve these techniques with smaller, less intrusive sensors.
This document provides an overview of augmented reality (AR) including:
- A definition of AR as overlaying digital information on the real world
- A brief history of AR and comparison to virtual reality
- Current applications of AR in areas like mobile devices, automotive repair, and medical procedures
- Future possibilities for AR including use in contact lenses and advanced head-mounted displays
- A demonstration of an AR product catalog and conclusions about the technology's potential growth.
Similar to CHI'15 - WonderLens: Optical Lenses and Mirrors for Tangible Interactions on Printed Paper (20)
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered:
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Taking AI to the Next Level in Manufacturing.pdf - ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
CHI'15 - WonderLens: Optical Lenses and Mirrors for Tangible Interactions on Printed Paper
1. WonderLens: Optical Lenses and Mirrors
for Tangible Interactions on Printed Paper
Rong-Hao Liang, Chao Shen, Yu-Chien Chan, Guan-Ting Chou,
Liwei Chan, De-Nian Yang, Mike Y. Chen, and Bing-Yu Chen
National Taiwan University and Academia Sinica
This is Rong-Hao from National Taiwan University.
Today we are glad to talk about WonderLens,
a system of optical lenses and mirrors for tangible interactions on printed paper.
4. Even so, paper still plays an important role today,
because paper is more comfortable to read and play with than electronic displays.
Paper also affords natural interactions, because we already know how to use it.
5. Printed content is not interactive
However, paper has a limitation: the content printed on it is not interactive.
When we interact with the printed content,
it does not change its state,
so the interaction stops there.
6. The MagicBook
[Billinghurst et al. 2001]
Handheld Displays or HUDs
Adding a Visual Display
To add interactivity to paper, many researchers augmented it with a visual display,
such as a handheld display or a head-mounted display.
7. The Digital Desk
[Wellner 1993]
Tabletop Pro-Cam Modules
Adding a Visual Display
Others mounted a projector-camera module above a tabletop to augment the paper,
8. MouseLight
[Song et al. 2010]
Handheld Pro-Cam Modules
Adding a Visual Display
and making projector-camera modules graspable.
9. HideOut
[Willis et al. 2013]
Handheld Pro-Cam Modules
Adding a Visual Display
These methods are effective at providing dynamic visual feedback.
10. HideOut [Willis et al. 2013]
MouseLight [Song et al. 2010]
The Digital Desk [Wellner 1993]
The MagicBook [Billinghurst et al. 2001]
External Devices and Notable Latency Reduce Immersion
However, these external devices may be bulky,
or may introduce additional latency that reduces the immersion of the interaction.
11. Listen Reader
[Back et al. 2001]
Hiding the RFID sensor behind paper
Increasing Immersion
To increase immersion,
designers have "hidden" the sensing mechanism behind the paper and used audio feedback instead,
such as placing an RFID sensor behind the paper with voice feedback.
12. JabberStamp
[Raffle et al. 2007]
Hiding the EMR sensor behind paper
Increasing Immersion
or placing an EMR sensor behind paper with voice feedback.
13. Provide only auditory feedback
Listen Reader [Back et al. 2001]
JabberStamp [Raffle et al. 2007]
However, the feedback modality is limited to audio.
14. Actually,
adding a visual layer to paper does not necessarily reduce immersion.
For example, a magnifying glass helps us see printed content in detail,
and provides a very immersive user experience.
15. Interaction Model of Lenses & Mirrors
[Diagram: the user performs spatial operations on a tool, and the tool feeds back optical illusions of the printed content on paper]
Tool: Spatial Operations ∼ Optical Illusions
When we move the magnifying glass,
it feeds the enlarged content back to us immediately,
so we feel the tool is useful.
Optical lenses and mirrors generally share this property.
18. [Figure: the five basic units: (a) tilted cylindrical lens, (b) convex lens, (c) concave lens, (d) prism, and (e) angled mirror]
5 basic lenses and mirrors
After analyzing their affordances and optical illusions,
we identified a set of five basic lenses and mirrors as the most useful.
35. [Diagram: the user operates the lenses and mirrors on the printed paper content and receives immediate visual & haptic feedback, closing the interaction loop]
The lenses and mirrors provide immediate visual and haptic feedback,
closing the interaction loop,
and therefore allowing for more interaction designs on printed paper.
36. [Diagram: a computer joins the loop to deliver dynamic information alongside the immediate visual & haptic feedback]
But to facilitate human-computer interaction,
such as guiding a multi-step process,
the system should allow for communicating dynamic information.
37. [Diagram: the computer senses the user's operations as input and displays output through the loop]
To do so,
the computer should sense the user's operations as input,
and display the output accordingly.
38. [Figure: magnet designs for the tilted cylinder, convex, concave, prism, and angled-mirror units, encoding pole (north/south), field strength (low/high), and pressed/released states]
On the input side, the lenses and mirrors must be sensed.
39. Adding magnets on the lenses and mirrors
Invisible magnetic fields get tracked above the paper
We chose to add magnets to the lenses and mirrors,
because the magnets are small,
and their invisible magnetic fields can be tracked on and above the paper.
40. Each magnetic unit is designed with a unique magnet pattern.
41. So the magnetic fields can represent each unit's ID and state.
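As a rough illustration of how a unique magnet pattern could be mapped back to a unit ID, the sketch below matches an observed field patch against stored per-unit templates by normalized correlation. The function names, template shapes, and matching scheme are all assumptions for illustration; the talk does not detail the actual recognizer.

```python
import numpy as np

def identify_unit(patch, templates):
    """Guess which unit produced `patch` by normalized correlation.

    `patch`: small 2-D array of signed field samples around a unit.
    `templates`: dict mapping unit IDs to arrays of the same shape.
    Returns (best_id, scores). Illustrative only; this is not the
    published WonderLens recognition pipeline.
    """
    def norm(a):
        v = np.asarray(a, float).ravel()
        v = v - v.mean()            # remove constant field offset
        n = np.linalg.norm(v)
        return v / n if n else v

    p = norm(patch)
    scores = {uid: float(p @ norm(t)) for uid, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical templates: a dipole-like pattern vs. a gradient pattern.
templates = {
    "convex": np.array([[1.0, 0.0], [0.0, -1.0]]),
    "prism":  np.array([[1.0, 1.0], [-1.0, -1.0]]),
}
best, scores = identify_unit(2.0 * templates["convex"], templates)
```

Mean-removal and normalization make the match invariant to offset and scale of the sensed field, which matters because field strength varies with the unit's height above the sensor.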
42. GaussSense
Analog Hall-Sensor Grid
[Liang et al. 2012]
Sense the Lenses and Mirrors
To sense the magnetic lenses and mirrors,
we use GaussSense, a thin-form analog Hall-sensor grid,
as the sensing platform.
43. Sense the Lenses and Mirrors
The GaussSense senses multiple magnetic lenses and mirrors through the paper.
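To make the sensing step concrete, here is a minimal sketch of how a magnet could be localized on a grid of analog Hall-sensor readings: threshold the field magnitudes and take a weighted centroid for sub-cell precision. The function name, threshold, and data layout are assumptions, not the actual GaussSense implementation.

```python
import numpy as np

def locate_magnet(field, threshold=0.2):
    """Locate one magnet on an analog Hall-sensor grid.

    `field`: 2-D array of signed per-sensor readings (the sign
    encodes which pole faces the grid). Returns (row, col) as a
    weighted centroid of cells whose magnitude exceeds `threshold`
    times the peak, or None if no magnet is present. Sketch only.
    """
    mag = np.abs(np.asarray(field, float))
    if mag.max() == 0:
        return None
    mask = mag > threshold * mag.max()
    rows, cols = np.nonzero(mask)
    w = mag[rows, cols]
    return (float(np.average(rows, weights=w)),
            float(np.average(cols, weights=w)))

# Synthetic 8x8 grid with a field bump centered at (3.5, 5.0).
yy, xx = np.mgrid[0:8, 0:8]
field = np.exp(-((yy - 3.5) ** 2 + (xx - 5.0) ** 2) / 2.0)
pos = locate_magnet(field)  # close to (3.5, 5.0)
```

Running the same detector per local peak extends this to multiple units on one sheet.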
46. When the piece of paper snaps onto the platform,
the platform recognizes its RFID tag and loads the content for interaction.
The magnets align the coordinates between the paper and the GaussSense.
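One way such alignment can be done is to treat the magnets as fiducials: given a few magnet positions in sensor coordinates and their known positions on the printed page, a least-squares affine transform maps between the two frames. This is a generic registration sketch under assumed names and units, not necessarily how WonderLens implements it.

```python
import numpy as np

def fit_affine(sensor_pts, paper_pts):
    """Least-squares 2-D affine map from sensor to paper coordinates.

    Both arguments are (N, 2) sequences of matching fiducial
    positions, N >= 3 and not all collinear. Returns a 2x3 matrix A
    such that paper ~= A @ [x, y, 1].
    """
    S = np.asarray(sensor_pts, float)
    P = np.asarray(paper_pts, float)
    X = np.hstack([S, np.ones((len(S), 1))])     # (N, 3) homogeneous
    sol, *_ = np.linalg.lstsq(X, P, rcond=None)  # (3, 2)
    return sol.T                                 # (2, 3)

def to_paper(A, pt):
    """Map one sensor-space point into paper coordinates."""
    return tuple(A @ np.array([pt[0], pt[1], 1.0]))

# Hypothetical fiducials: sensor cells to millimeters on the page,
# assuming 0.5 mm per cell and a (10, 20) mm offset.
sensor = [(0, 0), (4, 0), (0, 4), (4, 4)]
paper = [(10, 20), (12, 20), (10, 22), (12, 22)]
A = fit_affine(sensor, paper)
```

An affine fit also absorbs slight rotation or stretch when the sheet is not placed perfectly square on the platform.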
47. [Diagram: the complete loop: the user operates magnetic lenses and mirrors on printed RFID paper; GaussSense and an RFID reader sense the input; the computer provides output at three levels of embodiment: nearby (point light), environmental (ambient light, audio), and distant (remote display)]
Kenneth P. Fishkin. 2004. A taxonomy for and analysis of tangible interfaces.
Personal Ubiquitous Comput. 8, 5 (September 2004), 347-358.
Close the Interaction Loop
After sensing an operation,
the system can provide additional output at different levels of embodiment.
We show three examples to illustrate them.
51. Application #2: CPR-Learning
Nearby Point Light
stethoscope
Second, Output with a Nearby Point Light.
In the CPR learning program,
a user uses an LED-mounted flexible convex lens as a stethoscope.
52. Application #2: CPR-Learning
Nearby Point Light
stethoscope
When placing the stethoscope on the patient's heart,
the user sees the point light change and hears the heartbeat.
53. Application #2: CPR-Learning
Nearby Point Light
Then, the user presses and releases the soft convex lens at a constant rate to save the patient.
The blinking point light shows that it is being done well.
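The rate check behind that feedback can be sketched simply: timestamp each press of the lens, convert the mean interval to compressions per minute, and compare against the commonly cited guideline band of 100 to 120 compressions per minute. The function names and smoothing choice are hypothetical; the talk does not specify WonderLens's actual logic.

```python
def compression_rate(press_times):
    """Compressions per minute from press timestamps in seconds.

    Uses the mean interval between consecutive presses; a real
    system would likely use a sliding window. Hypothetical helper.
    """
    if len(press_times) < 2:
        return 0.0
    spans = [b - a for a, b in zip(press_times, press_times[1:])]
    return 60.0 / (sum(spans) / len(spans))

def rate_ok(rate, lo=100.0, hi=120.0):
    """True when the rate is inside the guideline band (100-120/min)."""
    return lo <= rate <= hi

presses = [0.0, 0.55, 1.10, 1.65, 2.20]  # one press every 0.55 s
rate = compression_rate(presses)         # about 109 per minute
```

The system would blink the point light while `rate_ok` holds and change the cue when the user drifts too slow or too fast.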
55. Application #3: Hide-and-Seek
Ambient Light + Remote Display
Third, Output with Ambient Light & Remote Display.
In the hide-and-seek game,
a user sets the time of the game by placing an angled mirror.
58. Application #3: Hide-and-Seek
Ambient Light + Remote Display
When a character is found,
the glowing lens prompts the user to check the remote display
to see who was found and what the character is doing.
59. Conclusion: Close the Interaction Loop
In conclusion, we introduced WonderLens,
a system of lenses and mirrors that augments tangible and embodied interactions with printed paper.
The double interaction loop allows for communicating dynamic information.
60. Future Work
3D optics printing: Printed Optics [Willis et al. 2012], Magic Lens [Willis et al. 2012]
Energy harvesting: Paper Generator [Karagozler et al. 2013]
Advanced Lens Fabrication and Visual Designs
Future work can consider advanced methods of lens fabrication and visual design,
or incorporating energy-harvesting mechanisms into paper.
61. Attachable Stylus Sensing Using Magnetic Sensor Grid
GaussSense: Analog Hall-Sensor Grid [Liang et al. UIST 2012]
GaussBits: Magnetic Tangible Bits [Liang et al. CHI 2013]
GaussBricks: Magnetic Building Blocks [Liang et al. CHI 2014]
GaussStones: Shielded Magnetic Tangibles [Liang et al. UIST 2014]
FingerPad: Wearable Private Input [Chan et al. UIST 2013]
WonderLens: TUI on Printed Paper [Liang et al. CHI 2015]
This project, WonderLens, shows another application of GaussSense:
enabling tangible interactions on printed paper.
For makers and researchers who want to try out the GaussSense technology,
we are happy to announce that...
62. $19 GaussSense is coming soon!
Subscribe for updates at http://gausstoys.com
The 19-dollar GaussSense is coming soon.
We have worked very hard on it, and we are really excited to see this finally happen.
If you are interested, please subscribe at GaussToys.com
so we can keep you in the loop.
Thanks for your attention; I'm happy to take your questions.