Jim Williamson's technical portfolio includes consumer and industrial optical product work. For consumer products, he invented an award-winning vertical scanner, non-imaging illuminator, waveguide scanhead, phone sensors, optical storage film, and a laser toy. For industrial products, he designed III-V detectors for instruments, an array detector and spectrometer, and a flat response detector for power meters. His experience spans optics, optoelectronics, testing, materials processing, research, design, development, applications, electronics, mechanical design, and programming.
2D, 2.5D, and 3D X-ray Inspection – What's a "D"? (Bill Cardoso)
In this presentation we clarify one of the most confusing topics in x-ray inspection: the difference between 2D, 2.5D, and 3D x-ray inspection. Check the slides to see if you can tell if an image is 2D, 2.5D, or 3D!
SIGGRAPH 2014 Course on Computational Cameras and Displays, Part 1 (Matthew O'Toole)
Recent advances in both computational photography and displays have given rise to a new generation of computational devices. Computational cameras and displays provide a visual experience that goes beyond the capabilities of traditional systems by adding computational power to optics, lights, and sensors. These devices are breaking new ground in the consumer market, including lightfield cameras that redefine our understanding of pictures (Lytro), displays for visualizing 3D/4D content without special eyewear (Nintendo 3DS), motion-sensing devices that use light coded in space or time to detect motion and position (Kinect, Leap Motion), and a movement toward ubiquitous computing with wearable cameras and displays (Google Glass).
This short (1.5 hour) course serves as an introduction to the key ideas and an overview of the latest work in computational cameras, displays, and light transport.
These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to analyze how light field technology is becoming economically feasible for an increasing number of applications. Light field cameras record the full 4D light field of a scene (the direction of incoming rays as well as their intensity) instead of a single 2D projection. This capability enables users to change the focus of pictures after they have been taken and to record 3D data more easily. These features are becoming economically feasible because of rapid improvements in camera chips and micro-lens arrays (an example of micro-electromechanical systems, MEMS). They offer alternative ways to do 3D sensing for automated vehicles and augmented reality, and can enable faster data collection with telescopes.
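The refocus-after-capture feature described here follows from a simple algorithm: shift each sub-aperture view in proportion to its angular offset, then average the views. A minimal NumPy sketch (the 4D array layout and the slope parameter are illustrative assumptions, not any real camera's data format):

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocus of a 4D light field L[u, v, y, x].

    `slope` selects the virtual focal plane: each angular sample
    (u, v) is shifted in proportion to its offset from the central
    view, then all views are averaged.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - U // 2)))
            dx = int(round(slope * (v - V // 2)))
            # Integer shift via np.roll (wraps at edges; fine for a sketch)
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy example: a 3x3 angular grid of identical 8x8 views.
lf = np.tile(np.arange(64, dtype=float).reshape(8, 8), (3, 3, 1, 1))
img = refocus(lf, slope=0.0)  # slope 0 simply averages the views
```

Sweeping `slope` moves the virtual focal plane through the scene, which is exactly the post-capture refocus the slides describe.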
We have built a camera that can look around corners and beyond the line of sight. The camera uses light that travels from the object to the camera indirectly, by reflecting off walls or other obstacles, to reconstruct a 3D shape.
Computational Displays in 4D, 6D, 8D
We have explored how light propagates from thin elements into a volume for viewing for both automultiscopic displays and holograms. In particular, devices that are typically connected with geometric optics, like parallax barriers, differ in treatment from those that obey physical optics, like holograms. However, the two concepts are often used to achieve the same effect of capturing or displaying a combination of spatial and angular information. Our work connects the two approaches under a general framework based in ray space, from which insights into applications and limitations of both parallax-based and holography-based systems are observed.
Both parallax barrier systems and practical holographic displays are limited in that they provide only horizontal parallax. Mathematically, this is equivalent to saying that they can always be expressed as a rank-1 matrix (i.e., a matrix whose columns are all scalar multiples of a single column). Knowledge of this mathematical limitation has helped us to explore the space of possibilities and extend the capabilities of current display types. In particular, we have designed a display that uses two LCD panels and an optimisation algorithm to produce a content-adaptive automultiscopic display (SIGGRAPH Asia 2010).
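The rank-1 limitation can be made concrete: a two-layer multiplicative display reproduces a light-field matrix T exactly only when T factors as an outer product of a front-layer pattern and a rear-layer pattern. A hedged NumPy sketch of that factorization using NMF-style multiplicative updates (this is a toy illustration, not the actual SIGGRAPH Asia 2010 algorithm, which optimizes time-multiplexed decompositions):

```python
import numpy as np

def rank1_layers(target, iters=200, eps=1e-9):
    """Approximate a non-negative light-field matrix T as an outer
    product f * r^T (front- and rear-layer transmittances).

    Multiplicative updates keep f and r non-negative, matching the
    physical constraint that LCD layers can only attenuate light.
    """
    m, n = target.shape
    f = np.ones(m)
    r = np.ones(n)
    for _ in range(iters):
        f *= (target @ r) / (np.outer(f, r) @ r + eps)
        r *= (target.T @ f) / (np.outer(r, f) @ f + eps)
    return f, r

# A rank-1 target is reproduced almost exactly; a higher-rank light
# field (full parallax) cannot be, which is the stated limitation.
T = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
f, r = rank1_layers(T)
err = np.linalg.norm(np.outer(f, r) - T)
```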
(Joint work with R Horstmeyer, Se Baek Oh, George Barbastathis, Doug Lanman, Matt Hirsch and Yunhee Kim) http://cameraculture.media.mit.edu
In other work we have developed a 6D optical system that responds to changes in viewpoint as well as changes in surrounding light. Our lenticular array alignment allows us to realize such a system as a passive setup, eliminating the need for electrical components. Unlike traditional 2D flat displays, our 6D displays discretize the incident light field and modulate 2D patterns in order to produce super-realistic (2D) images. By casting light at variable intensities and angles onto our 6D displays, we can produce multiple images as well as store greater information capacity on a single 2D film (SIGGRAPH 2008).
Ramesh Raskar joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging and human-computer interaction. Recent inventions include transient imaging to look around a corner, next generation CAT-Scan machine, imperceptible markers for motion capture (Prakash), long distance barcodes (Bokode), touch+hover 3D interaction displays (BiDi screen), low-cost eye care devices (Netra) and new theoretical models to augment light fields (ALF) to represent wave phenomena.
In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship. In 2010, he received the DARPA Young Faculty Award. He holds over 40 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on Computational Photography. http://raskar.info
Radiation Damage on Electronic Components (Bill Cardoso)
Ever wondered how radiation impacts the performance of electronic components? In this presentation we address this question by examining how radiographic systems, namely the TruView X-ray inspection systems currently in use worldwide, affect electronic components. In short, TruView systems don't have enough power to damage components.
Bloomberg recently reported that an attack by Chinese spies reached almost 30 U.S. companies, including Amazon and Apple, by compromising America’s technology supply chain, according to extensive interviews with government and corporate sources.
In 2016 we presented a solution to this threat at the SMTA Symposium on Counterfeit Parts and Materials. X-ray inspection and AI are the key technologies we deploy to make sure this threat is excluded from our supply chain.
We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor comprising tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, contrary to light field cameras today, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
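The "fast, linear processing" path can be pictured as ordinary least squares: the angle-sensitive measurements are modeled as y = A x for a known projection matrix A, and a low-resolution light field x is recovered by a linear solve. A toy sketch (the random A is a stand-in for the real ASP optical response, which this summary does not specify):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: 48 ASP measurements of a 32-element light field.
A = rng.normal(size=(48, 32))      # stand-in for the ASP response matrix
x_true = rng.uniform(size=32)      # unknown low-resolution light field
y = A @ x_true                     # single-shot sensor measurements

# Linear recovery: least-squares solve of y = A x.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
recovery_error = np.linalg.norm(x_hat - x_true)
```

The high-resolution path replaces this solve with sparsity-constrained optimization (e.g. an l1-penalized objective) over the very same measurements.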
A talk from the Develop Track at AWE USA 2018, the world's #1 XR conference and expo, in Santa Clara, California, May 30 – June 1, 2018.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Knife of 3D depth sensing
pmd's Time-of-Flight technology is integrated into two AR smartphones on the market, and pmd ToF is in four AR headsets. This talk shows what pmd has achieved, what developers can do with its 3D ToF technology, and why depth sensing is a secret sauce for AR, VR, and MR.
http://AugmentedWorldExpo.com
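For context on how continuous-wave ToF sensors such as pmd's derive depth: they measure the phase shift of amplitude-modulated light, giving d = c·φ / (4π·f_mod). A small illustrative calculation (the modulation frequency is chosen arbitrarily; pmd's actual frequencies are not given here):

```python
import math

C = 299_792_458.0          # speed of light, m/s

def tof_depth(phase_rad, f_mod_hz):
    """Depth for a measured phase shift at modulation frequency f_mod.

    The light travels out and back, hence the factor of 2 hidden in
    the 4*pi denominator: d = c * phi / (4 * pi * f_mod).
    """
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Phase wraps at 2*pi, so depth is unambiguous only up to c/(2f)."""
    return C / (2 * f_mod_hz)

# At 30 MHz modulation, a pi/2 phase shift corresponds to ~1.25 m,
# and the unambiguous range is ~5 m.
d = tof_depth(math.pi / 2, 30e6)
r = unambiguous_range(30e6)
```

The trade-off visible here is why ToF cameras often multiplex several modulation frequencies: higher f_mod improves depth resolution but shrinks the unambiguous range.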
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/onsemi/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Robin Jenkin, Director of Analytics, Algorithm and Module Development at ON Semiconductor, presents the "Image Sensors for Vision: Foundations and Trends" tutorial at the May 2016 Embedded Vision Summit.
Choosing the right sensor, lens and system configuration is crucial to setting you off in the right direction for your vision application. Jenkin examines fundamental considerations of image sensors that are important for embedded vision, such as pixel size, frame rate, rolling shutter vs. global shutter, back side illumination vs. front side illumination, color filter array choice and lighting, and quantum efficiency vs. crosstalk. He also explains chief ray angle, phase detect auto focus pixels, dynamic range, electron multiplied charge coupled devices, synchronization and noise, and concludes with observations on sensor trends.
Virtual Retinal Displays: Their Falling Cost and Rising Performance (Jeffrey Funk)
These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to analyze the increasing economic feasibility of virtual retinal displays. These displays focus light on a person's retina using LEDs, digital micro-mirrors, and lenses, all encased in a headset about the size of glasses. They enable high-resolution 3D video images with a large field of view that are far superior to existing displays. Rapid improvements in LEDs and digital micro-mirrors (one type of MEMS) are driving rapid reductions in cost and improvements in performance.
This portfolio is a collated set of works throughout my Bachelor of Architecture Degree at Deakin University, as well as other artistic skills I possess.
On August 11, 2015, I graduated with an Associate of Applied Science degree in the Engineering Design Technology program at Renton Technical College near Seattle. This portfolio presents several samples of the computer-aided drafting projects I have been working on since September 2014.
Defining the business value proposition of EA and PPM
Eliminating project risks
Accelerating project execution
Managing project and architecture inter-dependencies
Delivering realized value
Improving collaboration of Architecture and PMO
Here is a sampling of architectural works from my portfolio. These projects were designed when I was a student at the Universities of Colorado and Minnesota. Take a look and enjoy.
Applications of MEMS in Robotics and Bio-MEMS Using PSoC with Metal Detector ... (IOSR Journals)
Abstract: This project deals with an accelerometer-controlled robot with wireless image and voice transmission as well as a metal detector. The robot is a prototype for the "Path Finder". It is controlled by a PSoC device using a MEMS accelerometer remote, can move forward and in reverse using 60 RPM geared motors, and can take sharp turns to the left and right. A highly sensitive induction-type metal detector, designed on the Colpitts oscillator principle, is fixed to the robot, and a wireless camera with voice is interfaced to the kit. When the robot is moving over a surface, the system produces a beep when metal is detected; the beep is transmitted to a remote location along with images of the robot's surroundings, and the user can monitor the images and metal detection alarms on a television. Keywords: PSoC Designer 1.0, Keil C, PSoC device (CY8C29466), AT89S52.
David Prendergast - Innovative Physics - From AI to Fukushima - Isle of Wight Cafe Scientifique (onthewight)
David Prendergast from Shanklin-based Innovative Physics presented the talk 'From AI to Fukushima' to the Isle of Wight Cafe Scientifique on 21 Jan 2019.
Yole Intel RealSense 3D Camera Module and STM IR Laser 2015 Teardown and Reverse Costing Report (Yole Developpement)
An innovative 3D camera for facial analysis and hand/finger tracking, based on a resonant micro-mirror, an IR laser, and visible and near-infrared image sensors.
Intel RealSense is an intelligent 3D camera equipped with a system of three components: a conventional camera, a near infrared image sensor and an infrared laser projector. Infrared parts are used to calculate the distance between objects, but also to separate objects on different planes. They serve for facial recognition as well as gestures tracking.
The Intel 3D camera can scan the environment from 0.2 m to 1.2 m. The fixed-focal-length camera supports up to 1080p capture at 30 fps in RGB with a 77° FOV, and its lens has a built-in IR cut filter. The 640x480-pixel VGA camera reaches frame rates up to 60 fps with a 90° FOV, and its lens has an IR band-pass filter.
More information on that report at http://www.i-micronews.com/reports.html
Polymer light guides for slim sensor TV integration - presentation of TPVision during the Change2Micro event 'Innovation through polymer microtechnology'.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/building-the-eyes-of-a-vision-system-from-photons-to-bits-a-presentation-from-gopro/
Jon Stern, Director of Optical Systems at GoPro, presents the “Building the Eyes of a Vision System: From Photons to Bits” tutorial at the May 2021 Embedded Vision Summit.
In this tutorial, Stern presents a guide to the multidisciplinary science of building the eyes of a vision system. CMOS image sensors have been instrumental in lowering the barrier for embedding vision into systems. Their high degree of integration allows photons to be converted into bits with minimal support circuitry. Simple protocols and interfaces mean that companies can design camera-based systems with comparatively little specialist expertise.
To produce high-quality output, the image sensor and optics must be carefully co-optimized to fit the application. To assist with component selection and help avoid common pitfalls, Stern describes the key parameters and provides a practical guide to selecting both sensor and optics for a camera. He also provides an introduction to other hardware considerations and to correcting optical aberrations in the image processing pipeline.
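The photons-to-bits conversion Stern describes is governed by a simple noise model that drives much of sensor selection: signal electrons are photons times quantum efficiency, shot noise grows as the square root of the signal, and read noise adds in quadrature. A back-of-the-envelope sketch (all parameter values are illustrative, not from the talk):

```python
import math

def sensor_snr_db(photons, qe, read_noise_e):
    """Shot-noise-limited SNR of a single pixel.

    signal (e-)  = photons * QE
    shot noise   = sqrt(signal)          (Poisson statistics)
    total noise  = sqrt(shot^2 + read^2)
    """
    signal = photons * qe
    noise = math.sqrt(signal + read_noise_e ** 2)
    return 20 * math.log10(signal / noise)

# Illustrative numbers: 10,000 photons, 60% QE, 2 e- read noise.
snr = sensor_snr_db(10_000, 0.6, 2.0)
```

This makes concrete why quantum efficiency and read noise matter so much in low light: at high photon counts the sensor is shot-noise limited, so SNR improves only as the square root of exposure.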
Orbbec's Front 3D Depth Sensing System in the Oppo Find X (system_plus)
The first introduction of Orbbec’s 3D front depth sensing system in a mobile application featuring a global shutter, a dot projector and a custom system-on-chip.
More information on that report at: https://www.systemplus.fr/reverse-costing-reports/orbbecs-front-3d-depth-sensing-system-in-the-oppo-find-x/
Development of Wearable Object Detection System & Blind Stick for Visually Challenged People (Arkadev Kundu)
This is a wearable device with a camera that detects living and non-living objects, including moving ones. It is built from a Raspberry Pi 3 and a camera, with a headphone connected to the Pi. When the module detects an object, it gives a sound output through the headphone, so the blind user knows what is in front of him or her. We made it on a very low budget, and it is very helpful for visually challenged people. The blind stick helps the user detect obstacles.
Want to move your career forward? Looking to build your leadership skills while helping others learn, grow, and improve their skills? Seeking someone who can guide you in achieving these goals?
You can accomplish this through a mentoring partnership. Learn more about the PMISSC Mentoring Program, where you’ll discover the incredible benefits of becoming a mentor or mentee. This program is designed to foster professional growth, enhance skills, and build a strong network within the project management community. Whether you're looking to share your expertise or seeking guidance to advance your career, the PMI Mentoring Program offers valuable opportunities for personal and professional development.
Watch this to learn:
* Overview of the PMISSC Mentoring Program: Mission, vision, and objectives.
* Benefits for Volunteer Mentors: Professional development, networking, personal satisfaction, and recognition.
* Advantages for Mentees: Career advancement, skill development, networking, and confidence building.
* Program Structure and Expectations: Mentor-mentee matching process, program phases, and time commitment.
* Success Stories and Testimonials: Inspiring examples from past participants.
* How to Get Involved: Steps to participate and resources available for support throughout the program.
Learn how you can make a difference in the project management community and take the next step in your professional journey.
About Hector Del Castillo
Hector is VP of Professional Development at the PMI Silver Spring Chapter and CEO of Bold PM. A mid-market growth product executive and changemaker, he works with mid-market, product-driven software executives to solve their biggest growth problems: scaling product growth, optimizing operations, and building loyal customers. He has reduced customer churn by 33% and boosted sales by 47% for clients, and makes a significant impact by building and launching world-changing AI-powered products. If you're looking for an engaging and inspiring speaker to spark creativity and innovation within your organization, set up an appointment to discuss your specific needs and identify a suitable topic for your next corporate conference, symposium, executive summit, or planning retreat.
About PMI Silver Spring Chapter
We are a branch of the Project Management Institute. We offer a platform for project management professionals in Silver Spring, MD, and the DC/Baltimore metro area. Monthly meetings facilitate networking, knowledge sharing, and professional development. For event details, visit pmissc.org.
The Impact of Artificial Intelligence on Modern Society (ssuser3e63fc)
Just a game
Assignment 3
1. What has made Louis Vuitton's business model successful in the Japanese luxury market?
2. What are the opportunities and challenges for Louis Vuitton in Japan?
3. What are the specifics of the Japanese fashion luxury market?
4. How did Louis Vuitton enter into the Japanese market originally? What were the other entry strategies it adopted later to strengthen its presence?
5. Will any new challenges arise for Louis Vuitton due to the global financial crisis? How can it overcome them?
This comprehensive program covers essential aspects of performance marketing, growth strategies, and tactics such as search engine optimization (SEO), pay-per-click (PPC) advertising, content marketing, social media marketing, and more.
2. What’s Included?
•Consumer Product Work
•Industrial Product Work
Please note: in all the examples below, I have tried to provide
the motivation for the technology created, not just the technical
work alone. Particularly in the cases where I was the primary
inventor, I tried to create a solution that met the requirements of
the customer and also provided a significant ROI and competitive
advantage.
3. Consumer Products
1) Award Winning Vertical Scanner
2) Non-Imaging Illuminator
3) Waveguide Array ScanHead
4) Cell Phone Sensors
5) Optical Storage Film
6) BeatLight Laser Toy
4. 1) Vertical See Thru Scanner: Original Patent
• USPTO 6307649
• Invented to Solve Problem of Large Scanner Footprint
• Envisaged as Fitting over Monitor or Storable off Table
5. 1) Vertical See Thru Scanner: Final Product
Winner of Industrial Design Awards
http://www.idsa.org/awards/idea/computer-equipment/hp-scanjet-4670
[Figure: removable see-thru scanner shown with its scanner stand]
6. 2) Non-Imaging Scanner Illuminator
• Non-Imaging Truncated Compound Elliptical Concentrator (CEC)
• Created to Improve Efficiency and reduce System BOM
• Result: $10 savings/scanner (100 k units/mo)
• USPTO 5903404
[Figure: Problem – the scan line width is only ~100 µm, but conventional illumination width is >> 1 cm. Solution – the truncated CEC, designed using ASAP, narrows the illumination width to << 1 cm.]
7. 3) Waveguide Array ScanHead
This is a polymer-waveguide-based image scanner, designed to eliminate the delicate
optics in conventional document scan systems and thus reduce component and assembly
cost. This is accomplished with an array containing thousands of individual polymer
waveguides. I modeled the entire system in ASAP and also performed MTF
and other image quality metrics on the completed array. USPTO 5930433
[Figure: waveguide array imaging onto a CCD]
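The MTF analysis mentioned above can be illustrated in a few lines: the MTF is the normalized magnitude of the Fourier transform of the system's line spread function. A sketch with a synthetic Gaussian LSF (illustrative only, not measured waveguide-array data):

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF = normalized magnitude of the FFT of the line spread function."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]   # normalize to 1 at zero frequency

# Synthetic Gaussian LSF on a fine grid (stand-in for measured data).
x = np.linspace(-1.0, 1.0, 256)
lsf = np.exp(-0.5 * (x / 0.05) ** 2)
mtf = mtf_from_lsf(lsf)
```

In practice the LSF would be measured by scanning a narrow slit or edge target across the array; a wider LSF yields a faster-falling MTF, i.e. lower resolving power.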
8. 4) Design/Simulation of Ambient & Proximity Sensors
I built this part in SolidWorks and analyzed its performance in Zemax. These
were designed and built for a customer thru Technical Optics, LLC.
[Figure: lens and light source layout for the proximity (Prox) and ambient light (ALS) sensors]
9. 5) Optical Storage Phosphors Film Structure
At a time when CCD detectors were expensive and CMOS detectors too noisy, we started a project
to create a “digital film” using an Optical Storage Phosphor. Initial film had poor resolution and crosstalk.
In conjunction with Rick Trutna, I invented this structure which significantly improved the resolution and
limited crosstalk in this system. USPTO 5534702
[Figure: film structure – storage phosphor over reflective silicon]
10. 6) BeatLight Laser Toy
BeatLight is a niche laser entertainment product that projects various images such
as circles, Lissajous patterns, comic faces, words, etc. The size of these images changes
in rhythm to the beat of music played near the BeatLight. This system showcases my skills
in electronics design and programming.
12. 6) Sample BeatLight Images*
Images respond (shape, size) to the "beat" of a sound source near the device
Beaded Circle
Lissajous Script
Smiley Face
* Simulated. It was hard to capture a blur-free image on camera.
13. 6) BeatLight Prototype
Analog, Digital, Embedded Electronics Design
I designed the electronics and installed all components
14. 6) BeatLight :Double Sided Printed Circuit Board
I selected the components and did board layout.
15. 6) BeatLight: C Program Example
I wrote the code and programmed
it into microcontroller memory
16. 6) BeatLight Actuator Prototype
I designed and built these Beam Deflectors. They are equivalent in performance to
Galvanometers costing hundreds of dollars, but are compact and inexpensive.
These Devices incorporate Optics, Mechanics, Magnetics, and Electronics
17. 6) BeatLight: FEA of Actuator Magnets
This simulation allowed me to optimize the magnet design for
field uniformity and strength
[Figure: original design (low flux density) vs. improved design (high flux density)]
18. Industrial Products
7) III-V Detectors for Instruments
8) Array Detector & Spectrometer
9) Flat Response Detector for Power Meters
19. 7) III-V Photodetector Design and Fabrication
I have designed, built, and tested Large Area and High Speed Photodetectors
similar to the one shown below. These detectors were developed for HP’s
Optical Measurement Systems at the request of our product divisions
and incorporated into systems such as the HP 8504 Precision Reflectometer.
20. 7) III-V Detector: Optical Measurement System
[Figure: wavelength-response measurement system – broadband light source, monochromator, fiber coupler, reference detector, parameter analyzer, voltmeter, computer, and probe station with device under test, microscope, and camera/eye]
As part of the detector development, I built systems for and performed optical and electrical tests.
The system shown measures the wavelength response of the device. I have also built and operated
similar systems for optical fiber measurement, including attenuation vs. wavelength, numerical
aperture, frequency response, OTDR, and refractive index.
21. 7) Typical Measurement Systems
The detector measurement system and other optical measurement systems are typically
set up on optical tables and look something like this. I have built many measurement
systems in this manner: for fiber optics, detectors, laser test, and scanner/optical pickup.
22. 8) NIR ARRAY DETECTOR & SPECTROPHOTOMETER
I have also designed, built, and tested Array Photodetectors. These required
creation of significant additional technology due to the close spacing. These
detectors were developed for a NIR Spectrophotometer as an extension to the
existing UV-VIS market. These systems can analyze materials only accessible in the
NIR. I built a similar system as shown and demonstrated liquid Ethanol absorption.
23. 8) NIR Array Detector: Photoetching Diffusion Analysis
The InGaAs Array Detector shown on the previous page had very close
spacing. The Zn diffusion could overlap and short the detectors. I invented
and published this electroless photoetching system for visualizing the Zn
diffusion in adjacent detectors. I also transferred this technique to our
product division.
24. 9) Flat Response Detector (FReD)
FReD was created in response to an HP Optical Power Meter Division request for
a Photodetector not requiring calibration. I did this by thinning down the epi
and using a compensating filter designed using TFCalc.
[Figure: layer structures – conventional InGaAs photodetector (p+ InP / n- InGaAs / n+ InP) vs. FReD InGaAs photodetector (compensating filter / p+ InP / thin n- InGaAs / n+ InP)]
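The physics behind the flat response can be sketched numerically: an ideal photodiode's responsivity R = ηqλ/(hc) rises linearly with wavelength, so a compensating filter whose transmission falls as 1/λ makes the product wavelength-independent. All numbers below are illustrative, not the actual FReD or TFCalc design data:

```python
# Why a compensating filter flattens a photodetector's response.
Q = 1.602e-19      # electron charge, C
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s

def responsivity(wavelength_m, eta=0.8):
    """Ideal photodiode responsivity in A/W at quantum efficiency eta.

    R = eta * q * lambda / (h * c): each photon of energy hc/lambda
    yields at most one electron, so responsivity rises with lambda.
    """
    return eta * Q * wavelength_m / (H * C)

def filter_transmission(wavelength_m, ref_m=1.0e-6):
    """Hypothetical compensating filter: transmission ~ ref / lambda."""
    return ref_m / wavelength_m

# Across 1.0-1.6 um the filtered response is constant, even though the
# bare responsivity grows by 60% over the same band.
flat = [responsivity(w) * filter_transmission(w)
        for w in (1.0e-6, 1.3e-6, 1.6e-6)]
```

A real filter cannot track 1/λ exactly over an unlimited band, which is why the thinned epi and the thin-film design (here only gestured at) work together in the actual device.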
25. Jim Williamson Summary
•Very Experienced, Broadly Skilled Engineer
•Experience: Optics, Optoelectronics, Test, Materials,
Processing, Research, Design, Development,
Applications, Electronics, Mechanical Design, and
Programming
•Seeking Contract & Unique Full Time Positions
• Contact me at opticalengineer123@gmail.com