BME 4900 Final Report


3D Ultrasound Reconstruction
BME 4900 Final Report

Michael Golden, Khayriyyah Munir, Omid Nasser Bigdeli
Team 3

Client Contact Information:
Dr. Joseph McIsaac
Hartford Hospital
80 Seymour St., PO Box 5037
Hartford, CT 06102
(860) 545-2117
Table of Contents

Abstract
1 Introduction
  1.1 Background
  1.2 Purpose of the Project
  1.3 Previous Work Done by Others
    1.3.1 Summer Interns
    1.3.2 BrainLAB
  1.4 Map for the rest of the project
2 Project Design
  2.1 Alternative Designs
    2.1.1 Camera Mounting (1): Fixed Sawhorse Support
    2.1.2 Camera Mounting (2): Sliding Angled Support
    2.1.3 Camera Mounting (3): Sliding Perpendicular Support
    2.1.4 Camera Configuration (1): Perpendicular
    2.1.5 Camera Configuration (2): Parallel
    2.1.6 Tracking System Configuration (1): In-line Triangle Configuration
    2.1.7 Tracking System Configuration (2): Perpendicular and Off Center
    2.1.8 Tracking System Configuration (3): Perpendicular and Centered
  2.2 Optimal Design
    2.2.1 Objective
    2.2.2 Subunits
      2.2.2.1 Ultrasound System and Probe
      2.2.2.2 Cameras
      2.2.2.3 Camera Support
      2.2.2.4 Camera Configuration
      2.2.2.5 Camera Mount
      2.2.2.6 Tracking Pyramid
      2.2.2.7 Stereo Triangulation
      2.2.2.8 Image Acquisition
      2.2.2.9 Image Reconstruction
3 Realistic Constraints
  3.1 Engineering
  3.2 Economical
  3.3 Manufacturability
  3.4 Ethical
  3.5 Health and Safety
  3.6 Social and Political
4 Safety Issues
5 Impact of Engineering Solutions
6 Life-Long Learning
7 Budget and Timeline
  7.1 Time Line
8 Team Members' Contributions to the Project
9 Conclusion
10 References
11 Acknowledgements
Abstract

The purpose of this project is to design and construct a low-cost device to produce a 3D image from existing ultrasound images in a clinical setting. Our client is Dr. Joseph McIsaac, an anesthesiologist at Hartford Hospital. When determining where the brachial plexus is using the ultrasound probe, it is difficult to reconstruct in one's mind the 3D configuration of this nerve, so the proposal for this project is to ease this aspect of a clinician's job. With knowledge of the location of the probe when the two-dimensional ultrasound image is produced, and with mathematical calculations, it was expected that an algorithm could be made to take these images and reconstruct them into a three-dimensional image.

To determine the spatial configuration of the probe, an attachment consisting of three spheres oriented in an equilateral triangle is placed directly on the probe. Two typical web cameras take pictures of the ultrasound probe and this "tracking pyramid" attachment. The cameras are interfaced with an image-recognition program to detect changes in the position of the probe throughout the procedure. Knowing these changes, the distance between the probe and the cameras, and the differences between the two images produced by the cameras, we are able to use stereo triangulation to calculate the exact spatial configuration of the probe. The images produced by the cameras are synced with the ultrasound images, which are compiled in a 3D reconstruction program to produce a 3D image corresponding to what was imaged in the clinical procedure.

We intend to make this design freely available to hospitals and clinics nationwide. All software is written in LabVIEW Developer Suite and will be made available as a stand-alone program. Included with this will be a manual with instructions to set up and implement the device, making it truly useful to physicians in any aspect of healthcare.
1 Introduction

1.1 Background

This project is designed to assist anesthesiologists in imaging the brachial plexus region in the neck when administering anesthesia. The client is Dr. Joseph McIsaac, an anesthesiologist at Hartford Hospital. He currently uses an ultrasound probe to image, in two dimensions, the region of the brachial plexus. It would be more informative if this image were a three-dimensional image, eliminating the need for the doctor to mentally recreate the patient's nerve. This design will be freely available to the public, allowing physicians nationwide to make use of this tool.

1.2 Purpose of the Project

The main purpose of this project is to provide a low-cost, unpatented tool for reconstructing 2D ultrasound images into 3D images which can be seen during the administration of anesthesia and throughout the procedure. Anesthesiologists currently have to imagine what the patient looks like subcutaneously (a 3D image), using the 2D ultrasound as a stepping stone. There is currently a product marketed by BrainLAB which provides 3D images in real time, but its cost greatly exceeds the budget of many hospitals. This project will achieve the same goal as the existing product using two inexpensive web cameras and LabVIEW to acquire and process the image. This LabVIEW program and the design of our tool will be made available to the public, allowing physicians nationwide to make use of this tool in a very cost-effective manner.

1.3 Previous Work Done by Others

1.3.1 Summer Interns

In the summer of 2010, two high school juniors from the Avon robotics team worked as interns in Dr. McIsaac's lab and started this project. They did a great deal of preliminary research, establishing industry contacts and writing a LabVIEW program which accomplishes many of the necessary tasks individually, but they did not integrate the various aspects together.
It is expected that we will use what they have accomplished as a foundation for our project and tie together what they have already produced. We have purchased the same cameras they used to ensure proper image acquisition with the existing LabVIEW program. The web cameras will image three balls placed on top of the ultrasound probe to recognize its spatial location. The model Taylor and Justin produced will be very similar to what we use, as it works well with what already exists in industry.

1.3.2 BrainLAB

BrainLAB, a company in Munich, Germany, which specializes in technology for neurosurgery, currently markets VectorVision. In April 1997, BrainLAB received 510(k) clearance from the Food and Drug Administration to market this product. VectorVision uses three reflective balls arranged in an equilateral triangle, imaged by two cameras. This is their platform for image-guided surgery, especially neurosurgical and orthopedic procedures. The system does not have wires and can be integrated with any instruments currently used in the operating procedure. Using VectorVision, surgeons can follow the movements of their instruments on the computer screen in real time during surgical procedures. The basis of BrainLAB's product is the starting point for our project, but we hope to achieve the same goal at a small fraction of the cost.

1.4 Map for the rest of the project

Forward movement on this project consists mostly of software development. Each of us has installed National Instruments LabVIEW 2010 with Developer Suite on our personal computers. We will use Source Code Control to develop and work on various program components at the same time without worry of having information overridden or the complications of merging programs. The components of the program that need to be developed include the image recognition to differentiate between the three different colored spheres in the tracking pyramid located on top of the probe. We also need to calculate in our program the change in location of the pyramid to have an accurate understanding of the spatial change made by the probe during the imaging. The ultrasound images will be taken as input to our program and reconstructed into a 3D image using a third-party reconstruction program that is compatible with LabVIEW. The program will be developed using the Developer Suite component of LabVIEW to produce a stand-alone program, useful for anyone who has a computer and two web cameras.
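The image-recognition component described above (differentiating three colored spheres) will be written in LabVIEW, but the underlying idea can be prototyped in a few lines of ordinary code. The following Python/NumPy sketch is our own illustration, not part of the team's program; the function name and RGB thresholds are hypothetical. It thresholds one sphere's color and returns the image centroid of the matching pixels:

```python
import numpy as np

def sphere_centroid(image, lower, upper):
    """Return the (row, col) centroid of pixels whose RGB values fall
    inside [lower, upper], or None if no pixels match.

    image: H x W x 3 uint8 array; lower/upper: length-3 RGB bounds.
    This stands in for the color-thresholding step of the tracking-
    pyramid recognition; a real system would also reject small blobs
    and track three different colors in every frame from both cameras.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # Boolean mask: True where all three channels lie within the bounds.
    mask = np.all((image >= lower) & (image <= upper), axis=2)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Toy frame: a black image with a bright red patch standing in for a sphere.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:60, 70:90] = (220, 30, 30)
c = sphere_centroid(frame, (150, 0, 0), (255, 80, 80))
```

Running the same detection on both camera images for each sphere yields the per-image pixel coordinates that the stereo triangulation step (section 2.2.2.7) consumes.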
2 Project Design

2.1 Alternative Designs

There are three main components to the hardware implementation, and for each there are different choices available to optimize this project. The first is the mounting of the two webcams. The second is the cameras' configuration relative to each other and to the patient. The third is the tracking pyramid's design.

2.1.1 Camera Mounting (1): Fixed Sawhorse Support

In this design, there are a total of five bars, all composed of 80/20 Inc.'s T-slotted aluminum framing. Bar A is four feet long and arranged parallel to the floor, with both cameras mounted parallel to each other. The cameras are mobile in the x-direction in order to find an optimal distance between them for more accurate determinations of the distance between the cameras and the patient. Bars B and C are also four feet long and are connected at 30 degrees. All bars are connected at points P and Q.

The advantages of this design include the mobility of the cameras in the x-direction as well as the convenience of the support structure. The major disadvantage is that the height of the cameras is fixed: depending on the patient, the cameras' height cannot be optimized to individual needs. However, with everything connected, the structure can be moved from room to room. An example of this support structure is seen in Fig 2.1.

Figure 2.1 Fixed Sawhorse Support
2.1.2 Camera Mounting (2): Sliding Angled Support

In this design, there are a total of five bars, all composed of 80/20 Inc.'s T-slotted aluminum framing. Bar A is four feet long and arranged parallel to the floor, with the two cameras mounted parallel to each other. The cameras are mobile in the x-direction in order to find the optimal distance between them for more accurate determinations of the distance between the cameras and the patient. Bar A is attached at either end to Bars B and is mobile in the plane of Bars B. Bars B are each fixed to a two-foot Bar C at 60 degrees.

The advantage of this design is the ability of the cameras to move in the y-direction, which allows the physician to adjust the height of the cameras to suit any bed or patient in the hospital. The disadvantage is that as the height of Bar A decreases, the angle between the cameras and the region to be imaged on the patient changes. This significantly increases the mathematical calculations necessary to determine the position of the ultrasound probe, because a new set of calculations is needed for every location of Bar A. This camera mount is shown below in Fig 2.2.

Figure 2.2 Sliding Angle Support

2.1.3 Camera Mounting (3): Sliding Perpendicular Support

In this design, there are a total of five bars, all composed of 80/20 Inc.'s T-slotted aluminum. Bar A is four feet long and arranged parallel to the floor, with the two cameras mounted parallel to each other. The cameras are mobile in the x-direction in order to find an optimal distance between them for more accurate determinations of the distance between the cameras and the patient. Bars B are perpendicular to the floor and attached to either end of Bar A. Bar A is mobile in the y-direction because the bars are composed of T-slotted aluminum. The advantage of this design is the ability of the cameras to move in both the x-direction and the y-direction, making it possible to find the optimal camera position for the various bed heights and patient sizes found throughout the hospital. An image of this mount is shown below in Fig 2.3.

Figure 2.3 Sliding Perpendicular Support

2.1.4 Camera Configuration (1): Perpendicular

In this design, each camera is mounted to one end of an L-shaped support with each leg measuring 4.5 feet in length. Both cameras image the probe and tracking system, but in a perpendicular fashion. The disadvantages of this design include the fact that it is very bulky and requires a lot of material. It takes up a lot of space and would inhibit the movement of the physician, and therefore the physician's contact with the patient, from two directions. Also, with the cameras arranged 90 degrees to each other, the overlap of the imaged areas is much smaller, so the location of the patient needs to be very specific and is not easily changed with patient variance. The perpendicular camera configuration is shown below in Fig 2.4.
Figure 2.4 Perpendicular Orientation

2.1.5 Camera Configuration (2): Parallel

In this design, both cameras are arranged on one bar, four feet in length, and parallel to each other. Because they both image from the same direction, the stereo triangulation calculations are simpler. The cameras are close to each other, which makes this design extremely space-efficient. The optimal distance from the patient is four feet; as long as this is the case, the spatial configuration can be anything, provided the patient is in the view of the cameras. There is no need for the cameras to be in any one specific location, which is desirable for the physician. The parallel camera orientation is illustrated in Fig 2.5.
Figure 2.5 Parallel Orientation

2.1.6 Tracking System Configuration (1): In-line Triangle Configuration

In this design, spheres A, B, and D are configured in an equilateral triangle, all in the same plane. The ultrasound probe is attached to sphere A, along the line made by connecting A and B. The disadvantage of this design is that if for some reason the ultrasound probe is in line with the focal line of one of the cameras, that camera will image only two spheres, with the third being lost. This would give rise to complications in the calculations, as information from all three spheres is necessary for stereo triangulation. This tracking design is shown below in Fig 2.6.

Figure 2.6 In-Line Triangle Configuration
2.1.7 Tracking System Configuration (2): Perpendicular and Off Center

In this design, spheres A, B, and D are configured in an equilateral triangle, all in the same plane. The ultrasound probe is attached to sphere D, perpendicular to the plane formed by spheres A, B, and D. The advantage of this design is that at any point all three spheres are imaged, and there is no way for them to block one another from either of the cameras. Its disadvantages include a non-optimal range of motion, particularly with respect to spheres A and B. As the angle between the probe and the neck region decreases, spheres A and B get significantly closer to the patient and can eventually make contact with the patient, limiting the range of motion in that direction. The perpendicular and off-centered tracking system is shown in Fig 2.7.

Figure 2.7 Off Center Triangle Configuration

2.1.8 Tracking System Configuration (3): Perpendicular and Centered

In this design, spheres A, B, and D are configured in an equilateral triangle, all in the same plane. The ultrasound probe is perpendicular to the plane and located at the centroid of the triangle. Advantages of this design include that at any point all three spheres are imaged, and there is no way for them to block one another from either of the cameras. Also, with the ultrasound probe located at the centroid, the physician has the greatest range of motion in any direction. This tracking system is illustrated by Fig 2.8.
Figure 2.8 Centered Triangle Configuration

2.2 Optimal Design

After a review of the proposed alternative designs, the decision was to use a structure similar to the sliding perpendicular mount, with the cameras parallel to each other. The tracking system is designed to be similar to the perpendicular and centered configuration.

2.2.1 Objective

This project is designed to assist anesthesiologists in imaging the brachial plexus region in the neck when administering anesthesia. Currently, anesthesiologists use an ultrasound probe to image, in two dimensions, the region of the brachial plexus, but it would be more informative if this image were 3D. Anesthesiologists currently have to imagine what the region they are imaging looks like in 3D, using the 2D movie as a platform.

We intend to use inexpensive web cameras in conjunction with LabVIEW to write a program to calculate the spatial location of the ultrasound probe and sync that with the produced ultrasound image. This image will be disassembled and reassembled in 3D. Figure 2.9 displays the sequence of steps to be taken to obtain a 3D image of the brachial plexus region.
[Figure 2.9: image the "Tracking Pyramid" with two cameras -> use stereo triangulation to calculate the specific configuration of the "Tracking Pyramid" -> disassemble the MP4 into separate JPEG images -> calculate the coordinates of the anatomy in each 2D ultrasound image -> reconstruct the series of 2D JPEG images into a 3D image.]

Figure 2.9 Sequence of Tasks to Reconstruct a 3D Image

We intend to design and manufacture an unpatented tool for reconstructing 2D ultrasound images into 3D images which can be seen during the administration of anesthesia and throughout the procedure. The design of our project will be made freely available to the public, particularly physicians who wish to have access to this technology and would like to do so inexpensively.

Our alternative designs were separated into the three components that make up our design: the camera support, the camera configuration, and the "Tracking Pyramid". From the various designs for each of these components we chose the most efficient and most practical, and have put those together for our optimal design.

For the camera support, we chose the sliding perpendicular support because it entails the simplest calculations, allows the most camera mobility, and consumes the least amount of space in the operating room. We chose the parallel option for the camera configuration because the stereo triangulation calculations are simpler than those necessary with the perpendicular configuration. Also, with this design, both cameras can rest on the same bar and do not require another entire support, conserving space in the operating room. For the ultrasound tracking system we chose the option in which the spheres for detecting the location of the ultrasound probe are arranged in an equilateral triangle, with the probe located at the centroid and perpendicular to the plane formed by the triangle. This design provides the greatest range of motion for the probe with the least possible patient interference.
Also, with this configuration, all of the spheres will be imaged at all times, as they cannot physically block each other.

2.2.2 Subunits

2.2.2.1 Ultrasound System and Probe

Sonosite's MicroMaxx ultrasound system will be used in the design and implementation of this project. This system operates by means of a single ultrasound probe, of which there are several to choose from. The specific probe used by clinicians to examine the brachial plexus is known as the SLA probe. Dr. McIsaac has offered us some time to work with the SLA probe to become familiar with it. During manufacturing of this project, the ultrasound probe cannot in any way be damaged, due to its cost.
This ultrasound produces an MPEG-4 movie. It operates at a frequency of 13-6 MHz, with a 25 mm broadband linear array. The scan depth is approximately 6 cm. According to the manufacturer, the applications are vascular, musculoskeletal, superficial, and nerve imaging. [1]

2.2.2.2 Cameras

For this project, we will use two Logitech Webcam Pro 9000 web cameras. This camera is shown below in Fig 2.10. These cameras are capable of taking 8-megapixel photos and high-definition video at up to 30 frames per second. This model is Hi-Speed USB 2.0 certified for connection with the computer and is compatible with Windows XP (SP2) or higher. The stand of each camera has been removed, exposing several options to properly secure the camera to the support bar. Only after the cameras are fixed to this bar can several other key steps toward completion be taken.

Figure 2.10 Logitech Webcam Pro 9000

2.2.2.3 Camera Support

The design chosen for the camera support is one in which it is easy to determine the ideal position for the cameras by adjusting their spatial location. All of the material in this support will be 80/20 Inc.'s T-slotted aluminum. Figure 2.11 illustrates the camera support in detail. The side beams (A) measure 6 feet (1.83 m) in length and are arranged perpendicular to the floor and parallel to each other. The top and bottom supports (B), as well as the camera support (C), each measure 4 feet (1.22 m) in length and are arranged parallel to the floor and perpendicular to the side beams (A). The top and bottom supports are fixed in place with respect to the vertical supports, whereas the camera support (C) can move vertically up and down the side beams by means of a linear brake assembly. The base stands (E) measure 2 feet (0.61 m) each, and the angled supports (D) each measure 1 foot (0.30 m) in length.

The advantages of this design include the ability of the cameras to move in both the x- and y-directions, making it possible to find the optimal position for both cameras for various bed heights and patient sizes. A current disadvantage of this setup is its size. Standing 6 feet (1.83 m), the support structure is very tall, taller than it needs to be. The reason it is so tall is for optimization: if it is determined that the cameras will never have to be more than 4 feet (1.22 m) high, the support structure will be disassembled, cut, and reassembled to fit the specifications.
Figure 2.11 Camera Support Structure

The assembly of this support system will be simple because the 80/20 Inc. parts are designed to interact intimately with each other. The company has already developed many parts that allow different angles to be made, as well as many other attachments that can be added onto its T-slotted aluminum framing. Using a linear brake will allow Bar C to move up and down with ease [2], giving the cameras a greater range of motion.

2.2.2.4 Camera Configuration

The design chosen for the camera configuration is the one in which the cameras' optical axes are arranged parallel to each other and parallel to the ground. The cameras will be placed such that their fields of view provide the greatest overlap. This ensures that the tracking system will be imaged by both cameras. In this design, the cameras are both Logitech Webcam Pro 9000s and are arranged on one bar which measures 4 feet (1.22 m) in length. Because they both image in the same direction, the stereo triangulation calculations are simplified (stereo triangulation is discussed later in section 2.2.2.7). Another advantage of orienting the cameras in this manner is that the two webcams will be physically close to each other, making this design space-efficient.

2.2.2.5 Camera Mount

The bracket the Logitech Webcam Pro 9000 comes with is not sufficient for this project. There are two joints where the webcam has practically free rotation: one at the neck of the camera, the other between the two plastic pieces holding the camera in a fixed position. In order to keep the cameras in one specific place during operation, a specialized bracket has been designed.

This newly designed bracket is more rigid and allows the camera to move along only one direction. It is designed to mount onto the camera and act as a hinge with a plate (C). The original mount of the camera is removed, and two new metal supports (A) are designed to hold the camera, as shown in Fig 2.12. These metal supports operate as the hinge with the three prongs of the bracket (B). The three prongs and the plate are the same piece of metal and are therefore rigid. The height of the hinge is chosen such that when the camera rests on the plate, the camera's optical axis is level with the ground.

Figure 2.12 Camera Bracket without Webcam

This bracket will be machined in the University of Connecticut machine shop. The mill will be used to create both a smooth surface and the intimate details. The two metal supports (A) will be machined independently of the rest of the bracket. The mill will make the larger cylinder protruding from the block by removing the surrounding material from a larger piece of aluminum. The smaller cylinder is a pin that drops into a hole drilled using the mill. The advantage of this is that the mill has technology allowing it to accurately mill in a circular pattern, making a boss protrusion with an accurate diameter.
The bracket itself will be milled from an angled piece of aluminum. Since the angle is already there, the only important features of the bracket are the depression for the larger side of the camera and the three prongs. The hole that acts as a hinge for the two parts will be made using a 1/8-inch-diameter bit, because it is the smallest bit that is long enough to drill through the three prongs accurately.

2.2.2.6 Tracking Pyramid

The purpose of the tracking pyramid is to give the cameras an object to watch, so that the software can compare two images and give the 3D coordinates of the ultrasound probe. We must also make sure that LabVIEW will be able to locate the position of these three points regardless of their orientation in 3D space. An object that has the same general outline regardless of the angle from which it is photographed is a sphere. Therefore, we have decided to use three spheres to determine the position and plane of the ultrasound probe.

The key to the design is how the three spheres are oriented with respect to each other and to the ultrasound probe in space. We must take a few major factors into account when determining this. The first is that the spheres should be connected to the probe in such a way that when the doctor moves the probe along the neck of the patient and at different angles, the three spheres do not hinder the doctor's desired movement of the probe. For example, if the spheres are placed in front of the probe instead of behind it, the spheres will hit the patient's neck and limit the angular movement of the probe. The spheres must also extend far enough behind the probe that the neck of the patient will never hinder the cameras' view of the spheres. Another factor is that the spheres must be oriented with respect to the probe in such a manner that the 3D position and plane of the ultrasound probe can be calculated using vector analysis.

The design chosen for the configuration of the three spheres located on top of the ultrasound probe is one in which the three spheres are arranged in an equilateral triangle. The centroid of this triangle will be oriented directly above the center of the ultrasound probe. This design allows all the spheres to be imaged by the two cameras at the same time; there will be few, if any, instances when one sphere blocks another. Also, this design keeps the spheres away from the patient, allowing the clinician greater freedom of movement. A CAD drawing of this proposal is found in Fig 2.13.
Figure 2.13 Tracking Pyramid

With the spheres arranged in an equilateral triangle, as they are in this design, and the entire system imaged by two cameras, we are able to use stereo triangulation to calculate the exact spatial location of the ultrasound probe (discussed in section 2.2.2.7). The images taken by both cameras will be imported into LabVIEW, after which LabVIEW assigns coordinates to the image.

2.2.2.7 Stereo Triangulation

The concept behind stereo triangulation is that if one knows the distance and orientation of two cameras relative to one another, then one can calculate the position of an object in 3D space by analyzing the difference in position of the object in the pictures taken by the two cameras. The simplest form of stereo triangulation occurs when the camera lenses have parallel optical axes and are a certain distance, b, apart, as can be seen in Fig 2.14. When looking at this image, one should think of the zx-plane as parallel to the ground, with the y-axis corresponding to altitude, or height. The optical axes point in the direction that the lenses are looking; in this case, the optical axes of both cameras point toward the positive z-axis and are parallel to it. The variables X1 and X2 correspond to the x-distances of the object from the center vertical axis of the two images.

Figure 2.14 Stereo Triangulation [3]
Because we intend to determine the actual spatial location relative to the cameras, we will use a constant K, which represents the difference between LabVIEW's assigned pixel coordinates and the actual location. If we assume that the reference origin in this 3D space is the center of the left camera lens, noted as L in Fig 2.14, then we can calculate the Z, X, and Y positions of point P with the following equations, where Y1 is the corresponding y-distance of the point from the center of the left image; K is first determined by imaging a point for which all other values are known:

Z = K b / (X1 - X2)
X = X1 Z / K
Y = Y1 Z / K

To ensure that these calculations are accurate, we have made the design with the two cameras' optical axes parallel, by having the cameras parallel to each other and resting on the same bar. After determining the value of K, our next step will be to create a method to determine the 3D position and plane of the ultrasound probe versus time. Because we want to be able to determine a plane in 3D space, we need to determine the position of at least three non-collinear points in space. The tracking pyramid will be firmly attached to the ultrasound probe and will therefore serve as these three non-collinear points.

Once we have the two cameras in proper alignment and have determined the value of K in our stereo triangulation equations, we will be able to determine the x, y, and z coordinates of points A, B, and D, which are the centers of the three spheres. Point C is the centroid of triangle ABD. Placing the center of the probe scanner, P, along the line perpendicular to the plane defined by ABD at the centroid allows for maximum angular revolution of the probe about the neck and maximum rotation about the line CP without the spheres hitting the neck. The combination of the placement of P along this perpendicular and the fact that the three spheres are aligned such that ABD is an equilateral triangle allows for unbiased movement of the probe about the neck. These points can be seen in Fig 2.15.
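The parallel-camera triangulation described in this section is easy to check numerically. The Python sketch below is our own illustration, not the team's LabVIEW code; the function and variable names are ours, and K is treated here as a focal-length-like scale factor relating pixels to geometry:

```python
def triangulate(x1, y1, x2, K, b):
    """Recover (X, Y, Z) of a point seen by two parallel cameras.

    x1, y1 : pixel offsets of the point from the image center of the
             left camera (the 3D origin sits at the left lens, L).
    x2     : horizontal pixel offset in the right camera's image.
    K      : scale constant, found by imaging a point whose actual
             position relative to the cameras is known.
    b      : baseline distance between the two camera lenses.
    """
    disparity = x1 - x2       # shift of the point between the two views
    Z = K * b / disparity     # depth grows as disparity shrinks
    X = x1 * Z / K
    Y = y1 * Z / K
    return X, Y, Z

# A point 4 units in front of the cameras with a 1-unit baseline:
# with K = 800 px, the expected disparity is K*b/Z = 800*1/4 = 200 px.
X, Y, Z = triangulate(x1=250, y1=100, x2=50, K=800, b=1.0)
```

Note the sensitivity in the `disparity` term: as the tracked point moves farther away, X1 - X2 shrinks, so small pixel errors produce larger depth errors, which is one reason the cameras are kept at a fixed, known working distance from the patient.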
Figure 2.15 2D Space of the Ultrasound Image

Once we calculate the coordinates of points A, B, and D using LabVIEW and our stereo triangulation equations, our next step is to calculate the centroid of triangle ABD using the following equations:

    Cx = (Ax + Bx + Dx) / 3
    Cy = (Ay + By + Dy) / 3
    Cz = (Az + Bz + Dz) / 3

Next the vector CP is determined. Vector CP is the vector from the centroid to the tip of the probe. Points A, B, and D are the centers of the three spheres, and a vector between two of these points is simply the difference between them. By definition, a vector has two components: direction and magnitude. Because CP is perpendicular to the plane defined by AB and AD, the direction of CP is defined by either (AB × AD) or (AD × AB). By further inspection of Fig 2.15, one can see that the direction of CP is in fact (AB × AD). In order to calculate this cross product, one must first calculate vectors AB and AD. The following equations can be used to calculate AB and AD:
    AB = (Bx - Ax, By - Ay, Bz - Az)
    AD = (Dx - Ax, Dy - Ay, Dz - Az)

Therefore, (AB × AD) can be defined by the following determinant:

    AB × AD = | i         j         k       |
              | Bx - Ax   By - Ay   Bz - Az |
              | Dx - Ax   Dy - Ay   Dz - Az |

            = [(By - Ay)(Dz - Az) - (Bz - Az)(Dy - Ay)] i
            - [(Bx - Ax)(Dz - Az) - (Bz - Az)(Dx - Ax)] j
            + [(Bx - Ax)(Dy - Ay) - (By - Ay)(Dx - Ax)] k

For simplicity of future equations, let us denote the x, y, and z components of (AB × AD) as Nx, Ny, and Nz respectively, such that

    Nx = (By - Ay)(Dz - Az) - (Bz - Az)(Dy - Ay)
    Ny = (Bz - Az)(Dx - Ax) - (Bx - Ax)(Dz - Az)
    Nz = (Bx - Ax)(Dy - Ay) - (By - Ay)(Dx - Ax)

and (AB × AD) = Nx i + Ny j + Nz k.

Now that we have calculated the direction of CP, we are ready to define CP, because we already know that its magnitude is d, as can be seen from Fig 2.14. CP can be defined as the unit vector in the direction of (AB × AD) multiplied by the magnitude of CP, which is d. The unit vector of a vector is calculated by dividing each component of the vector by the square root of the sum of the squares of the components, as one can see below:

    n = (Nx i + Ny j + Nz k) / sqrt(Nx^2 + Ny^2 + Nz^2)

Therefore, we can define CP with the following equation:
    CP = d (Nx i + Ny j + Nz k) / sqrt(Nx^2 + Ny^2 + Nz^2)

From Fig 2.15 we can see that C + CP = P. Therefore, we can define P with the following equation, where all the values on the right side are originally known or have been previously calculated:

    P = (Cx, Cy, Cz) + d (Nx, Ny, Nz) / sqrt(Nx^2 + Ny^2 + Nz^2)

So far we have completed the calculations for the 3D position of the center of the ultrasound probe scanner. Next we have to calculate the plane of the 2D image associated with each 3D position. The four vertices of this plane are labeled m, n, o, and q. A plane can be defined by two non-collinear vectors. The two vectors that will define the plane of the 2D ultrasound image, according to Fig 2.15, are mn and CP. We will not know mn, but we will know AB. According to our design, as one can see in Fig 2.15, mn is parallel to AB. Therefore, we can define the plane of the 2D ultrasound image using two known vectors, AB and CP. More specifically, one can define a plane geometrically using two non-collinear vectors. However, we must know the algebraic equation of the 2D ultrasound plane so that we can input it into a program for 3D reconstruction. To define a plane algebraically, one must know a normal vector to the plane and the coordinates of at least one point on the plane. If [a b c] is the normal vector to a plane, and (x0, y0, z0) is a point on the plane, then the plane can be defined by the following equation:

    a x + b y + c z = k0, such that k0 = a x0 + b y0 + c z0.

We can define our normal vector by the following equation: [a b c] = CP × AB. We can assign point P to (x0, y0, z0). Therefore, we can define the 2D ultrasound plane using the above equation for a plane, such that a, b, and c are the i, j, and k values of CP × AB, respectively, and k0 = a Px + b Py + c Pz.

So far we have determined how to locate the center of the 2D ultrasound scanner in space and how to determine the plane of the image. However, in reality the 2D image is not an infinite plane. In fact, all the images will have a certain length and width, which is determined by the specific probe that we will use, which is undetermined at this point. However, once we discover the dimensions of each image, we can easily calculate the four endpoints of the 2D rectangle or square.
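The chain of calculations above (centroid, cross product, unit vector, probe tip, and plane coefficients) can be sketched as follows. Python is used for illustration only; the real implementation is a LabVIEW program, and the function names here are our own:

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def probe_pose(A, B, D, d):
    """From the triangulated sphere centers A, B, D and the known
    centroid-to-scanner distance d, return the scanner center P and
    the image-plane coefficients (a, b, c, k0) with ax + by + cz = k0."""
    C = tuple((A[i] + B[i] + D[i]) / 3.0 for i in range(3))    # centroid of ABD
    AB = tuple(B[i] - A[i] for i in range(3))
    AD = tuple(D[i] - A[i] for i in range(3))
    N = cross(AB, AD)                          # direction of CP, normal to ABD
    norm = math.sqrt(sum(n * n for n in N))
    CP = tuple(d * n / norm for n in N)        # unit direction scaled by d
    P = tuple(C[i] + CP[i] for i in range(3))  # C + CP = P
    a, b, c = cross(CP, AB)                    # normal to the ultrasound plane
    k0 = a * P[0] + b * P[1] + c * P[2]
    return P, (a, b, c, k0)
```

For example, with spheres at (0,0,0), (1,0,0), (0,1,0) and d = 3, the scanner center lies 3 units above the centroid of the triangle, and the returned plane contains that point with its normal perpendicular to CP and AB.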
2.2.2.8 Image Acquisition

Two low-cost web cameras will be set up a fixed distance from each other, with the ultrasound probe at their focal point. On top of the ultrasound probe will be three differently colored balls, all the same size and an equal distance apart, in a triangular formation. The size, color, and spacing of the balls will depend on the ability of the cameras to differentiate them from one another, as well as the capability of LabVIEW to recognize the balls as separate objects. It is most desirable to have the balls as small and as close together as possible, to reduce crowding of the tools that the physician is using. This apparatus will be fixed to the ultrasound probe, so once the locations of the balls are calibrated, recognizing where the balls are reveals the location of the ultrasound probe itself.

A LabVIEW program will have each camera take images of the ultrasound probe tracking system at the same time, at the same frame rate as the ultrasound. The ultrasound produces an MP4 movie, so we need a camera image corresponding to each frame recorded by the ultrasound for proper reconstruction. Vision Acquisition software will provide us with the x and y positions of the spheres in each image. From these, we can use the stereo triangulation calculations to determine the z coordinate, providing the exact 3-D position of the ultrasound probe.

2.2.2.9 Image Reconstruction

The ultrasound technology produces its images in the form of a DICOM (Digital Imaging and Communications in Medicine) or MP4 movie. This movie will be disassembled into multiple 2-D images using an MP4-to-JPG converter program. Once we have one JPG image for each frame of the MP4 movie, we will have an image of the tracking system corresponding to each ultrasound image. These single 2-D images will then be reassembled into a 3D image using Biomedical Source Code 3.0. Once a 3-D image exists, it is possible to take slices at any angle to better understand the specifics of the imaged region for each patient.
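The frame-for-frame pairing described above (one camera image for each ultrasound frame) can be sketched as a nearest-timestamp match. This is an illustrative Python sketch under our own assumptions; the actual pairing happens inside the LabVIEW program, and these names are hypothetical:

```python
def match_frames(n_us_frames, us_fps, camera_times):
    """Pair each ultrasound frame with the camera image whose capture
    timestamp is closest to that frame's timestamp.

    n_us_frames:  number of frames extracted from the ultrasound MP4
    us_fps:       ultrasound frame rate in frames per second
    camera_times: capture timestamps (seconds) of the camera images
    """
    pairs = []
    for k in range(n_us_frames):
        t = k / us_fps                      # timestamp of ultrasound frame k
        j = min(range(len(camera_times)),
                key=lambda i: abs(camera_times[i] - t))
        pairs.append((k, j))                # ultrasound frame k <-> camera image j
    return pairs
```

When both streams truly run at the same frame rate and start together, this reduces to pairing frames index by index; the nearest-timestamp search tolerates small capture jitter from the webcams.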
3 Realistic Constraints

3.1 Engineering

There are multiple components to this design, spread across a wide range of engineering disciplines. The largest component of our project involves computer programming, or more specifically bioinformatics. The main program we will use, which is available in our laboratories, is National Instruments' LabVIEW 2010. This program will be used for image recognition, image comparison, stereo triangulation calculations, and to import the separate JPEG images corresponding to the ultrasound movie.

We will need to use National Instruments' Vision Acquisition software, which is compatible with LabVIEW, but we do not have access to this software and will need to obtain licensure. If we are able to obtain a student license, our expectation is that it will be for a limited amount of time, so we will need to do as much of our testing as possible in a short period of time. Hopefully, when the project works, the hospital will purchase the software and have full access to it for use with our design.

To disassemble the MP4 movie obtained from the ultrasound software already existing in the hospital, we will need to use FFmpeg software. The movie produced will be dissected into individual JPEG images based on the frame rate of the ultrasound movie. We also do not have this program at our disposal, so we will need to obtain licensure for it. If we receive a student license, our expectation is that it will be for a limited amount of time, so we will need to do as much of our testing as possible in that window. When the design is finished, the hospital will need to purchase this software to disassemble the MP4 movie obtained during each procedure.

3.2 Economic

Our intention is to design a prototype which will be made freely available to hospitals across the country. There is currently a product which will show 3-D images of an area being imaged with ultrasound, but it costs approximately $200,000. We are designing a product for under $1,000 which will do essentially the same thing. One of the main motivations for our project is to make it as economically efficient as possible, so the majority of the components are separate pieces that we need to assemble ourselves. This creates additional work on our part, as well as the potential for more time devoted to actually constructing the project.

Also, we do not have enough money in our budget to purchase licensure for the additional necessary programs, so we have to rely on free student versions which have a time limit on them. This will increase the need to work quickly during the time these programs are available.

3.3 Manufacturability

The mechanical components of this design were all purchased separately, and the majority of them are from 80/20 Inc. This ensures that these components will be compatible
with each other and will be easily assembled. Because of economic constraints we were unable to send our design to a company and have them build exactly what we need, so much of the construction will take place in the design lab. The components that were not purchased from 80/20 Inc. were designed and built in the engineering machine shop. For someone who does not have access to such a shop, it would be very difficult to construct these components, such as the casing for the camera.

The ultrasound probe which we are studying and designing the tracking system for costs $5,000, which is far out of our budget. There is one currently used clinically at Hartford Hospital, to which we have limited access. We only have access to the probe during non-business hours, and we cannot take the probe to school with us to secure the tracking system to it. All of the design will be based on measurements taken at the hospital, so we will need to leave room for expected errors in measurement and design. Once we finally assemble the tracking system onto the probe, hospitals that use our design in the future will know for sure which measurements to use.

3.4 Ethical

When working in any environment with patients, it is necessary to abide by all HIPAA laws of confidentiality. With our design, many new images associated with patients will be produced and processed. We will need to ensure all information is stored on a secure network. If the computer used to process the images and reconstruct them into a 3-D image is on the same network currently used for all other hospital documents, our understanding and expectation is that it will be safe. For all preliminary testing, we will use images of our own brachial plexuses, so we will not have to be concerned with confidentiality on our own personal computers or the laboratory computers. If someone were to use our design in its initial stages, they would need to obtain an MP4 movie from an outside source.
This is the case unless they had access to an ultrasound probe to image themselves, because the hospital, by law, is not allowed to distribute images of patients if there is an identifying label on the image.

3.5 Health and Safety

The ultrasound probe is currently located at Hartford Hospital and is used clinically on a daily basis. Because patients are involved in these procedures and require this tool, we are unable to take the probe to the lab. With research and experiments requiring the use of a human subject, we need to be extra cautious about safety concerns.

The optimal distance for the cameras is four feet away from the patient, so this leaves very little room for clinicians to move around in the extra space left in the room. Nothing can be in between the cameras and the patient, to ensure that nothing obstructs the cameras' view. Everything done in the operating room will now have to be restricted to an area 24 square feet smaller than was available prior to using this product.

3.6 Social and Political
BrainLAB, a Munich, Germany company which specializes in technology for neurosurgery, currently markets VectorVision. In April 1997, BrainLAB received 510(k) clearance from the Food and Drug Administration to market this product. VectorVision uses three reflective balls arranged in an equilateral triangle, imaged by two cameras. This is a platform for image-guided surgery, especially neurosurgical and orthopedic procedures. The system has no wires and can be integrated with any instruments currently used in the operating procedure. Using VectorVision, surgeons can follow the movements of their instruments on the computer screen in real time during surgical procedures. BrainLAB's product is the starting point for our project, but we hope to achieve the same goal at a small fraction of the cost.

Our design uses the same idea of three spheres arranged in an equilateral triangle, imaged by two cameras, with stereo triangulation calculations. For this reason we are unable to patent our design. If we added a new component innovation we would be able to apply for a patent, but this is not our intention. We intend to use BrainLAB's design as a platform and provide this design for free to the public.
4 Safety Issues

The mechanical components of this project will be assembled by the members of our group, which naturally gives rise to safety concerns. Two of the members have been certified in Machine Shop Safety and are able to use the engineering machine shop. By using protective devices and procedures, we hope that in using these metal pieces to construct the fixture we will not encounter any hazards. We ordered three 12' (3.66 m) pieces instead of any 20' (6.10 m) pieces to eliminate potential hazards arising from delivering or working with such large pieces of metal.

Our constructed project will be placed inside an operating room, consuming space which is normally free for physicians and other personnel to move around in. There is a high likelihood that someone could bump into or knock over this fixture, giving rise to serious medical concerns. It is possible that we will surround the metal with protective padding so that anyone who comes in contact with it will not get hurt. To limit the possibility of knocking the fixture over, we have ensured that the feet of the structure are large enough that a lot of force would need to be applied to tip it over.

The ultrasound probe is currently used clinically with patients on a daily basis in a sterile operating room. When we are able to view and use the probe, we will need to ensure that we properly disinfect our hands and the ultrasound probe before and after we use it. If we do not change into sterile clothes, we will need to be extra cautious of the contact made with the probe, as well as other instruments in the operating room.
All of the components that we construct in the laboratory at school will need to be properly sterilized before they enter the operating room, as is the case with any surgical material.

The program which disassembles the MP4 movie produced by the ultrasound software into individual JPEG images, as well as the program which we will write to reconstruct these images into a 3-D image, both use a lot of processing memory. If the computer used to run these programs is not powerful enough, the programs will run slowly and the computer will heat up. This is a concern because there is a potential for internal components of the computer to burn out if they get hot enough. We will try to alleviate this issue by using a computer with a fast enough processor for these programs, as well as a good cooling system.
5 Impact of Engineering Solutions

Our design is intended to provide a cost-effective method for clinicians to view in 3-D the images they already have access to in 2-D, specifically of the brachial plexus. With our design, they will be able to use the MP4 movies already produced by the technology they have and view the anatomy in 3-D.

This tool will be especially useful for small hospitals that cannot purchase expensive products that reconstruct a 3-D image. Any hospital that administers anesthesia via the brachial plexus will be specifically interested in our design, as it is designed for that purpose. These are not the only hospitals, though. Because our design is very general, it can be used for any ultrasound procedure, as long as the ultrasound setup with the tracking device is used and is imaged by both cameras. Virtually every hospital in this country uses ultrasound technology in some respect, ranging from echocardiograms to ultrasounds of fetuses during pregnancy. With our design, each of these procedures can now be viewed in 3-D instead of 2-D. Especially in the case of pregnancy ultrasounds, where people take home copies of the images, parents will be able to see their child in three dimensions. If this becomes common, the view and expectation of ultrasounds in general will change, initially for the hospitals that use it. As more and more hospitals adopt the design, which will be made available to them for free, this will change globally.
6 Life-Long Learning

Throughout the course of this semester, we have had to learn many new techniques to accomplish various tasks. The majority of these pertain to the design and planning stages of project development.

The first and most essential task was recognizing each team member's qualities and learning styles to ensure that we can work together effectively. Based on these qualities, various tasks have been divided to ensure that each person contributes in the areas most suited to their interests and strengths. Each of us contributes a significant amount to the written components of this project, as it is necessary for engineers to be able to communicate their ideas effectively.

We have all had to learn how to develop a timeline for a project and have realized that tasks are not always completed at the desired speed. This is one issue in particular that drastically affects our ability to complete projects.

This project required various oral presentations using PowerPoint as a resource. Through these presentations our public speaking skills have been tested and have grown as a result. The ability to express thought processes orally, especially in terms of problem solving in the realm of engineering, is an extremely useful quality in any area of interest or career.

The future of this project involves much more structure in terms of outlining tasks and completing them in a timely manner. It is expected that not everything will go exactly as planned, which is why in our timeline the majority of the most time-consuming tasks associated with research and programming have been clustered early in the next semester. This leaves ample time before the due date for changes and complications that may arise. It is expected that through all of these processes we will become more efficient and conscientious engineers, able to take these skills into the work force.
7 Budget and Timeline

7.1 Budget

We have been budgeted $1000 by the UConn School of Engineering to complete this project. All planning and purchasing have been made and projected keeping this budget in mind. It is expected that we will be able to complete this project well under budget. Purchase requisitions, located in the appendix, indicate how much we have spent on which products.

7.2 Timeline

Task Name | Duration | Start | Finish

Cameras | 100 days | Mon 9/20/10 | Fri 2/4/11
  Determine which web camera is most effective and cost efficient for project | 1 day | Mon 9/20/10 | Mon 9/20/10
  Fill order requisition form for cameras | 4 days | Tue 9/21/10 | Fri 9/24/10
  Verify that ordered parts arrive | 5 days | Mon 10/4/10 | Fri 10/8/10
  Determine necessary alignment of cameras for stereo triangulation | 5 days | Mon 9/20/10 | Fri 9/24/10
  Machine the camera bracket | 27 days | Mon 10/4/10 | Tue 11/9/10
  Attach cameras to camera beam via camera connection | 2 days | Mon 11/8/10 | Wed 11/10/10
  Determine the optimal distance between the two cameras | 4 days | Tue 1/18/11 | Fri 1/21/11
  Determine the optimal height of the cameras | 5 days | Mon 1/31/11 | Fri 2/4/11
  Determine the optimal distance between the cameras and the patient | 5 days | Mon 1/31/11 | Fri 2/4/11

Camera Bracket | 32 days | Mon 9/27/10 | Tue 11/9/10
  Determine mechanical design for support of cameras based on necessary alignment | 5 days | Mon 9/27/10 | Fri 10/1/10
  Accurately measure camera dimensions | 1 day | Mon 10/4/10 | Mon 10/4/10
  Determine most efficient and cost effective material for camera mounting design | 2 days | Mon 10/11/10 | Tue 10/12/10
  Acquire material for camera mount | 1 day | Tue 10/12/10 | Tue 10/12/10
  Machine camera mount | 7 days | Mon 10/18/10 | Tue 10/26/10
  Machine camera connection to mount | 10 days | Mon 10/4/10 | Fri 10/15/10
  Connect camera to camera connection | 2 days | Mon 11/8/10 | Tue 11/9/10
  Connect mount to camera connection (using a pin as a hinge) | 2 days | Mon 11/8/10 | Tue 11/9/10

Tracking Pyramid | 85 days | Mon 9/27/10 | Sun 1/23/11
  Determine optimal orientation of three spheres to each other | 5 days | Mon 9/27/10 | Fri 10/1/10
  Determine optimal orientation of three spheres to ultrasound probe | 5 days | Mon 9/27/10 | Fri 10/1/10
  Determine optimal size of spheres | 10 days | Mon 12/6/10 | Fri 12/17/10
  Determine the optimal colors of the three spheres | 10 days | Mon 12/6/10 | Fri 12/17/10
  Determine optimal distance of spheres from each other | 10 days | Mon 12/6/10 | Fri 12/17/10
  Determine optimal distance of spheres from probe | 10 days | Mon 12/6/10 | Fri 12/17/10
  Determine optimal design for attachment of tracking pyramid to ultrasound probe | 5 days | Mon 11/15/10 | Fri 11/19/10
  Determine the optimal material for the spheres | 5 days | Mon 12/6/10 | Fri 12/10/10
  Determine the optimal material to connect the spheres to the ultrasound probe | 5 days | Mon 12/6/10 | Fri 12/10/10
  Fill order requisition form for parts for tracking pyramid | 5 days | Tue 12/7/10 | Sat 12/11/10
  Verify that ordered parts arrive | 5 days | Mon 12/13/10 | Fri 12/17/10
  Build tracking pyramid from many optimizations | 5 days | Tue 1/18/11 | Sun 1/23/11
  Test various background colors | 5 days | Mon 12/6/10 | Fri 12/10/10

Support Structure | 10 days | Sat 10/9/10 | Fri 10/22/10
  Determine mechanical design for support of cameras based on necessary alignment | 4 days | Wed 10/13/10 | Mon 10/18/10
  Determine most effective and cost efficient material for support | 3 days | Tue 10/12/10 | Thu 10/14/10
  Determine amount of beam material needed (T-slotted aluminum framing) | 4 days | Sat 10/9/10 | Wed 10/13/10
  Determine various mounting hardware for support | 4 days | Wed 10/13/10 | Sat 10/16/10
  Fill order requisition form for material from 80/20 | 2 days | Sun 10/17/10 | Mon 10/18/10
  Verify that ordered parts arrive | 3 days | Mon 10/18/10 | Wed 10/20/10
  Determine mechanical design for camera mounts to fit necessary alignment restrictions | 3 days | Wed 10/20/10 | Fri 10/22/10
  Label heights in one inch increments on support beams | 2 days
  Determine the optimal height of the cameras | 2 days

CAD | 9 days | Tue 10/19/10 | Fri 10/29/10
  CAD of Camera Mount | 3 days | Wed 10/27/10 | Fri 10/29/10
  CAD of Camera | 3 days
  CAD of Support Structure | 4 days | Tue 10/19/10 | Fri 10/22/10
  CAD of Tracking Pyramid | 4 days | Wed 10/20/10 | Sun 10/24/10
  CAD of Ultrasound Probe | 4 days
  CAD of all together | 7 days

Ultrasound Probe | 1 day | Thu 10/28/10 | Thu 10/28/10
  Accurately measure ultrasound probe for accurate attachment of tracking pyramid | 1 day | Thu 10/28/10 | Thu 10/28/10
  Practice using ultrasound in Hartford Hospital | 1 day | Thu 10/28/10 | Thu 10/28/10
  Understand how the Sonosite M-Turbo Ultrasound machine works | 1 day | Thu 10/28/10 | Thu 10/28/10
  Understand how clinicians acquire ultrasound images | 1 day
  Learn how to use the various applications of the Sonosite system | 1 day

LabVIEW | 15 days | Tue 11/9/10 | Mon 11/29/10
  Obtain license for LabVIEW Vision Assistant 2010 (LVA) | 1 day | Tue 11/9/10 | Tue 11/9/10
  Determine how to acquire two images in LVA simultaneously | 1 day | Wed 11/10/10 | Wed 11/10/10
  Acquire two images simultaneously | 2 days | Thu 11/11/10 | Fri 11/12/10
  Ensure the two images have the same frame rate | 2 days | Mon 11/15/10 | Tue 11/16/10
  Determine how to detect the three spheres in the images | 1 day | Wed 11/10/10 | Wed 11/10/10
  Use known distances of object and acquire images of object to determine the value of K in the stereo triangulation equations | 2 days | Wed 11/17/10 | Thu 11/18/10
  Test various background colors | 2 days | Thu 11/11/10 | Fri 11/12/10
  Implement the mathematical equations determined earlier into the main LabVIEW program | 1 day
  Practice following the tracking pyramid with the two webcams | 3 days | Mon 11/15/10 | Wed 11/17/10
  Have the LabVIEW program output four endpoints of the images in 3D space corresponding to each 2D ultrasound image | 1 day
  Practice acquiring the tracking pyramid images and ultrasound images simultaneously | 3 days | Thu 11/18/10 | Mon 11/22/10
  Troubleshoot main LabVIEW program whenever necessary | 5 days | Tue 11/23/10 | Mon 11/29/10
  After main LabVIEW program, use the development suite to create a stand-alone version | 3 days | Tue 11/16/10 | Thu 11/18/10

MATLAB (Alternative to LabVIEW) | 15 days | Tue 11/9/10 | Mon 11/29/10
  Determine how to use image acquisition toolbox in MATLAB
  Determine how to use the image processing toolbox
  Learn how to acquire two images
  Record from two cameras simultaneously
  Record from two cameras at the same frame rate
  Learn how to begin and end recording simultaneously
  Write an M-File that can record and stop recording from the two cameras simultaneously at the same frame rate
  Troubleshoot MATLAB program whenever necessary

Stereo Triangulation | 10 days | Tue 11/9/10 | Mon 11/22/10
  Determine best method to detect 3D location of object in space from two images | 1 day | Tue 11/9/10 | Tue 11/9/10
  Determine the mathematical calculations necessary in order to find the 3D position of the spheres | 2 days | Wed 11/10/10 | Thu 11/11/10
  Modify the general stereo triangulation equations to fit our specific model using a constant to convert from LabVIEW's image to real-world distances | 1 day | Fri 11/12/10 | Fri 11/12/10
  Determine the value of K by imaging objects at known distances | 4 days | Wed 11/17/10 | Mon 11/22/10

Client and Relevant Meetings | 1 day | Tue 11/9/10 | Tue 11/9/10
  Initial meeting with Dr. McIsaac | 1 day
  Follow-up meeting with Dr. McIsaac | 1 day
  Meeting with Taylor and Justin for knowledge on previous work | 1 day
  Meeting with Dr. McIsaac to practice with the ultrasound technology | 1 day
  Meeting with Dr. Defaria (NI Representative) for LabVIEW licenses | 1 day
  Final meeting with Dr. McIsaac to approve project | 1 day

Project Assignments | 1 day? | Tue 11/9/10 | Tue 11/9/10
  Project statement
  Project specifications
  Project proposal
  Alternative designs
  Optimal designs
  Fall semester final report
  Fall semester final presentation
  Weekly reports
  Spring semester final report
  Spring semester final presentation
8 Team Members' Contributions to the Project

8.1 Michael Golden

Michael brainstormed the majority of the ideas for the various structural and mechanical components of this project. There have been many proposed ideas for the camera casing as well as the camera support. These designs were initially hand drawn but were then transferred to CAD drawings in Visio. All CAD drawings in any presentations or papers were constructed by Michael.

Michael conducted the research into which materials are most effective for this aspect of the project and completed the necessary financial paperwork, including product requisition forms. When the group decided on a final optimal design, Michael ordered the various necessary parts and worked closely with the machine shop staff to construct all mechanical parts. The camera casing was constructed from material that was available in the machine shop while waiting for the 80/20 Inc. T-slotted aluminum to be shipped. Once the aluminum arrived, Michael cut the pieces to the desired lengths and constructed the support seen in Figure 2.11.

8.2 Khayriyyah Munir

Khayriyyah has taken on the majority of the administrative tasks associated with this project. There have been multiple reports due and presentations to give, to which Khayriyyah has devoted most of her efforts. This project also requires a lot of contact with various company representatives, aside from the client. She has been in contact with representatives at Sonosite to try to obtain specifications and dimensions of the probe. She has also set up meetings with various professors and graduates of UConn to gather more ideas about the best method for reconstructing two-dimensional images into a three-dimensional image. Khayriyyah plans to devote the majority of her future effort to 3D reconstruction research and programming.
This component of the project is quickly getting underway, especially now that the team members have LabVIEW on their personal computers and can test various third-party programs for compatibility and efficiency.

8.3 Omid Nasser Bigdeli

Omid has taken on as his main contribution the mathematical components of this project. This included determining various designs for the configuration of the spheres and their orientation to the probe. The members of the group decided together which of these various designs was optimal for the project. Omid also researched which cameras are suitable for our project and decided upon the best camera. He completed all necessary paperwork, including the purchase requisition form, and obtained the cameras. Omid has also explored various programming options, including image acquisition and recognition in MATLAB. It is expected that Omid will contribute significantly to the LabVIEW program for image acquisition and recognition.
9 Conclusion

This project is intended to ease the process of administering anesthesia to the brachial plexus of patients prior to surgery. Currently, anesthesiologists at Hartford Hospital use ultrasound technology with a Sonosite SLA probe. Our client, Dr. Joseph McIsaac, is the chief trauma anesthesiologist at Hartford Hospital and is sponsoring this project.

The design of the project is to produce a 3D image using the two-dimensional images produced by the ultrasound machine. A tracking pyramid consisting of three differently colored spheres will be attached to the ultrasound probe. Using two web cameras in conjunction with LabVIEW's image acquisition technology, we will take pictures of this tracking pyramid and calculate the position and movement of the probe using stereo triangulation equations. The images taken by each of these cameras will be synced with the images produced by the ultrasound machine to ensure that the ultrasound images used are only those that correspond with the movement of the probe. The 2D ultrasound images will be reconstructed into a 3D image using reconstruction software that is compatible with LabVIEW.

Using LabVIEW's Developer Suite, a stand-alone program will be made so that any user with a computer can make use of this device. We will provide the design and information pertaining to this project entirely free to the public so that others can make use of this technology as well. It is hoped that with such a low cost, even small clinics and hospitals that cannot afford the products which currently exist (priced above $100k) will be able to image in three dimensions what they now image in two.

10 References

[1] Sonosite Products: MicroMaxx Transducers, SLA Probe. <http://www.sonosite.com/products/micromaxx/transducers/>
[2] 80/20 Inc. Product Catalog. <http://www.8020.net/>
[3] Iocchi, Luca. Stereo Vision: Triangulation. 6 April 1998. <http://www.dis.uniroma1.it/~iocchi/stereo/triang.html>

11 Acknowledgements

Sponsor: We would like to acknowledge first Dr. Jay McIsaac for his proposal of this project. He has provided much guidance and plenty of ideas for the most successful route to take to complete this project in a timely manner.

Professor: We would like to acknowledge Dr. John Enderle for advising us in our process throughout this semester. His contribution has enabled us to adequately plan for and follow in
an effective manner our project guideline, while leaving room and expectations for adjustments as necessary.

Summer Interns: We would also like to acknowledge two Avon High School robotics team members, Taylor Amarel and Justin Yost, who contributed a significant amount of LabVIEW code for this project. They volunteered at Hartford Hospital in the summer of 2010 in the anesthesiology department, working specifically on coding for this project.

Machine Shop Engineers: We would like to acknowledge Peter Glaude and Serge Doyon for their assistance and guidance in designing and constructing all mechanical supports and devices necessary for this project.

Teaching Assistants: We would like to acknowledge Marek Wartenberg and Emily Jacobs for the lengthy hours they have contributed to assist us with the various components of this project, particularly the EKG project, which taught us various circuitry and LabVIEW skills.

BME Administrative Staff: We would like to thank Kerry Wenzler for ordering everything necessary for the project and notifying us when items arrive.
12 Appendix

12.1 Purchase Requisitions

PURCHASE ORDER REQUISITION - UCONN BME SENIOR DESIGN LAB
Instructions: Students are to fill out boxed areas with a white background. Each vendor requires a separate purchase requisition.

Date: 10/22/2010
Team #: 3
Student Name: Michael Golden
Total Expenses: $288.25
Ship to: University of Connecticut, Biomedical Engineering, U-2247, 260 Glenbrook Road, Storrs, CT 06269-2247
Attn: Project Sponsor
Project Name: 3D Ultrasound Reconstruction

Catalog #   Description                                   Unit   QTY   Unit Price   Amount
1010-145    1X1 BI-SLOT OPPOSITE T-SLOT 145" PROFILE      1      3     $33.35       $100.05
3321        1/4-20X.5 FBHSCS & TNUT                       1      75    $0.50        $37.50
3393        1/4-20X.5 BHSCS & TNUT                        1      20    $0.40        $8.00
4013        10S 6 HOLE INSIDE CORNER BRACKET              1      2     $5.20        $10.40
4112        10S 7 HOLE TEE JOINING PLATE FOR 1010         1      2     $8.15        $16.30
4141        10S 4 HOLE TEE JOINING PLATE                  1      2     $5.80        $11.60
4145        10S 4 HOLE 45 DEG. ANGLE JOINING PLATE        1      8     $5.10        $40.80
6415        10S SINGLE FLANGE LINEAR BEARING BRAKE KIT    1      2     $38.60       $77.20
6850        10S L-HANDLE LINEAR BRAKE                     1      2     $9.55        $19.10

Shipping: $0.00
Total: $320.95

Vendor: 80/20 Inc.
Address: 1701 South 400 East, Columbia City, IN 46725
Phone: 260.248.8030
PURCHASE ORDER REQUISITION - UCONN BME SENIOR DESIGN LAB

Date: September 14, 2010
Team #: 3
Student Name: Omid Nasser Bigdeli
Total Expenses: $0.00
Ship to: University of Connecticut, Biomedical Engineering, U-2247, 260 Glenbrook Road, Storrs, CT 06269-2247
Attn: Project Sponsor
Project Name: 3D Ultrasound Project

Catalog #   Description                Unit   QTY   Unit Price   Amount
            Logitech Webcam Pro 9000          2     $59.99       $119.98

Comments: http://www.amazon.com/Logitech-Webcam-9000-Built-Microphone/dp/B002M78ECK
Shipping: $0.00
Total: $119.98
