First Fare 2011: Effectively Using the Camera
 

    Presentation Transcript

    • Dennis C. Erickson ~ Senior Mentor for Teams 1510 and 2898
    • Using one or more video cameras on a robot allows for target finding, direction to the target, and distance to the target. Cameras can use “masking” and/or “shape recognition”.
    • Object (trailer) detection
      • Use to find a trailer and determine friend or foe
      • Use to determine distance and direction to the flag (good distance measurement from 2 feet and up)
      • Can be used with servos to pan and tilt the camera to extend the visual bounds
      • Masking provides an image of only the flag (other defined objects can be masked as well)
    • Note an example of masking that was used in the 2009 competition. The “Flag Location” gauge shows not only the direction to the target but also the relative size of the target, which is used to determine the distance from the robot.
    • Viewing a target with a defined mathematical representation: in the 2010 competition, software was designed to find a “bulls-eye” located above the goal. The LabVIEW code on this slide shows how the video image is analyzed; the SubVI returns the target information, including the position, the score, and, from the minor-to-major radius ratio, the angle offset (see the ellipse sketch after this transcript).
    • Viewing a target using color, threshold, filtering, and particle analysis. Note the three poles with retroreflective tape.
    • Illuminating the scene with a red LED array shows only the tape.
    • Particle analysis “discovers” the tape, and software determines the position of each piece of tape (a threshold-and-particle-analysis sketch follows this transcript).
    • In 2011 the goal was to launch a small robot when the light came on at the bottom of the pole. The camera could have sensed the light much faster and automatically launched the robot before a human could. If the camera software failed, the human would be the backup (a light-detection sketch follows this transcript).
    • One other method used to locate a target is to use frame grabbing to compare a known image (a complex shape) with a “stored” replica of the target. This might include a face, a ball, a hockey puck, etc. This might be used in a future game, as it seems that cameras are playing an increasingly large part in game play (a template-matching sketch follows this transcript).
    • Dennis C. Erickson - dcerickson1@comcast.net
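
The masking, thresholding, and particle-analysis slides above describe a LabVIEW / NI Vision pipeline, which is graphical and not reproduced here. The following is a minimal sketch of the same idea in Python with OpenCV: color-threshold the image, run blob (“particle”) analysis, and derive a direction and a size-based distance for each detected piece of tape. The HSV range, target width, and focal length are illustrative assumptions, not values from the presentation.

    import cv2
    import numpy as np

    # Illustrative constants -- not from the presentation.
    TAPE_HSV_LO = (0, 120, 120)     # lower HSV bound for red-illuminated tape
    TAPE_HSV_HI = (15, 255, 255)    # upper HSV bound
    TARGET_WIDTH_FT = 2.0           # assumed real-world width of the target
    FOCAL_LENGTH_PX = 540.0         # assumed focal length in pixels, from calibration

    def find_targets(frame_bgr):
        """Threshold the frame, run particle (blob) analysis, and return
        (direction, distance) estimates for each detected piece of tape."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, TAPE_HSV_LO, TAPE_HSV_HI)          # masking step
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)    # particle analysis
        results = []
        half_width = frame_bgr.shape[1] / 2.0
        for c in contours:
            if cv2.contourArea(c) < 50:                  # filter out specks of noise
                continue
            x, y, w, h = cv2.boundingRect(c)
            cx = x + w / 2.0
            direction = (cx - half_width) / half_width   # -1 (left) .. +1 (right)
            distance = TARGET_WIDTH_FT * FOCAL_LENGTH_PX / w   # pinhole-model estimate
            results.append((direction, distance))
        return results

The distance line is the same “relative size of the target” idea as the 2009 Flag Location gauge: under a pinhole-camera model, apparent width shrinks in proportion to distance.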
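The 2010 bulls-eye slide mentions getting an angle offset from the minor-to-major radius ratio. One common reading of that step is that a circular target viewed off-axis projects to an ellipse whose axis ratio approximates the cosine of the viewing angle; the sketch below shows that interpretation, not the actual SubVI. The input is assumed to be a binary mask of the bulls-eye.

    import cv2
    import numpy as np

    def bullseye_angle_offset(mask):
        """Fit an ellipse to the largest blob and estimate the viewing-angle
        offset from the minor/major axis ratio (ratio ~ cos(angle))."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        blob = max(contours, key=cv2.contourArea)
        if len(blob) < 5:                    # fitEllipse needs at least 5 points
            return None
        (cx, cy), (axis_a, axis_b), _rot = cv2.fitEllipse(blob)
        ratio = min(axis_a, axis_b) / max(axis_a, axis_b)
        angle_offset_deg = float(np.degrees(np.arccos(np.clip(ratio, 0.0, 1.0))))
        return (cx, cy), ratio, angle_offset_deg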
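For the 2011 deployment-light slide, a camera-based trigger only needs to watch a small region of the image for a brightness jump. This sketch assumes a fixed region of interest and a fixed trigger level; both values are hypothetical, and the human driver remains the backup if the check never fires.

    import cv2
    import numpy as np

    LIGHT_ROI = (300, 220, 40, 40)     # hypothetical x, y, w, h around the pole light
    BRIGHTNESS_TRIGGER = 180.0         # hypothetical mean-intensity threshold

    def light_is_on(frame_bgr):
        """Return True once the mean brightness inside the ROI exceeds the trigger."""
        x, y, w, h = LIGHT_ROI
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return float(np.mean(gray[y:y + h, x:x + w])) > BRIGHTNESS_TRIGGER

Called once per camera frame in the robot loop, a True result would command the launch automatically, faster than a human reaction, while the driver keeps a manual override.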
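The last technique described, comparing grabbed frames against a “stored” replica of a complex shape, corresponds to template matching. This sketch uses OpenCV’s normalized cross-correlation; the file name and score threshold are placeholders, not values from the presentation.

    import cv2

    TEMPLATE = cv2.imread("stored_target.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
    MATCH_THRESHOLD = 0.8                                             # placeholder score

    def find_stored_shape(frame_gray):
        """Slide the stored replica over the frame; return the best match location
        if its normalized correlation score clears the threshold, else None."""
        scores = cv2.matchTemplate(frame_gray, TEMPLATE, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(scores)
        return best_loc if best_score >= MATCH_THRESHOLD else None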