
2D Thinning


We present a concurrent implementation of a powerful topological thinning operator. This operator acts directly on grayscale images without modifying their topology. We introduce a new parallelization methodology, combining the SDM strategy with a thread-coordination basis, which allows efficient parallelism for a large class of topological operators, including skeletonization, crest restoring, 2D and 3D object smoothing, and the watershed. The work distributed during the thinning process is carried out by a varying number of threads. Tests on 2D grayscale images (512×512), using a shared-memory parallel machine (SMPM) equipped with an octo-core processor (Xeon E5405 running at 2 GHz), showed a speed-up of 6.2, with a maximum throughput of 125 images/s using 8 threads.
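As a quick sanity check on the reported figures, the throughput and speed-up together imply a per-image sequential time. The sketch below is only back-of-the-envelope arithmetic on the numbers in the abstract (assumed exact), not part of the authors' implementation:

```cpp
// Reported: 125 images/s with 8 threads, speed-up of 6.2 over sequential code.
constexpr double kThroughput8Threads = 125.0;  // images per second, 8 threads
constexpr double kSpeedup = 6.2;
constexpr int    kThreads = 8;

// Time to process one image with 8 threads, in seconds.
constexpr double parallel_time() { return 1.0 / kThroughput8Threads; }

// Implied sequential time per image, since speed-up = serial / parallel.
constexpr double implied_serial_time() { return parallel_time() * kSpeedup; }

// Parallel efficiency: speed-up divided by the number of threads.
constexpr double efficiency() { return kSpeedup / kThreads; }
```

Under these assumptions the operator spends 8 ms per image with 8 threads, implying roughly 49.6 ms sequentially and a parallel efficiency of about 77.5%.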

Published in: Education, Technology


  1. 2D Parallel Thinning Algorithm<br />Based on topological operator<br />R. MAHMOUDI – A3SI LAB.
  2. Summary<br />Parallel thinning operator<br />Future work<br />Discussion
  3. Parallel thinning operator<br />Future work<br />Discussion<br />R. MAHMOUDI – A3SI Laboratory – April 2009
  4. Parallel thinning operator<br />1. Theoretical background<br />A filtered thinning method that selectively simplifies the topology, based on a local contrast parameter λ.<br />(a) After Deriche gradient operator. (b) Filtered skeleton with λ = 10.
  5. Parallel thinning operator<br />1. Parallelization strategy (1)<br />Define search area<br />Start parallel characterization<br />Create new shared data structure<br />End parallel characterization<br />Merge modified search areas<br />Restart process until stability
  6. Parallel thinning operator<br />1. Parallelization strategy (2)<br />SDM-Strategy (divide-and-conquer principle)<br />Up level: DATA PARALLELISM<br />MIXED PARALLELISM<br />Down level: THREAD PARALLELISM
  7. Parallel thinning operator<br />1. Parallelization strategy (3)
  8. Parallel thinning operator<br />2. Coordination of threads (1)<br />First implementation using a lock-based shared FIFO queue: a thread whose Lock() attempt fails is blocked until the holder calls Unlock(), after which its Push() succeeds.
  9. Parallel thinning operator<br />2. Coordination of threads (2)<br />Second implementation using a private-shared concurrent FIFO queue: each thread locks and enters the semaphore, performs its Push(), then unlocks and leaves the semaphore.
  10. Parallel thinning operator<br />3. Performance testing (1)
  11. Parallel thinning operator<br />3. Performance testing (2)<br />First implementation using a lock-based shared FIFO queue.
  12. Parallel thinning operator<br />3. Performance testing (3)<br />Second implementation using a private-shared concurrent FIFO queue.
  13. Parallel thinning operator<br />4. Conclusion<br />1. Non-specific nature of the proposed parallelization strategy.<br />2. Thread coordination and communication during computation rely on parallel reads/writes for managing cache-resident data.
  14. Parallel thinning operator<br />Future work<br />Discussion
  15. Future work<br />1. Extension<br />Imbricate two operators under the SDM-Strategy: the parallel thinning operator and crest restoring.<br />Goals: performance enhancement (speed-up), efficiency (work distribution), cache-miss reduction.
  16. Future work<br />2. New parallel topological watershed<br />A parallel watershed operator under the SDM-Strategy: performance enhancement (speed-up), efficiency (work distribution), cache-miss reduction.<br />Achievement: 80%.
  17. Parallel thinning operator<br />Future work<br />Discussion
  18. Discussion<br />Introduce a future programming model (make it easy to write programs that execute efficiently on highly parallel computing systems).<br />Introduce a new “draft” to design and evaluate parallel programming models (instead of old benchmarks).<br />To maximize programmer productivity, a future programming model must be more human-centric (than the conventional focus on hardware or applications).
  19. More details: www.mramzi.net
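To make the λ-filtering idea on slide 4 concrete, here is a minimal illustrative sketch, not the authors' operator: a pixel is a candidate for lowering only while its local contrast stays below λ, so weak topological features are simplified and strong crests are preserved. The 4-neighbourhood contrast test is an assumption chosen for brevity:

```cpp
#include <vector>
#include <cstdlib>

// Hypothetical λ-filter test: a pixel (x, y) of a w-wide grayscale image is
// "lowerable" when the contrast against its brightest 4-neighbour is < λ.
// The real filtered thinning operator uses a topological criterion as well.
bool lowerable(const std::vector<int>& img, int w, int x, int y, int lambda) {
    int p = img[y * w + x];
    int max_nbr = 0;
    const int dx[] = {1, -1, 0, 0};
    const int dy[] = {0, 0, 1, -1};
    for (int k = 0; k < 4; ++k) {
        int q = img[(y + dy[k]) * w + (x + dx[k])];
        if (q > max_nbr) max_nbr = q;
    }
    return std::abs(max_nbr - p) < lambda;  // filtered: crests ≥ λ survive
}
```

With λ = 10 a pixel of value 5 next to a crest of value 9 is lowerable (contrast 4), while with λ = 3 it is kept.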
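The six steps on slide 5 (define search area, characterize in parallel, merge, restart until stability) can be sketched as a generic stability loop. All names below are illustrative; the placeholder "lowerable" test stands in for the real topological characterization:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Hypothetical sketch of the slide-5 strategy: split the search area among
// threads, characterize each chunk in parallel, join (merge), and repeat
// until no thread modified anything.
void thin_until_stable(std::vector<int>& image, int num_threads) {
    std::atomic<bool> changed{true};
    while (changed.load()) {                  // restart process until stability
        changed = false;
        std::vector<std::thread> workers;
        const std::size_t chunk = image.size() / num_threads;
        for (int t = 0; t < num_threads; ++t) {
            workers.emplace_back([&, t] {
                // Each thread owns a disjoint search area.
                std::size_t lo = t * chunk;
                std::size_t hi = (t == num_threads - 1) ? image.size()
                                                        : lo + chunk;
                for (std::size_t i = lo; i < hi; ++i) {
                    if (image[i] > 0) {       // placeholder "lowerable" test
                        --image[i];
                        changed = true;
                    }
                }
            });
        }
        for (auto& w : workers) w.join();     // merge modified search areas
    }
}
```

Because the chunks are disjoint, the only shared write is the atomic `changed` flag, which mirrors the "create new shared data structure" step.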
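The first thread-coordination scheme on slide 8 is a single FIFO guarded by one lock. A minimal sketch of that pattern, assuming `std::mutex` in place of whatever primitive the original code used:

```cpp
#include <mutex>
#include <queue>
#include <optional>
#include <utility>

// Lock-based shared FIFO queue (slide 8 pattern): every Push() and pop takes
// the same lock, so a contending thread stays blocked until Unlock().
template <typename T>
class LockedQueue {
    std::queue<T> fifo_;
    std::mutex lock_;
public:
    void push(T v) {
        std::lock_guard<std::mutex> g(lock_);  // Lock(); blocks on contention
        fifo_.push(std::move(v));              // Push()
    }                                          // Unlock() on scope exit

    std::optional<T> pop() {
        std::lock_guard<std::mutex> g(lock_);
        if (fifo_.empty()) return std::nullopt;
        T v = std::move(fifo_.front());
        fifo_.pop();
        return v;
    }
};
```

The single lock serializes all accesses, which is exactly the contention the second implementation tries to reduce.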
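One plausible reading of the private-shared queue on slide 9, sketched under stated assumptions rather than from the original code: each thread batches pushes into a private buffer and only takes the shared gate (standing in for the slide's semaphore) to flush the batch, shortening the critical section:

```cpp
#include <mutex>
#include <queue>
#include <vector>
#include <utility>

// Hypothetical private-shared concurrent FIFO queue: producers stage items
// locally (no locking), then enter the gate once per batch to publish them.
template <typename T>
class PrivateSharedQueue {
    std::queue<T> shared_;
    std::mutex gate_;               // stands in for the semaphore on the slide
public:
    class Producer {
        PrivateSharedQueue& owner_;
        std::vector<T> local_;      // private buffer, touched by one thread
    public:
        explicit Producer(PrivateSharedQueue& q) : owner_(q) {}
        void push(T v) { local_.push_back(std::move(v)); }
        void flush() {              // enter gate, drain the batch, leave gate
            std::lock_guard<std::mutex> g(owner_.gate_);
            for (auto& v : local_) owner_.shared_.push(std::move(v));
            local_.clear();
        }
    };

    bool pop(T& out) {
        std::lock_guard<std::mutex> g(gate_);
        if (shared_.empty()) return false;
        out = std::move(shared_.front());
        shared_.pop();
        return true;
    }
};
```

Items become visible to consumers only after `flush()`, so the shared lock is taken once per batch instead of once per push.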
