Tesseract is modernizing its OCR capabilities by cleaning up code, adding newer technologies such as OpenCL, and improving its classifier. OpenCL has accelerated Tesseract's processing pipeline by 36% so far. A plug-n-play classifier API allows new classifiers to be implemented, and Tesseract is exploring eliminating the polygonal approximation and the baseline-normalization requirement. An LSTM recognizer is also coming: it will be fully integrated, support 2-D inputs, and work with the existing visualization tools.
An introduction to Machine Learning for newcomers. It covers basic concepts such as supervised learning, unsupervised learning, classification, regression, under/overfitting, clustering, anomaly detection, and evaluation measures, and illustrates them with scikit-learn and TensorFlow code.
In this talk, I briefly look back at TensorFlow's development over the past year and introduce the features planned for development and adoption on TensorFlow's upcoming roadmap. I also discuss machine learning framework development trends and directions for 2017 and 2018.
Introduction to the new TensorFlow 2.x and the Coral AI Edge TPU hardware. The presentation introduces TensorFlow's main features, such as the Sequential and Functional APIs, mobile support with TensorFlow Lite, web support with TensorFlow.js, and Google Cloud support with TFX.
In addition, the presentation introduces the new Edge TPU architecture from Coral AI, including its main hardware features and a description of the compilation flow.
TensorFlow meetup: Keras - PyTorch - TensorFlow.js (Stijn Decubber)
Slides from the TensorFlow meetup hosted on October 9th at the ML6 offices in Ghent. Join our Meetup group for updates and future sessions: https://www.meetup.com/TensorFlow-Belgium/
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr... (Raffi Khatchadourian)
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code—supporting symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. Though hybrid approaches aim for the “best of both worlds,” using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution. We present our ongoing work on automated refactoring that assists developers in specifying whether and how their otherwise eagerly-executed imperative DL code could be reliably and efficiently executed as graphs while preserving semantics. The approach, based on a novel imperative tensor analysis, will automatically determine when it is safe and potentially advantageous to migrate imperative DL code to graph execution and modify decorator parameters or eagerly executing code already running as graphs. The approach is being implemented as a PyDev Eclipse IDE plug-in and uses the WALA Ariadne analysis framework. We discuss our ongoing work towards optimizing imperative DL code to its full potential.
Training at AI Frontiers 2018 - Lukasz Kaiser: Sequence to Sequence Learning ... (AI Frontiers)
Sequence to sequence learning is a powerful way to train deep networks, not only for machine translation and various NLP tasks but also for image generation and, recently, video and music generation. We will give a hands-on tutorial showing how to use the open-source Tensor2Tensor library to train state-of-the-art models for translation, image generation, and a task of your choice!
Techniques to optimize the PageRank algorithm usually fall into two categories: reducing the work per iteration, and reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices that have already converged can save iteration time. Skipping in-identical vertices (those with the same in-links) avoids duplicate computations and thus can also reduce iteration time. Road networks often contain chains that can be short-circuited before the PageRank computation, since the final ranks of chain nodes are easy to calculate; this can reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order, which can reduce the iteration time and the number of iterations, and also enables multi-iteration concurrency in the computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Adjusting primitives for graph: SHORT REPORT / NOTES (Subhajit Sahu)
Graph algorithms, like PageRank, commonly operate on Compressed Sparse Row (CSR), an adjacency-list based graph representation.
Multiply with different modes (map)
1. Performance of sequential vs OpenMP-based vector multiply.
2. Comparing various launch configs for CUDA-based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential vs OpenMP-based vector element sum.
2. Performance of memcpy-based vs in-place CUDA vector element sum.
3. Comparing various launch configs for CUDA-based vector element sum (memcpy).
4. Comparing various launch configs for CUDA-based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA-based vector element sum (in-place).
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value: better data-driven decisions and a higher return on their data investment.
2. Tesseract Tutorial: DAS 2014 Tours France
Code Cleanup
● Conversion to C++ completed
● Thread-compatibility: can run multiple instances in separate threads
● Multi-language (single and mixed) capability: copes with jpn, heb, hin, and mixes such as hin+eng
● Removed many hard-coded limits, e.g. on character set, dawgs
● New beam search
● ResultIterator for accessing the internal data
● More generic classifier API for plug-n-play experiments
● Removed lots of dead code, including the IMAGE class.
3. OpenCL activity (Partnership with AMD)
Motivation:
● Accelerated Processing Units (APUs) and integrated GPUs are now commonplace across desktop, mobile, and server platforms. GPUs have ~10x the compute capability of CPUs.
● OpenCL is an open, platform-independent parallel programming standard that exploits GPU capabilities.
● The Tesseract processing pipeline consists of several parallel-friendly steps, mainly in image preprocessing and character/word recognition.
● Hence Tesseract OCR processing has been accelerated using OpenCL.
4. OpenCL Progress (Data provided by AMD team)
Speed-up so far 36%, with the class pruner still to do!
Times shown in seconds, on a 6-page color business letter, on an AMD A10-7850K Kaveri machine with OpenCL 1.2.
ClassPruner, feature extraction, and MasterMatcher still to do. OpenCL 2.0 coming.
Module                            Native CPU    OpenCL   Speed-up ratio
Tiff Image Read + color mapping        3.837     2.546    1.507
RGB to gray                            5.157     0.691    7.463
Histogram                              4.423     0.384   11.518
Thresholding                           5.474     0.268   20.425
Morphology (layout analysis)           4.950     0.323   15.325
Master Matcher (knn classifier)       25.954     9.249    2.806
Other (work in progress)              74.209    76.626    0.968
Total                                124.004    91.168    1.360
5. Plug-n-Play Classifier API (Highlights)
class ShapeClassifier {
  ...
  virtual int UnicharClassifySample(const TrainingSample& sample, Pix* page_pix,
                                    int debug, UNICHAR_ID keep_this,
                                    GenericVector<UnicharRating>* results);
  virtual int ClassifySample(const TrainingSample& sample, Pix* page_pix,
                             int debug, UNICHAR_ID keep_this,
                             GenericVector<ShapeRating>* results);
};
● Either UnicharClassifySample or ClassifySample may be implemented.
● The difference is whether results are returned as UNICHAR_IDs or indirected through a ShapeTable.
6. Input data for Classifier
class TrainingSample : public ELIST_LINK {
  Pix* GetSamplePix(int padding, Pix* page_pix) const;
  UNICHAR_ID class_id() const;
  int font_id() const;
  const INT_FEATURE_STRUCT* features() const;
  const MicroFeature* micro_features() const;
  ...
};
Provides access to everything from an image of the character through to the current Tesseract features.
7. Algorithmic Modernization
● Cube added a convolutional NN, but the improvement was disappointing.
● It would be nice to eliminate the polygonal approximation…
● It would be nice to eliminate the need for accurate baseline normalization.
=> New classifier experiments
8. Cube
● Standard convolutional neural network character classifier.
● Uses over-segmentation and (a different) beam search for character segmentation.
● Implemented as an alternative word recognizer.
● Achieves about 50% reduction in errors on Hindi when run together with Tesseract's word recognizer.
● Improvement for other languages disappointing: a few % for a major slow-down (3x).
● Training code unmaintained, unused.
9. Eliminate the Polygonal Approximation?
Pixel edge step outlines -> Polygonal approximation
10. Challenge: Eliminate the Polygonal Approximation!
● Allow feature extraction from greyscale by eliminating the dependence on the polygonal approximation.
● Make it faster and more accurate for CJK, Indic, and OSD.
● Keep everything else as constant as possible:
○ Same feature definition
○ Same segmentation search/word recognizer
○ Same training data, but add scope for significantly more fonts.
11. Tesseract Tutorial: DAS 2014 Tours France
Idea 1: Modernize. Apply boosting to Class Pruner
Result:
● Improves accuracy on training data.
● Generalization hard to achieve.
● Spreading difficult to incorporate into the boosting framework.
12. Idea 2: Pure Logic is Superior to Emotion, Captain
Q: If Tesseract is a kNN classifier, why doesn't it achieve 100% accuracy on its training data?
A: It clusters its training samples (by font) and uses only the cluster means => any individual sample may not be nearest to the correct cluster center.
Q: How can we achieve 100% accuracy on the training data?
A: Pure logic. OR together all the training samples in the class pruner.
Q: Doesn't that make it accept anything as anything?
A: Not with proper shape clustering.
13. Binary Pruner in Action
[Figure: quantized features matched against multiple fonts vs. against a single font]
14. How about Generalization?
● Early experiments showed the BinaryPruner less accurate on unseen fonts when trained on 200 fonts than the ClassPruner trained on 32 fonts.
● Adding a 'halo' classifier working on the near neighbors of the training samples fixed the problem - at a cost:
Character subst errs   -2.5% (UNLV)
Flat word drops        -0.15%
Bulk drops             +2.3%
CPU                    +110%
15. Non-Obvious observation:
Despite being designed over 20 years ago, the current Tesseract classifier is incredibly difficult to beat with so-called modern methods (without cheating by changing features or upping the number of training fonts).
Why?
16. Statistical Dependence Strikes Again
● Small-features-to-large-proto matching accounts well for large-scale features.
● Binary quantization of the feature space is a loser.
● HMMs, DBNs and LSTMs have a better chance than other methods.
17. LSTM Integration
● LSTM code based on the OCRopus Python implementation.
● Expanded capabilities, including 2-D operation and input of Tesseract features as well as images.
● Fully integrated with Tesseract at the line level. (May change)
● Visualization with the existing Viewer API.
● Training code included (unlike Cube).
● Parallelized with OpenMP.
● Likely will get an OpenCL implementation too.
● Coming in the next release of Tesseract.