NUS-ISS Learning Day 2019 - Pandas in the Cloud, by NUS-ISS
Presented by Mr Lee Chuk Munn, Chief, StackUp Programme, and Mr Prasanna Veerapandi (Bala), Associate Lecturer & Consultant, Software Systems Practice, NUS-ISS, at NUS-ISS Learning Day 2019
This document discusses Python documentation tools including docstrings, pydoc, IPython, doctest, and Sphinx. Docstrings provide documentation for modules, classes, and methods and can be accessed via the __doc__ attribute. Pydoc generates documentation from docstrings. IPython provides an enhanced interactive Python shell. Doctests embed examples in docstrings to test documentation. Sphinx can generate documentation from docstrings and external files in multiple formats.
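As a quick illustration of the doctest idea described above, a minimal sketch (not taken from the slides): examples embedded in a docstring double as tests.

```python
def square(n):
    """Return n squared.

    The examples below are executable documentation: doctest runs
    each >>> line and compares the printed result.

    >>> square(3)
    9
    >>> square(-2)
    4
    """
    return n * n

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # silent when every docstring example passes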
TensorFlow Dev Summit 2018 Extended: TensorFlow Eager Execution, by Taegyun Jeon
TensorFlow's eager execution allows running operations immediately without building graphs. This makes debugging easier and improves the development workflow. Eager execution can be enabled with tf.enable_eager_execution(). Common operations like variables, gradients, control flow work the same in eager and graph modes. Code written with eager execution in mind is compatible with graph-based execution for deployment. Eager execution provides benefits for iteration and is useful alongside TensorFlow's high-level APIs.
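To make the graph-versus-eager distinction concrete, here is a toy contrast in plain Python (deliberately not the TensorFlow API): the "graph" style records operations to run later, while the "eager" style computes each value the moment it is written.

```python
# Graph-like: operations are recorded as closures and only executed
# during an explicit "run" step, so intermediate values are not
# inspectable while the computation is being described.
graph = []
graph.append(lambda env: env.__setitem__("a", 2 + 3))
graph.append(lambda env: env.__setitem__("b", env["a"] * 4))

env = {}
for op in graph:
    op(env)          # nothing is computed until this "session run"

# Eager: each line computes immediately, which is why debugging with
# ordinary prints and breakpoints works naturally.
a = 2 + 3
b = a * 4

print(env["b"], b)   # both styles arrive at 20
```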
Network Analysis with networkX: Real-World Example-1, by Kyunghoon Kim
This document discusses various natural language processing and network analysis techniques including morphological analysis using MeCab, replacing Windows installation with RESTful web services, using online morpheme analyzers, lambda functions in Python, ordered dictionaries, and practicing text ranking algorithms. It provides code examples of using these techniques and links to additional resources.
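Two of the Python features mentioned above, lambda functions and ordered dictionaries, combine naturally when ranking tokens by frequency. A small sketch (the word counts here are made up for illustration):

```python
from collections import OrderedDict

# Hypothetical word frequencies from a token stream.
freq = {"graph": 3, "node": 7, "edge": 5}

# The lambda supplies the sort key: order by count, highest first.
# OrderedDict then preserves that ranking on iteration.
ranked = OrderedDict(sorted(freq.items(), key=lambda kv: kv[1], reverse=True))

print(list(ranked))  # most frequent token comes first
```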
This document provides information about priority queues and binary heaps. It defines a binary heap as a nearly complete binary tree where the root node has the maximum/minimum value. It describes heap operations like insertion, deletion of max/min, and increasing/decreasing keys. The time complexity of these operations is O(log n). Heapsort, which uses a heap data structure, is also covered and has overall time complexity of O(n log n). Binary heaps are often used to implement priority queues and for algorithms like Dijkstra's and Prim's.
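The priority-queue-on-a-binary-heap pattern described above is available directly in Python's standard library. A minimal sketch using `heapq`, which maintains a min-heap on a plain list:

```python
import heapq

# (priority, task) tuples: the tuple comparison keeps the smallest
# priority value at the front of the heap.
pq = []
heapq.heappush(pq, (2, "write report"))   # O(log n) insert
heapq.heappush(pq, (1, "fix outage"))
heapq.heappush(pq, (3, "tidy backlog"))

print(heapq.heappop(pq))  # O(log n) delete-min: lowest number first
```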
The document describes the k-means++ seeding algorithm for initializing k-means clustering. It presents the k-means++ algorithm, provides an implementation in MLDemos, and evaluates it on test and real datasets. The results show k-means++ yields a significant reduction in clustering error compared to random initialization, providing better separation of clusters. However, the document also notes there are many seeding techniques and some may work better than k-means++ for certain datasets.
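The k-means++ seeding rule summarized above is compact enough to sketch directly. This is an illustrative 2-D implementation, not the MLDemos code from the slides:

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """Choose k initial centers: the first uniformly at random, each
    subsequent one with probability proportional to its squared
    distance from the nearest center already chosen."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest chosen center.
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
              for px, py in points]
        # Weighted draw: far-away points are more likely to be picked,
        # which is what spreads the initial centers apart.
        centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers
```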
The document discusses heap sort, which is a sorting algorithm that uses a heap data structure. It works in two phases: first, it transforms the input array into a max heap using the insert heap procedure; second, it repeatedly extracts the maximum element from the heap and places it at the end of the sorted array, reheapifying the remaining elements. The key steps are building the heap, processing the heap by removing the root element and allowing the heap to reorder, and doing this repeatedly until the array is fully sorted.
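The two phases above can be sketched as follows; note this version uses the common bottom-up heap construction rather than the repeated-insert build the summary mentions:

```python
def heapsort(a):
    """In-place heapsort: build a max heap, then repeatedly swap the
    root (current maximum) to the end and re-heapify the prefix."""
    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                      # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    # Phase 1: build the max heap bottom-up.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    # Phase 2: move the max to the sorted tail, shrink, re-heapify.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a
```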
Network Analysis with networkX: Real-World Example-2, by Kyunghoon Kim
This document discusses using a morpheme analyzer tool called Umorpheme to analyze Korean language text. It explains how to install Umorpheme using pip and provides sample code to analyze a sentence in Korean and output the morphological analysis including part of speech tags. The document encourages practicing with TextRank, a technique for extracting key phrases from text.
The document discusses priority queues and binary heaps. It explains that priority queues store tasks based on priority level and ensure the highest priority task is at the head of the queue. Binary heaps are the underlying data structure used to implement priority queues. The key operations on a binary heap are insert and deleteMin. Insert involves adding an element and percolating it up the heap, while deleteMin removes the minimum element and percolates the replacement down. Both operations have O(log n) time complexity. The document provides examples and pseudocode for building a heap from a list of elements in O(n) time using a buildHeap method.
Heap data structures can be used for sorting and memory management. Heapsort uses a max heap to sort an array by repeatedly replacing the root with the last element and heapifying the reduced heap. Heaps are also used to manage memory dynamically by allocating and resizing memory blocks on the heap using functions like malloc() and realloc(). Priority queues, which can be implemented efficiently using binary heaps, are used for applications that require fast retrieval of the highest or lowest priority element, such as scheduling tasks.
[4DEV][Łódź] Ivan Vaskevych - InfluxDB and Grafana fighting together with IoT..., by PROIDEA
They promise that IoT (Internet of Things) will conquer the world. But what will tackle billions of bytes that flow into our servers every hour?
First released in 2013, InfluxDB is used by eBay, Cisco, IBM and other big companies. It's production-proven time-series storage.
During this talk we're going to get acquainted with it and see how InfluxDB can help to solve your problems.
We’ll see how to quickly install it on Amazon Web Services platform and how it scales.
And for dessert, we're going to draw pretty Grafana graphs from InfluxDB data.
The document discusses heaps and heapsort. It defines max heaps and min heaps as complete binary trees where each node's key is greater than or less than its children's keys. It describes operations on heaps like insertion, deletion of the max/min element, and creation of an empty heap. Algorithms for insertion and deletion into max heaps are provided. Heapsort is described as building a max heap of the input array and then repeatedly extracting the max element to sort the array.
The document discusses different types of heaps and heap algorithms. It describes binary min-heaps and max-heaps, including their properties and implementations using arrays. Basic heap operations like insert, delete, and build heap are explained along with their time complexities. Applications of heaps like priority queues and selection algorithms are covered. More advanced heap types like leftist heaps, skew heaps and binomial queues are also mentioned.
For textbook solutions, refer to https://pythonxiisolutions.blogspot.com/
For practicals' solutions, refer to https://prippython12.blogspot.com/
A library is a collection of modules that together cater to a specific type of need; the smaller, manageable units are the modules themselves, many of which ship in Python's standard library. Modularity reduces complexity to some extent. A package is a directory that contains sub-packages and modules, along with an __init__.py file.
Scala Style by Adform Research (Saulius Valatka), posted by Vasil Remeniuk
This document summarizes feedback from a code review session, organized into the following sections: project structure, tests, naming conventions, public APIs, the CumulativePrefixTree data structure, readability, types, catching throwables in Vertica UDFs, and potential topics for future Scala sessions. The reviewer provides suggestions on improving test names, naming conventions, using case classes, reducing tuples, catching failures instead of throwables, and emphasizing types over utils to improve code quality, readability and idiomatic Scala.
The document discusses metrics and monitoring concepts, including the need for standardized, self-describing metrics to make them easier to understand, query, and work with. It provides examples of implementations that aim to structure metrics according to these principles by including descriptive tags and metadata alongside time-series data. The conclusion advocates adopting a "metrics 2.0" approach to gain benefits like reducing the manual effort required to interpret and use metrics for tasks like debugging and alerting.
introduction to Python by Mohamed Hegazy , in this slides you will find some code samples , these slides first presented in TensorFlow Dev Summit 2017 Extended by GDG Helwan
Heap sort uses a heap data structure that maintains the max-heap or min-heap property. It involves two main steps: 1) building the heap from the input array using the BUILD-MAX-HEAP procedure in O(n) time, and 2) repeatedly extracting the maximum/minimum element from the heap and inserting it into the sorted portion using the DELHEAP procedure, running in O(n log n) time overall. The key operation is MAX-HEAPIFY, which maintains the max-heap property in O(log n) time during heap operations like insertion and deletion.
The document discusses the heap sort algorithm, which has two steps: 1) Build a max heap from the input data in O(n) time by transforming it into a complete binary tree that satisfies the heap property. 2) Perform n deleteMax operations to extract the maximum element from the heap and place it in the sorted output, at O(log n) time per operation for an overall time of O(n log n).
This document describes heap data structures and algorithms like heap sort. It defines a max heap and min heap. It explains the build heap, heapify, insertion and deletion algorithms. Build heap transforms an array into a max heap by applying heapify to each node from bottom to top. Heapify maintains the heap property when a node is added or removed. Heap sort works by building a max heap from the input array and then extracting elements from the root to sort the array in descending order.
This document provides an overview of stacks as a data structure. It defines stacks as linear structures that store data in a last-in, first-out manner. Key points covered include common stack operations like push and pop, complexity analysis, examples of where stacks are used, and C++ code for implementing a stack class with methods like push, pop, peek, and isEmpty.
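The stack interface described above (the slides show it in C++) translates directly to a few lines of Python; a comparable sketch with the same operations:

```python
class Stack:
    """Last-in, first-out container backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)      # O(1) amortized

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()      # O(1): removes the last-pushed item

    def peek(self):
        if self.is_empty():
            raise IndexError("peek at empty stack")
        return self._items[-1]

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # last in, first out
```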
MBrace is a programming model and cluster infrastructure for large-scale distributed computing inspired by F# asynchronous workflows. It provides a declarative way to compose cloud computations using a monadic programming model. It runs on .NET and provides fault tolerance, elasticity, and multitasking capabilities. Performance tests on Azure showed MBrace can perform comparably to Hadoop for algorithms like distributed grep and k-means clustering.
MBrace is a programming model and cluster infrastructure for effectively defining and executing large scale computation in the cloud. Based on the .NET framework, it builds upon and extends F# asynchronous workflows.
https://skillsmatter.com/skillscasts/5157-mbrace-large-scale-distributed-computation-with-f
Heap Sort in Design and Analysis of Algorithms, by samairaakram
A brief description of heap sort and its variants. It covers binary trees and their types, the analysis and algorithm of heap sort, and a comparison between heap, quick, and merge sort.
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B..., by Databricks
We all know what they say – the bigger the data, the better. But when the data gets really big, how do you mine it and what deep learning framework to use? This talk will survey, with a developer’s perspective, three of the most popular deep learning frameworks—TensorFlow, Keras, and PyTorch—as well as when to use their distributed implementations.
We’ll compare code samples from each framework and discuss their integration with distributed computing engines such as Apache Spark (which can handle massive amounts of data) as well as help you answer questions such as:
As a developer how do I pick the right deep learning framework?
Do I want to develop my own model or should I employ an existing one?
How do I strike a trade-off between productivity and control through low-level APIs?
What language should I choose?
In this session, we will explore how to build a deep learning application with TensorFlow, Keras, or PyTorch in under 30 minutes. After this session, you will walk away with the confidence to evaluate which framework is best for you.
Certification Study Group - Professional ML Engineer Session 2 (GCP-TensorFlow..., by gdgsurrey
What We Will Discuss:
Reviewing progress in the machine learning certification journey
Special Addition - Lightning talk on Training an AI Voice Conversion Model Using Google Colab by Adam Berg
Content Review by Vasudev Maduri
Data Preparation and Processing
Solution Architecture with TensorFlow Extended (TFX)
Data Ingestion Challenges and Solutions
Sample Question Review
Previewing next steps and topics, including course completions and material reviews.
Terraform modules provide reusable, composable infrastructure components. The document discusses restructuring infrastructure code into modules to make it more reusable, testable, and maintainable. Key points include:
- Modules should be structured in a three-tier hierarchy from primitive resources to generic services to specific environments.
- Testing modules individually increases confidence in changes.
- Storing module code and versions in Git provides versioning and collaboration.
- Remote state allows infrastructure to be shared between modules and deployments.
Boosting machine learning workflow with TensorFlow 2.0, by Jeongkyu Shin
TensorFlow 2.0 is the latest release, aimed at user convenience, API simplicity, and scalability across multiple platforms. Along with a variety of new projects in the TensorFlow ecosystem (TFX, TF-Agents, and TF Federated), it can help you quickly and easily create a wide variety of machine learning models in more environments. This talk introduces TensorFlow 2.0 and discusses how to develop and optimize machine learning workflows based on TensorFlow 2.0 and projects within the wider TensorFlow ecosystem.
This slide was presented at GDG DevFest Songdo on November 30, 2019.
The document discusses setting up and using Keras and TensorFlow libraries for machine learning. It provides instructions on installing the libraries, preparing data, defining a model with sequential layers, compiling the model to configure the learning process, training the model on data, and evaluating the trained model on test data. A sample program is included that uses a fashion MNIST dataset to classify images into 10 categories using a simple sequential model.
This document provides an overview of running an image classification workload using IBM PowerAI and the MNIST dataset. It discusses deep learning concepts like neural networks and training flows. It then demonstrates how to set up TensorFlow on an IBM PowerAI trial server, load the MNIST dataset, build and train a basic neural network model for image classification, and evaluate the trained model's accuracy on test data.
This talk covers native compilation technology: what it is and why it is required.
It also shows how this technology can be applied to compile tables and procedures, achieving considerable performance gains with minimal changes.
These are the slides used by Kumar Rajeev Rastogi of Huawei for his presentation at pgDay Asia 2016, where he presented a great idea about native compilation to improve CPU efficiency.
From JVM to .NET languages, from minor coding idioms to system-level architectures, functional programming is enjoying a long overdue surge in interest. Functional programming is certainly not a new idea and, although not apparently as mainstream as object-oriented and procedural programming, many of its concepts are also more familiar than many programmers believe. This talk examines functional and declarative programming styles from the point of view of coding patterns, little languages and programming techniques already familiar to many programmers.
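The familiar-patterns point above is easy to see in code: a declarative filter/map/fold pipeline reads like a specification rather than a loop. A small sketch (the order amounts and tax rate are invented for illustration):

```python
from functools import reduce

orders = [12.5, 3.0, 40.0, 7.25]

# Functional style: no mutation, no index bookkeeping. Keep orders of
# at least 5.0, apply a 7% tax, then fold the results into a total.
total_with_tax = reduce(
    lambda acc, x: acc + x,
    map(lambda x: x * 1.07, filter(lambda x: x >= 5.0, orders)),
    0.0,
)

print(round(total_with_tax, 4))
```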
The document discusses CNN Lab 256 and various labs involving image classification using ImageNet and MNIST datasets. Lab 2 focuses on image classification using ImageNet, which contains over 14 million images across 20,000 categories. The script classify_image.py is used to classify images using a pre-trained model. Retraining the model on a custom dataset is also discussed. Lab 5 involves classifying handwritten digits from the MNIST dataset using a convolutional neural network model defined in TensorFlow. The model achieves an accuracy of over 99% after training for 15,000 epochs in batches of 100 images.
The document provides an overview and agenda for an introduction to running AI workloads on PowerAI. It discusses PowerAI and how it combines popular deep learning frameworks, development tools, and accelerated IBM Power servers. It then demonstrates AI workloads using TensorFlow and PyTorch, including running an MNIST workload to classify handwritten digits using basic linear regression and convolutional neural networks in TensorFlow, and an introduction to PyTorch concepts like tensors, modules, and softmax cross entropy loss.
This document discusses using DL4J and DataVec to build production-ready deep learning workflows for time series and text data. It provides an example of modeling sensor data with recurrent neural networks (RNNs) and character-level text generation with LSTMs. Key points include:
- DL4J is a deep learning framework for Java that runs on Spark and supports CPU/GPU. DataVec is a tool for data preprocessing.
- The document demonstrates loading and transforming sensor time series data with DataVec and training an RNN on the data with DL4J.
- It also shows vectorizing character-level text data from beer reviews with DataVec and using an LSTM in DL4J to generate new text.
PyTorch is an open-source machine learning library for Python. It is primarily developed by Facebook's AI research group. The document discusses setting up PyTorch, including installing necessary packages and configuring development environments. It also provides examples of core PyTorch concepts like tensors, common datasets, and constructing basic neural networks.
From TensorFlow Graph to TensorFlow Eager, by Guy Hadash
This document discusses moving from TensorFlow's graph mode to eager execution mode. Eager execution evaluates operations immediately without first describing the execution graph. This provides an intuitive interface, fast development iterations, easier debugging, and natural control flow. The document covers best practices for data pipelines, building models, custom layers, and text classification in eager mode. Control flow can now be handled using Python control structures rather than TensorFlow control ops like tf.while_loop.
Aspect-based sentiment analysis is a text analysis technique that breaks text down into aspects (attributes or components of a product or service) and then scores the sentiment (positive, negative, or neutral) of each aspect. In this talk, we'll walk through a production pipeline for training a large aspect-based sentiment analysis model in Python with the Intel NLP Architect package, based on the following open-sourced code: https://github.com/microsoft/nlp-recipes/tree/master/examples/sentiment_analysis/absa
Similar to NUS-ISS Learning Day 2019 - Deploying AI apps using TensorFlow Lite in mobile devices (20)
Designing Impactful Services and User Experience - Lim Wee Khee, NUS-ISS
In this engaging talk, we explore crafting impactful user-centric services, revealing the design principles that drive exceptional experiences. From empathetic customer journeys to innovative interfaces, learn how design can create meaningful connections, inspiring you to revolutionise your approach and drive lasting change in user satisfaction and brand success.
Upskilling the Evolving Workforce with Digital Fluency for Tomorrow's Challen..., by NUS-ISS
In today's digital age, the key to true transformation lies in our people. This talk will highlight the importance of digital fluency, emphasizing that everyone in an organization is now a digital professional. By synergizing the fundamental digital skills ranging from an agile mindset to making data-informed decisions and design thinking, we will discuss how a digitally skilled workforce can propel organizations to drive digital transformation with new heights of value creation. Though widespread workforce upskilling presents its challenges, this talk offers innovative organizational learning approaches that may pave the way to success. Join us to find out how to shape the future of your organization where success is defined not just by technology but by a workforce fully equipped with digital competencies, ready to take on whatever the future holds.
How the World's Leading Independent Automotive Distributor is Reinventing Its...NUS-ISS
In this captivating session, we'll unveil the profound impact of AI, poised to revolutionise the business landscape. Prepare to shift your perspective, as we transition from the lens of a data scientist to the visionary mindset of a product manager. We're about to demystify the captivating world of Generative AI, dispelling myths and illuminating its remarkable potential. We will also delve into the pioneering applications that Inchcape is leading, pushing the boundaries of what's achievable. Join us for an exhilarating journey into the future of AI, where professionalism meets unparalleled excitement, and innovation takes center stage!
The Importance of Cybersecurity for Digital TransformationNUS-ISS
In the rapidly evolving landscape of digital transformation, the importance of cybersecurity cannot be overstated. As organizations embrace digital technologies to enhance their operations, innovate, and connect with customers in new and dynamic ways, they simultaneously become more vulnerable to cyber threats.
This talk will discuss the importance of having a well thought through approach in dealing with cybersecurity in the form of a strategy that lays out the various programmes and initiatives that will underpin a secure and resilient digital transformation journey. Not surprisingly, having a pool of well-trained cybersecurity personnel is one of the key ingredient in a cyber strategy as exemplified in Singapore's own national cybersecurity strategy.
Architecting CX Measurement Frameworks and Ensuring CX Metrics are fit for Pu...NUS-ISS
Join us for a deep dive into the art of architecting Customer Experience (CX) measurement frameworks and ensuring that CX metrics are precisely tailored for their intended purpose. In this engaging session, you'll walk away with actionable insights and a tangible plan for refining your measurement strategies. Discover how to craft CX measurement frameworks that align seamlessly with your business objectives, ensuring that your metrics deliver meaningful and robust insights. Whether you're seeking to enhance customer satisfaction, optimise processes, or drive innovation, this session will provide you with potential approaches and practical steps to bolster the effectiveness and relevance of your CX metrics. It's your blueprint for creating a customer-centric roadmap to success.
Understanding GenAI/LLM and What is Google Offering - Felix GohNUS-ISS
With the recent buzz on Generative AI & Large Language Models, the question is to what extent can these technologies be applied at work or when you're studying and how easy is it to manage/develop your own models? Hear from our guest speaker from Google as he shares some insights into how industries are evolving with these trends and what are some of Google's offerings from Duet AI in Google Workspace to the GenAI App Builder on Google Cloud.
Digital Product-Centric Enterprise and Enterprise Architecture - Tan Eng TszeNUS-ISS
Enterprises striving to unlock value through digital products face a pivotal shift towards product-centric management, a transformation that carries its share of challenges. To navigate this journey successfully, close collaboration between Enterprise Architects and Digital Product Managers is essential. Together, they can craft the ideal strategy to deliver digital products on a grand scale. Join us in this session as we shed light on the critical interactions and activities that foster synergy between Enterprise Architects and Digital Product Managers. Discover how this collaboration paves the way for effective product-centric management, enabling enterprises to harness the full potential of their digital offerings.
Emerging & Future Technology - How to Prepare for the Next 10 Years of Radica...NUS-ISS
We find ourselves in an era of exponential growth and transformation. The relentless pace of technological advancement is reshaping our world at a rate never seen before, making it increasingly challenging to stay abreast of these rapid developments. Join us for an insightful talk where we embark on a journey to explore the most significant technology trends set to unfold over the next decade. These trends promise to be nothing short of seismic, with the power to reshape every facet of our lives, from the way we work and learn to how we forge relationships and structure our society. Prepare to be enlightened as we delve into a future where the very fabric of our existence is on the brink of transformation. This talk is your compass to navigate the uncharted territory of tomorrow's world, and it's an opportunity you won't want to miss.
Beyond the Hype: What Generative AI Means for the Future of Work - Damien Cum...NUS-ISS
1. The document discusses the impacts of generative AI on the future of work.
2. While AI is not sentient and will not take over the world, many jobs are at risk of automation, especially clerical roles where around 26 million jobs could be lost.
3. At the same time, AI has the potential to make work easier by automating up to 80% of white collar tasks and allowing quick creation of documents, images, videos and apps using simple prompts.
4. The future of AI looks set to see it become the next foundational technology, with potential for uncontrolled innovation if artificial general intelligence is achieved in just 5 years and a "technology singularity" in 25 years.
Supply Chain Security for Containerised Workloads - Lee Chuk MunnNUS-ISS
Containers have emerged as an indispensable component of modern cloud-native applications, serving diverse roles from development environments to application distribution and deployment on platforms like Azure's App Service and Kubernetes. In this presentation, we will delve into a suite of powerful tools designed to ensure the adoption of best practices in container management. You'll gain insights into how to scan container images rigorously, identifying and mitigating vulnerabilities effectively. We'll also explore the art of generating comprehensive software bill of materials (SBOM) for your containers and the significance of signing container images for enhanced security. The ultimate goal of this presentation is to empower you with the knowledge and skills necessary to seamlessly integrate these tools and practices into your CI (Continuous Integration) pipelines. By the end of this session, you'll be well-equipped to fortify your container workflows, delivering secure and robust cloud-native applications that thrive in today's dynamic digital landscape.
The future is always uncertain. To be truly future-ready, companies need the ability to quickly learn and adapt and to foster a culture of continuous curiosity and experimentation. But how can we facilitate rapid learning throughout the organisation? What will the future of learning look like for you? How can we ensure our organisations become engines of growth through learning?
The future is always uncertain. To be truly future-ready, companies need the ability to quickly learn and adapt and to foster a culture of continuous curiosity and experimentation. But how can we facilitate rapid learning throughout the organisation? What will the future of learning look like for you? How can we ensure our organisations become engines of growth through learning?
Site Reliability Engineer (SRE), We Keep The Lights On 24/7NUS-ISS
There are many phases in the software development cycle, from requirements to development and testing, but at the tail of the process, is an often overlooked aspect: deployment and delivery. With the paradigm shift of delivering on-site software to offering software-as-a-service, Site Reliability Engineering is beginning to take a greater role in product delivery.
This session aims to give a glimpse of the work that goes into site reliability engineering (SRE) and effort that goes into keeping a service going 24/7.
Product Management in The Trenches for a Cloud ServiceNUS-ISS
More often than not, people’s perception of Product Management is usually centred around the definition, management and prioritisation of software features and functionality. While that is largely true, it is also one of many things that a Product Manager needs to focus on, given limited time and resources.
This session aims to provide an unfiltered view of how Product Management looks like in the context of Enterprise Cloud Applications development, the challenges confronting Product Managers, and the tradeoff decisions to be made in order to overcome these challenges.
All this, while shipping a working product with each release that will surprise and delight the end user.
Overview of Data and Analytics Essentials and FoundationsNUS-ISS
As companies increasingly integrate data across functions, the boundaries between marketing, sales and operations have been blurring. This allows them to find new opportunities that arise by aligning and integrating the activities of supply and demand to improve commercial effectiveness. Instead of conducting post-hoc analyses that allow them to correct future actions, companies generate and analyze data in near real-time and adjust their operations processes dynamically. Transitioning from static analytics outputs to more dynamic contextualized insights means analytics can be delivered with increased relevance closer to the point of decision.
This talk will cover the analytics journey from descriptive, predictive and prescriptive analytics to derive actionable and timely insights to improve customer experience to drive marketing, salesforce and operations excellence.
With the use of Predictive Analytics, companies are able to predict future trends based on existing available data. The actionable business predictions can help companies achieve cost savings, higher revenue, better resource allocation and efficiency. Predictive analytics has been used in various sectors such as banking & finance, sales & marketing, logistics, retail, healthcare, F&B, etc. for various purposes.
Get set to learn more about the different stages of predictive analytics modelling such as data collection & preparation, model development & evaluation metrics, and model deployment considerations will be discussed.
In this digital transformation era, we have seen the rise of digital platforms and increased usages of devices particularly in the area of wearables and the Internet of Things (IoT). Given the fast pace change to the IoT landscape and devices, data has become one of the important source of truth for analytics and continuous streaming of data from sensors have also emerged as one of the fuel that revolutionise the emergence of IoT. These includes health telematics, vehicle telematics, predictive maintenance of equipment, manufacturing quality management, consumer behaviour, and more. With this, we will give you an introduction on how to leverage the power of data science and machine learning to understand and explore feature engineering of IoT and sensor data.
Master of Technology in Software EngineeringNUS-ISS
This document provides information about the Master of Technology in Software Engineering program at NUS. The program focuses on designing scalable, smart, and secure software systems and products. It offers both part-time and full-time study structures, with the part-time program taking 2 years and full-time taking 1 year. Students can choose a structured route taking set courses each semester, or a flexible route completing graduate certificates at their own pace over 5-7 years. General admission requirements include a bachelor's degree in engineering or science with a minimum GPA, 2 years of work experience, and passing an entrance test and interview. Important application dates for the 2023 start are also provided.
Master of Technology in Enterprise Business AnalyticsNUS-ISS
This document provides information about the Master of Technology in Enterprise Business Analytics program at NUS-ISS. It discusses what data science is, who should take the program, sample job profiles of graduates, the courses taught in the program, and the stackable certificate structure. The program can be completed through a structured route of taking certificates back-to-back over 2 years part-time or 1 year full-time, or a flexible route of taking courses anytime over 7 years to earn the Master of Technology degree. Admission requires a bachelor's degree, minimum GPA, English proficiency, 2 years of work experience, and passing an entrance test and interview.
Diagnosing Complex Problems Using System ArchetypesNUS-ISS
In today’s VUCA world, we are faced with problems coming in fast and furious. In order to resolve such problems quickly, we need to first understand the problems. One of the techniques to understand complex problem is through the use of system archetypes. System archetypes are patterns of behaviour of a system. Let’s us explore some of the system archetypes in this session as well as tips on how to resolve them.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence
NUS-ISS Learning Day 2019-Deploying AI apps using tensor flow lite in mobile devices
1. Deploying AI Apps using TensorFlow Lite in Mobile Devices
Colab - Keras - h5 - tflite - Android
#ISSLearningDay
Mr. Prasanna Veerapandi (Bala), NUS-ISS
2 Aug 2018
2. Agenda
• Intro to Deep Learning
• Build & Train CNN Models - Colab
• Export Model to tflite
• Deploy tflite in Android app
• Live Demo
• From training the model ….. to deploying on the device
3. About me
• My name is Bala (Prasanna Veerapandi)
• I teach Python for Data, Ops & Things
• Google: NUS ISS PYTHON
4. Intro to Deep Learning
SGD, NN, Back Prop, Arch, CNN, Keras, tflite
16. Import
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.callbacks import LearningRateScheduler
from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.utils import to_categorical
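Slides 17–22 (the model definition itself) are not in this excerpt. As a minimal sketch of how the imported layers might be wired together with the functional API — the layer sizes here are illustrative assumptions, not necessarily the ones from the talk:

```python
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     Activation, Dropout, Flatten, Dense)
from tensorflow.keras.models import Model

# A small illustrative CNN for 28x28x1 MNIST images
inputs = Input(shape=(28, 28, 1))
x = Conv2D(32, (3, 3), padding='same')(inputs)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Flatten()(x)
x = Dropout(0.25)(x)
outputs = Dense(10, activation='softmax')(x)  # 10 digit classes
model = Model(inputs, outputs)
```

The functional API (rather than `Sequential`) matches the `Model` import on the slide; any architecture ending in a 10-way softmax will work with the compile and fit calls that follow.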
23. Compile the model
# Get an Optimizer
adam = tf.keras.optimizers.Adam(lr=0.001)
model.compile(loss='categorical_crossentropy',
              optimizer=adam,
              metrics=['accuracy'])
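The imports on slide 16 include `LearningRateScheduler`, though the slides here never use it. A minimal sketch of how it could plug into this pipeline — the halving schedule is an illustrative assumption:

```python
from tensorflow.keras.callbacks import LearningRateScheduler

# Illustrative schedule: halve the learning rate each epoch,
# starting from the Adam default of 0.001 used above
def schedule(epoch):
    return 0.001 * (0.5 ** epoch)

lr_callback = LearningRateScheduler(schedule)
# Pass it to training: model.fit(..., callbacks=[lr_callback])
```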
24. Let's Train the model
history = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=2,
                    verbose=1,
                    validation_data=(x_test, y_test))
25. Export Model to tflite
Keras, .h5, tflite converter
27. Export h5 to tflite
model.save('mnist.h5')
converter = tf.lite.TFLiteConverter.from_keras_model_file('mnist.h5')
tflite_model = converter.convert()
with open('mnist.tflite', 'wb') as f:
f.write(tflite_model)
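Before shipping `mnist.tflite` to Android, it's worth running the converted model once in Python with `tf.lite.Interpreter`. The slide uses the TF 1.x `from_keras_model_file` API; in TF 2.x the converter takes a model object directly. This sketch uses a tiny in-memory stand-in model (not the talk's CNN) so it runs without `mnist.h5`:

```python
import numpy as np
import tensorflow as tf

# Stand-in model with the same input/output shapes as the MNIST classifier
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the flatbuffer from memory (use model_path=... for a .tflite file)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One dummy inference: a blank 28x28 grayscale image
dummy = np.zeros((1, 28, 28, 1), dtype=np.float32)
interpreter.set_tensor(inp['index'], dummy)
interpreter.invoke()
probs = interpreter.get_tensor(out['index'])  # shape (1, 10), sums to ~1
```

Checking the input/output tensor shapes here is exactly what the Android `Interpreter` code below has to agree with.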
34. Configure input Shapes
public class MnistDigitClassifier {
    private static final String MODEL_NAME = "mnist.tflite";
    private static final int BATCH_SIZE = 1;
    public static final int IMG_HEIGHT = 28;
    public static final int IMG_WIDTH = 28;
    private static final int NUM_CHANNEL = 1;
    private static final int NUM_CLASSES = 10;

    private final Interpreter.Options options = new Interpreter.Options();
    private final Interpreter mInterpreter;
    private final ByteBuffer mImageData;
    private final int[] mImagePixels = new int[IMG_HEIGHT * IMG_WIDTH];
    private final float[][] mResult = new float[1][NUM_CLASSES];
}
35. new Interpreter()
public class MnistDigitClassifier {
    public MnistDigitClassifier(Activity activity) throws IOException {
        mInterpreter = new Interpreter(loadModelFile(activity), options);
        // 4 bytes per float32 value: batch x height x width x channels
        mImageData = ByteBuffer.allocateDirect(
                4 * BATCH_SIZE * IMG_HEIGHT * IMG_WIDTH * NUM_CHANNEL);
        mImageData.order(ByteOrder.nativeOrder());
    }
}
36. Predict : Classify
public class MnistDigitClassifier {
    public Result classify(Bitmap bitmap) {
        convertBitmapToByteBuffer(bitmap);
        // Time the inference call so the latency can be reported
        long start = SystemClock.uptimeMillis();
        mInterpreter.run(mImageData, mResult);
        long timeCost = SystemClock.uptimeMillis() - start;
        Result r = new Result(mResult[0], timeCost);
        r.setLabel(String.valueOf(r.getNumber()));
        return r;
    }
}
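The slides reference `convertBitmapToByteBuffer()` but never show it. A hypothetical Python sketch of the per-pixel step it would need: each packed ARGB pixel int is reduced to one normalized grayscale float, matching the 1x28x28x1 float32 buffer allocated above. The averaging formula is an assumption; the actual talk code may weight channels differently.

```python
# Hypothetical equivalent of one pixel's preprocessing in
# convertBitmapToByteBuffer(): packed ARGB int -> grayscale float in [0, 1]
def pixel_to_gray(argb):
    r = (argb >> 16) & 0xFF
    g = (argb >> 8) & 0xFF
    b = argb & 0xFF
    return (r + g + b) / 3.0 / 255.0
```

Running this over all 784 pixels of the bitmap and writing each float into the direct `ByteBuffer` (in native byte order) is what feeds `mInterpreter.run()`.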