This webinar presents a step-by-step guide to building Machine Learning projects using third-party tools such as SensiML and Edge Impulse.
Topics covered:
Machine Learning development kits:
EV18H79A: SAMD21 ML Evaluation Kit with TDK 6-axis MEMS
EV45Y33A: SAMD21 ML Evaluation Kit with BOSCH IMU
SAMC21 Xplained Pro evaluation kit (ATSAMC21-XPRO) plus its QT8 Xplained Pro Extension Kit (AC164161)
Development tools:
MPLAB X
Data Visualizer
Third-party environments: SensiML and Edge Impulse
Data collection
How to develop a project using Machine Learning, whether or not you have specific prior knowledge of the subject.
Pitfalls of machine learning in production (Antoine Sauray)
Going from POC to production with Machine Learning can lead to many unexpected problems. We explore some of them in this presentation at the Nantes Machine Learning Meetup.
Oleksii Moskalenko, "Continuous Delivery of ML Pipelines to Production" (Fwdays)
Here in the DS team at WIX we want to help create stunning sites by applying recent achievements of AI research in production. Since data science engineering practices are still not fully shaped, we found it crucial to bring in best practices from software engineering: give data scientists the ability to deliver models fast, without loss of quality or computational efficiency, to stay competitive in this overhyped market. To achieve this, we are developing our own infrastructure for creating pipelines and deploying them to production with minimal (to no) engineering involvement.
This talk covers the initial motivation, the technical issues solved, and the lessons learned while building such an ML delivery system.
Website: https://fwdays.com/en/event/data-science-fwdays-2019/review/continuous-delivery-of-ml-pipelines-to-production
IMAGE CAPTURE, PROCESSING AND TRANSFER VIA ETHERNET UNDER CONTROL OF MATLAB G... (Christopher Diamantopoulos)
The implemented DSP system uses TCP socket communication. Upon receiving a message, it decides which process to execute, with the cases categorized as follows:
1) image capture
2) image transfer
3) image processing
4) sensor calibration
A user-friendly MATLAB GUI, named DIPeth, facilitates the system's control.
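The message-based dispatch described above can be sketched as follows. This is a minimal Python illustration of the pattern (a TCP server routing one-word commands to handlers), not the actual DIPeth/MATLAB code; the handler names and replies are hypothetical.

```python
# Sketch of command-based TCP dispatch: the server receives a one-word
# command and routes it to the matching handler, as in the four cases above.
import socket
import threading

# Hypothetical handlers standing in for the real DSP routines.
HANDLERS = {
    b"capture":   lambda: b"image captured",
    b"transfer":  lambda: b"image sent",
    b"process":   lambda: b"image processed",
    b"calibrate": lambda: b"sensor calibrated",
}

def serve_one(sock):
    """Accept a single connection, dispatch its command, reply, and exit."""
    conn, _ = sock.accept()
    with conn:
        cmd = conn.recv(64).strip()
        handler = HANDLERS.get(cmd, lambda: b"unknown command")
        conn.sendall(handler())

def send_command(port, cmd):
    """Client side: send a command and return the server's reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(cmd)
        return c.recv(64)

# Demo: serve one request in a background thread.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=serve_one, args=(srv,))
t.start()
reply = send_command(port, b"capture")
t.join()
srv.close()
print(reply.decode())  # image captured
```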
Legion is a runtime machine learning platform streamlining the model development process from exploration to production deployment through automation of data workflows, continuous delivery, and quality assurance. The project is released under the open-source Apache Software License.
SigOpt at MLconf - Reducing Operational Barriers to Model Training (SigOpt)
In this talk at MLconf NYC, Alexandra Johnson, platform engineering lead at SigOpt, discusses common operational challenges with scaling model training and how solutions are designed to
Kostiantyn Bokhan, N-iX. CD4ML based on Azure and Kubeflow (IT Arena)
Kostiantyn Bokhan, a technical lead at N-iX, focuses on data science projects. He leads data science projects in several areas (computer vision, NLP, and signal processing) and consults clients on digital transformation with AI. In his free time, he conducts research in deep machine learning. Kostiantyn has been an associate professor and faculty member at several universities since 2002. His research focuses on machine learning, deep learning, and signal and image processing. He received a PhD in network and telecommunications systems, with research in digital signal processing, in 2013. He has served on the scientific committees and review boards of several conferences.
Talk overview:
Applying machine learning to make business applications and services intelligent is more than just training models and serving them: it requires end-to-end, continuously repeatable cycles of training, testing, deploying, monitoring, and operating the models. Continuous Delivery for Machine Learning (CD4ML) is a technique that enables reliable end-to-end cycles of developing, deploying, and monitoring machine learning models. Many tools and frameworks can be used to implement CD4ML; one of them is Kubeflow. This talk describes our experience using Kubeflow to implement CD4ML for the manufacturing domain on Azure Kubernetes Service.
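The repeatable train/test/deploy cycle behind CD4ML can be sketched, very schematically, as a pipeline whose deployment step is gated by an evaluation metric. This is a toy, framework-free Python illustration with hypothetical stage names, not the N-iX/Kubeflow implementation:

```python
# Toy CD4ML cycle: each stage is a function, and the pipeline only
# promotes a model to "production" if it passes its quality gate.
def train(data):
    # stand-in for real training: the "model" is just the data mean
    return sum(data) / len(data)

def evaluate(model, holdout):
    # stand-in metric: mean absolute error against holdout points
    return sum(abs(x - model) for x in holdout) / len(holdout)

def deploy(model, registry):
    registry["production"] = model
    return registry

def run_pipeline(data, holdout, registry, max_error=2.0):
    model = train(data)
    error = evaluate(model, holdout)
    if error <= max_error:          # quality gate before deployment
        deploy(model, registry)
    return model, error, registry

registry = {}
model, error, registry = run_pipeline([1, 2, 3], [2, 2], registry)
print("production" in registry)  # True: the model passed the gate
```

In a real CD4ML setup each of these functions would be a pipeline step (e.g. a Kubeflow component), with monitoring feeding back into retraining.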
The new buzzword in the world of Agile is "DevOps". So what exactly is DevOps, and why do we need it? When development got married to deployment (sys-admin/operations), what was born is a new advanced species known to us today as "DevOps".
Scilab Technical Talk at NTU, TP and HCMUT (Dr Claude Gomez) (TBSS Group)
A very comprehensive set of slides presented by Dr Claude Gomez, CEO of Scilab Enterprise. TBSS-Scilab Singapore Center is Scilab Enterprise's partner in Singapore, and TBSS Khai Kinh Co. Ltd. is the partner in Vietnam; both are part of the TBSS Group of Companies.
How to use Apache TVM to optimize your ML models (Databricks)
Apache TVM is an open source machine learning compiler that distills the largest, most powerful deep learning models into lightweight software that can run on the edge. This allows the resulting model to run inference much faster on a variety of target hardware (CPUs, GPUs, FPGAs and accelerators) and save significant costs.
In this deep dive, we'll discuss how Apache TVM works, share the latest and upcoming features, and run a live demo of how to optimize a custom machine learning model.
How we scale up our architecture and organization at Dailymotion (Stanislas Chollet)
At the end of 2016, Dailymotion revamped the whole company. In these slides, we explain how we used the DevOps mindset as an enabler to scale up our engineering team and our architecture.
"Deployment for free": removing the need to write model deployment code at St... (Stefan Krawczyk)
At Stitch Fix we have a dedicated Data Science organization called Algorithms. It has over 130 Full Stack Data Scientists who build and own a variety of models, spanning classic prediction and classification models through time-series forecasts, simulations, and optimizations. Rather than handing models off to someone else for productionization, Data Scientists own, and are on call for, that process; we love for our Data Scientists to have autonomy. That said, Data Scientists aren't without engineering support: a Data Platform team is dedicated to building tooling, services, and abstractions to increase their workflow velocity. One data science task we have been speeding up is getting models to production and increasing their usability and stability. This necessary task can take a considerable chunk of a Data Scientist's time, whether developing or debugging issues; historically, everyone largely carved their own path in this endeavor, which meant many different approaches and implementations, and little to leverage across teams.
In this talk I’ll cover how the Model Lifecycle team on Data Platform built a system dubbed the “Model Envelope” to enable “deployment for free”. That is, no code needs to be written by a data scientist to deploy any python model to production, where production means either a micro-service, or a batch python/spark job. With our approach we can remove the need for data scientists to have to worry about python dependencies, or instrumenting model monitoring since we can take care of it for them, in addition to other MLOps concerns.
Specifically the talk will cover:
* Our API interface we provide to data scientists and how it decouples deployment concerns.
* How we approach automatically inferring a type-safe API for models of any shape.
* How we handle python dependencies so Data Scientists don’t have to.
* How our relationship & approach enables us to inject & change MLOps approaches without having to coordinate much with Data Scientists.
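The idea of automatically inferring a typed API for a model can be illustrated with Python's own type hints. This is a rough sketch under my own assumptions (`SpamModel` and `infer_api` are hypothetical), not Stitch Fix's actual Model Envelope:

```python
# Infer an {argument: type} schema for a model by inspecting the
# type hints on its predict() method.
from typing import get_type_hints

class SpamModel:
    def predict(self, subject: str, num_links: int) -> float:
        # toy scoring rule standing in for a real model
        score = 0.1 * num_links + (0.5 if "free" in subject.lower() else 0.0)
        return min(1.0, score)

def infer_api(model):
    """Build a schema of input names/types and the output type."""
    hints = get_type_hints(model.predict)
    returns = hints.pop("return", None)
    return {"inputs": hints, "output": returns}

schema = infer_api(SpamModel())
print(schema)
```

A deployment system could use such a schema to generate and validate a service endpoint without the data scientist writing any serving code.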
Machine learning infrastructure solves data scientists' problems with infrastructure tools. This talk presents the case study of building SigOpt Orchestrate, an ML infrastructure tool, and highlights how data scientists' concerns as users mapped to solutions built with some of today's most popular infrastructure tools.
To learn more about SigOpt Orchestrate: https://sigopt.com/orchestrate
Originally given as a talk for UC Berkeley's Women in Electrical Engineering and Computer Science group on January 24, 2019.
Evaluating GPU Programming Models for the LUMI Supercomputer (George Markomanolis)
It is common in the HPC community that the performance achieved with CPUs alone is limited for many computational cases. The EuroHPC pre-exascale and the coming exascale systems are mainly focused on accelerators, and some of the largest upcoming supercomputers, such as LUMI and Frontier, will be powered by AMD Instinct accelerators. However, these new systems create many challenges for developers who are not familiar with the new ecosystem or with the programming models required for heterogeneous architectures. In this paper, we present some of the better-known programming models for current and future GPU systems. We then measure the performance of each approach using a benchmark and a mini-app, test with various compilers, and tune the codes where necessary. Finally, we compare the performance, where possible, between the NVIDIA Volta (V100) and Ampere (A100) GPUs and the AMD MI100 GPU.
Presentation of a paper accepted in Supercomputing Frontiers Asia 2022
Why we don’t use the Term DevOps: the Journey to a Product Mindset - Destinat... (Henning Jacobs)
While the adoption of DevOps makes teams move faster with reduced dependency on central operations, it can constrain teams who lack the skills to self-manage the full application and infrastructure stack.
The way to overcome this challenge is creating an internal platform and treating it as a world-class product offering. “Applying product management to internal platforms means establishing empathy with internal consumers (read: developers) and collaborating with them on the design. Platform product managers establish roadmaps and ensure the platform delivers value to the business and enhances the developer experience”, via ThoughtWorks Technology Radar.
In this talk, Henning Jacobs will walk you through how Zalando adopted a customer-first mindset with regard to its developer tooling. He will show the effect on developer satisfaction when internal platforms are given the same respect as external product offerings. Henning will furthermore tell the story of how Zalando moved from a classical infrastructure team to a product mindset with a strong focus on building a world-class developer experience, sharing both the learnings and the challenges of this transition, and the impact it has on the daily life of Zalando's customers (developers).
This talk was given in Aarhus on 4th of June 2019.
Understand the Trade-offs Using Compilers for Java Applications (C4Media)
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2QCmmJ0.
Mark Stoodley examines some of the strengths and weaknesses of the different Java compilation technologies, if one were to apply them in isolation. Stoodley discusses how production JVMs assemble a combination of these tools that work together to provide excellent performance across the large spectrum of applications written in Java and JVM-based languages. Filmed at qconsf.com.
Mark Stoodley joined IBM Canada to build Java JIT compilers for production use and led the team that delivered AOT compilation in the IBM SDK for Java 6. He spent the last five years leading the effort to open source nearly 4.3 million lines of source code from the IBM J9 Java Virtual Machine to create the two open source projects Eclipse OMR and Eclipse OpenJ9, and now co-leads both projects.
Open Standards for ADAS: Andrew Richards, Codeplay, at AutoSens 2016 (Andrew Richards)
Building autonomous vehicles: How do we build the software and platforms that enable the intelligence for self-driving cars and all the intermediate levels of autonomy?
We don't (yet) know the right algorithms or approach, so how do we start developing the software in a way that can deliver the safety, performance, power consumption and correctness to enable ADAS to full autonomy?
Webinar: BLDC and three-phase induction motor control (Embarcados)
With motor-control applications and the complexity involved in mind, Microchip has developed many components, algorithms and tools to help development, so that the product reaches the market in perfect working order and with a shorter development time. In this webinar we will show how you can easily develop control applications for BLDC and three-phase induction motors using the FOC, Zero-Speed/Maximum-Torque Control, six-step and V/f algorithms, with Microchip software and hardware.
Watch the recording at: https://embarcados.com.br/webinar-controle-de-motores-bldc-e-de-inducao-trifasico/
This webinar covers topologies and applications in which using an FPGA is advantageous compared to using processors. We will present comparisons of equivalent designs in FPGA and in software, pointing out the scenarios in which each technology performs best.
More Related Content
Similar to Webinar: Getting started with Machine Learning using Microchip tools and demo boards
Webinar: Specifying Passive Components (Embarcados)
In this presentation we will briefly give an overview of the Yageo Group portfolio. We will also take a deeper dive into some product groups, such as aluminum and tantalum-polymer capacitors, as well as the magnetics, sensors and actuators line.
Webinar: Hardware design using DC/DC converters (Embarcados)
The goal of this webinar is to present Vishay's DC/DC converters, specially designed for embedded systems based on microprocessors and/or FPGAs. In general, these systems require low voltages and high currents. Vishay's converter ICs were designed to fulfill this role with high efficiency and ease of design.
Presentation of Microchip's new Cortex-M microcontroller line and the MCC Harmony framework. The main focus is the new PIC32CX line, aimed at connectivity and security applications, showing how to implement secure TCP Ethernet communication.
A major challenge in embedded Linux development is ensuring the reproducibility of the work. A Linux system has thousands of customization options and, on top of that, each manufacturer and chip has its own specifics.
The Yocto Project was created by The Linux Foundation to solve these and other problems.
In this webinar we will cover day-to-day use of the Yocto Project, the basic concepts of its architecture, and the kinds of applications where its use is most suitable.
For more information: https://embarcados.com.br/desvendando-o-yocto-project/
Webinar: A professional electronics bench (Embarcados)
During the webinar, we will discuss the importance of choosing the right instruments and tools for an electronics professional's bench, and the care needed to make the best use of those instruments and maintain them.
We will present the must-have items for your professional electronics bench and the criteria for choosing them.
Don't miss the opportunity to get to know quality equipment with an excellent cost/benefit ratio and to understand why it is important to invest in proper equipment, learning how to size the items according to your needs.
Webinar: How to design low-power sensors using microcontroller... (Embarcados)
Join our webinar and discover how PIC and AVR microcontrollers can simplify low-power sensor designs.
During the presentation, we will cover the integrated analog peripherals and the most recent updates to these devices, as well as how they interact with a larger system. We will also highlight the low-power features present in the latest PIC and AVR devices.
To make development with these microcontrollers easier, we will show practical examples of demo boards and tools available on the market.
If you want to learn how to develop low-power sensor projects with Microchip, this is the right webinar for you! Sign up now and take this unique opportunity to expand your knowledge by learning from a Microchip specialist. Come ask your questions live during the webinar.
What you will learn in this webinar:
Microchip's 8-bit microcontrollers: PIC and AVR
Integrated analog peripherals and recent updates
How these devices interact with a larger system
Low-power features in recent PIC and AVR devices
Demo boards
Tools
How can we start developing with Microchip?
Webinar: Demystifying the line follower: sensors, assembly and programming... (Embarcados)
In this webinar, we will discuss the steps involved in building a line-follower robot. We will explore the main sensors used in line followers, such as the infrared sensor and the reflectance sensor, and how they are used to detect and follow a line, including a comparative test between them for use with PID control. We will also cover the materials needed to build the robot, including the PCB and 3D printing, and how they are used to build a precise and robust robot.
In addition, we will explore the various tools and techniques available to simplify collaborative work on robotics projects, including the use of code-sharing and documentation platforms.
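The PID control mentioned above can be sketched in a few lines of code. This is a generic Python illustration with illustrative gains and a toy plant model, not the webinar's actual firmware:

```python
# Minimal discrete PID loop of the kind used to keep a line follower
# centered: "error" is the line-position offset reported by the sensors.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # steering correction applied to the motors
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.1, kd=0.05)   # illustrative gains, not tuned values
# simulate the robot drifting off the line and being steered back
position = 1.0   # 0.0 means centered on the line
for _ in range(50):
    correction = pid.update(error=-position, dt=0.02)
    position += correction * 0.02    # toy plant model
print(abs(position) < 1.0)  # True: the robot has been steered back toward the line
```

On real hardware, the correction would be split into left/right motor speed adjustments, and the gains tuned against the sensor comparison the webinar describes.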
What you will learn in this webinar
You will learn the steps for building a line-follower robot and the main sensors, materials and techniques used. You will also get to know tools for collaborative work on robotics projects.
Presenter
Diogo Lacerda
Mechatronics technician and undergraduate student in Materials Physics at the University of Pernambuco. He began his professional career as a robotics teacher and later moved into embedded systems. He currently works at CESAR with 3D modeling and printing, using additive manufacturing as a tool to develop prototypes for electronic circuits and mechanisms more cheaply and quickly. He was part of the CESAR-Voxar Labs robotics team, two-time champion in the @home category of the Brazilian Robotics Competition (CBR). He has been part of the collaborative network http://robolivre.com/ since 2012, where he gives workshops and talks on robotics and Arduino.
Recorded Webinar: A Study of I2C and the Future with I3C (Embarcados)
To learn more and watch the video, visit: https://embarcados.com.br/webinar-um-estudo-sobre-a-i2c-e-o-futuro-com-a-i3c/
During the webinar, you will learn about the characteristics and basic operation of I2C, its applications, and the challenges faced when designing with this interface. You will also get to know the innovations and improvements brought by I3C and learn how it interoperates with I2C.
We will also compare the standards and their physical and logical implementations, which will help you better understand the differences and advantages of each.
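To make one protocol detail from such a comparison concrete: the first byte of every I2C transaction packs the 7-bit target address together with the read/write flag. A small Python sketch of that framing:

```python
# The first byte of an I2C transaction: the 7-bit target address shifted
# left by one, with the R/W flag in bit 0 (0 = write, 1 = read).
def i2c_address_byte(addr7, read):
    if not 0 <= addr7 <= 0x7F:
        raise ValueError("I2C addresses are 7 bits")
    return (addr7 << 1) | (1 if read else 0)

# Example: a device at 0x50, a common EEPROM address
print(hex(i2c_address_byte(0x50, read=False)))  # 0xa0 (write)
print(hex(i2c_address_byte(0x50, read=True)))   # 0xa1 (read)
```

I3C keeps this 7-bit addressing for compatibility, which is part of how it interoperates with legacy I2C devices.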
Não perca a oportunidade de aprimorar seus conhecimentos sobre a I2C e a I3C. Inscreva-se agora mesmo no nosso webinar e saiba tudo sobre essa interface e seu futuro!
Apresentação
Huéliquis Fernandes - Business Development Manager
Huéliquis é um experiente profissional da indústria de semicondutores. Nos últimos 25 anos trabalhou para a Future Electronics, Motorola/Freescale, ST Microelectronics e Renesas.
Yoshinori Kanno - Field Application Engineer
Yoshinori é formado em Engenharia Eletrônica e possui mestrado em Processamento Digital de Sinais. Nos últimos 20 anos, ele trabalhou na Philips/NXP e em distribuidores globais.
Matthew Sauceda - Sr. Principal Applications Engineer - Nexperia
Matthew Sauceda is a Sr Principal Applications Engineer for Nexperia. He holds a M.S in Electrical Engineering specializing in Analog and Mixed Signal VLSI design. His work experience includes 10+ years in semiconductor field through work as application/system/ hardware design in Texas Instruments, and Advanced Micro Devices. In his spare time he enjoys hobbies such as fishing, traveling, and woodworking.
Descrição do Webinar
Nesse webinar você conhecera as soluções da Infineon e a família de Microcontroladores Traveo T2G. Iremos abordar quais os pontos diferenciais nessa linha de Microcontroladores ARM Cortex M4 e M7 e quais itens a Infineon pode lhe oferecer para facilitar o desenvolvimento. Iremos apresentar o ecossistema de parceiros, ferramentas de desenvolvimento e aplicações foco da linha Traveo T2G e demonstrar o porquê ele tem sido o líder em aplicações automotivas e industriais, quando são necessários requisitos de low power, conectividade e segurança para Over-the-Air (OTA).
O que você aprenderá nesse webinar:
Após esse webinar você entendera quais os requisitos básicos e diferenciais da família de Microcontroladores Traveo T2G. Também conhecera o ecossistema e como começar a desenvolver seus projetos utilizando a família Traveo T2G, desenvolvida para sistemas automotivos e industriais que requerem desempenho, low power, conectividade e segurança com suporte técnico e vendas no Brasil.
Webinar: Introdução à Reconfiguração dinâmica parcial em FPGAsEmbarcados
Nesse webinar foi apresentado sobre reconfiguração dinâmica parcial em FPGAs Xilinx! Regina abordou sobre os conceitos e termos básicos dessa tecnologia, seu fluxo de implementação, prós e contras, aspectos relevantes e aplicações.
Webinar: Microprocessadores 32 bits, suas principais aplicações no mercado br...Embarcados
Junte-se a nós para saber mais sobre as soluções de microprocessador (MPU) da Microchip e como o System in Package (SiP) e o System on Modules (SoM) podem simplificar drasticamente o projeto da sua PCB e reduzir o tempo de lançamento no mercado. Os produtos System in Package (SiP) com DRAM integrada simplificam o projeto de PCB, melhoram a robustez geral de EMI do seu sistema, removem o problema de fornecimento de DRAM e problemas de software e podem, em última análise, reduzir os custos gerais do sistema abrindo a porta para projeto de PCB de 4 camada para sua aplicação. As soluções SOM da Microchip fornecem uma plataforma de hardware qualificada projetada para longa vida útil e ajudam você a acelerar suas primeiras construções de produção com os SOMs e otimizar o custo de BOM, passando posteriormente para soluções chip-down em volumes maiores usando os arquivos de design e suporte fornecidos pela Microchip. Você também aprenderá sobre a estratégia principal do Linux da Microchip com suporte de longo prazo e um caminho fácil para uma solução gráfica de baixo custo. Os MPUs Microchip são adequados para uma variedade de aplicações, incluindo aquelas nos setores de consumo, automotivo, industrial e médico.
Neste Webinar apresentaremos as principais soluções em Timming devices (Ressonadores e Cristas) Murata, suas tecnologias, materiais utilizados, aplicações, como identificar possíveis falhas e como utilizar ferramenta de seleção da Murata Simsurfing.
Tópicos do Webinar
Tecnologia Timing Devices
Ressonadores e Cristais
Vantagens dos Ressonadores Murata
Vantagens dos Cristais Murata
Como identificar possíveis falhas nos Ressonadores e Cristais.
Matching - como identificar melhor Cristal de acordo com microprocessador.
Ferramenta Murata Simsurfing
Aplicações
Webinar: Silicon Carbide (SiC): A tecnologia do futuro para projetos de potênciaEmbarcados
Descubra as vantagens de utilizar MosFETs e Gate Drivers com SiC.
O webinar abordará a tecnologia SiC, suas vantagens e aplicações no mercado brasileiro, com destaque para a relação entre SiC e carros elétricos. A Microchip oferece produtos relacionados à tecnologia SiC, como Mosfets, Gate Drivers, demoboards e reference designs. O webinar será uma ótima oportunidade para conhecer mais sobre essa tecnologia promissora e entender o que a Microchip tem a oferecer nesse segmento.
Webinar: Por que dominar sistema operacional Linux deveria ser a sua prioridade?Embarcados
O sistema operacional Linux tem sido cada vez mais utilizado em diferentes setores da indústria, especialmente na área de sistemas embarcados. Hoje o Linux embarcado é utilizado em dispositivos eletrônicos de diversas áreas, como automação industrial, automotiva, agrícola, medica, aeroespacial, de comunicação e de entretenimento.
Com a crescente demanda por profissionais qualificados em Linux Embarcado, é importante entender por que dominar esse sistema operacional deve ser sua prioridade como desenvolvedor de sistemas embarcados.
No webinar, discutiremos as principais razões pelas quais você deve investir em seu desenvolvimento em Linux e como isso pode abrir portas para novas oportunidades profissionais. Também serão abordados os principais recursos e funcionalidades do Linux, além de dicas práticas para aprimorar suas habilidades como desenvolvedor.
Webinar: Estratégias para comprar componentes eletrônicos em tempos de escassezEmbarcados
Neste webinar, abordaremos a situação atual do mercado mundial de componentes para auxiliar as empresas de manufatura na elaboração de estratégias eficientes para a programação de compras.
Forneceremos informações valiosas para profissionais que atuam em departamentos de produto, compras e suprimentos. Além disso, apresentaremos como utilizar sua lista de materiais para realizar compras consolidadas e programadas, bem como outras ferramentas úteis para o processo de aquisição de materiais e suprimentos.
Webinar: ChatGPT - A nova ferramenta de IA pode ameaçar ou turbinar a sua car...Embarcados
Ninguém esperava isso. Com diferentes níveis de espanto, admiração ou surpresa, as pessoas descobriram o chatGPT e sua utilidade. Seria algo tão novo assim, sem nenhum precedente? E, o mais importante, será o fim do trabalho dos programadores? O que isso pode significar para quem está hoje, na bancada? Este webinar explora o tema, tentando apontar e auxiliar os desenvolvedores a tomar as melhores decisões nesse novo cenário.
Objetivo do Webinar
Debater pontos positivos e importantes sobre estas novas tecnologias. Avaliar o risco que trazem e o benefício que geram. Abraçar uma novidade que aumenta a produtividade de forma significativa, sem preconceitos ou restrições sem fundamento sempre colocam os praticantes em vantagem competitiva.
Webinar: Power over Ethernet (PoE) e suas aplicações no mercado brasileiroEmbarcados
Neste webinar vamos explorar os seguintes temas:
O que é Power over Ethernet (PoE)
Por que os desenvolvedores devem selecionar os dispositivos do sistema Microchip PoE
Como o portfólio de dispositivos e sistemas PoE da Microchip garante que o cliente tenha a solução adequada para cada situação
Exemplos de onde a Microchip implantou com sucesso suas soluções PoE
O que considerar ao pensar no desenvolvimento do projeto do PoE
Webinar: Utilizando o Yocto Project para automatizar o desenvolvimento em Lin...Embarcados
Nesse webinar conheceremos o Yocto Project, um conjunto de ferramentas open-source que possuem o objetivo de facilitar o desenvolvimento de distribuições e sistemas Linux. Também vamos entender como utilizar a ferramenta pode auxiliar na automatização do desenvolvimento de sistemas Linux Embarcado.
https://embarcados.com.br/webinar-utilizando-o-yocto-project-para-automatizar-o-desenvolvimento-em-linux-embarcado/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
2. A Leading Provider of Smart, Connected and Secure Embedded Control Solutions
November 2022
Rodrigo Britto
Starting with Microchip Machine Learning
3. MPLAB® X IDE
• Free download, simple installation
• Rich debug and programming environment for all PIC® and dsPIC® devices
• Runs on Windows®, Linux® and macOS®
• Software development kit (SDK) for custom automated testing and manufacturing frameworks
• Strong ecosystem of plugins for many custom features: visualization, code analysis, etc.
• Supports all Microchip and many third-party compilers
• Supports code generation for software libraries
• Complete simulation engine for most devices
4. MPLAB® XC C Compilers
• Three compiler types to choose from
• MPLAB XC8 = 8-bit PIC® devices
• MPLAB XC16 = 16-bit PIC and dsPIC® devices
• MPLAB XC32 = 32-bit PIC devices
• Compatible everywhere: Windows®, Linux® and macOS®
• Optimization levels: Free and PRO
• 70% of available optimizations in the Free version
• Continuing improvements to Free and PRO
• Most flexible licensing in the industry: workstation, network server, site, subscription and dongle licenses
5. MPLAB® Code Configurator (MCC)
FREE easy-to-use graphical programming tool:
• Easily configure and use peripherals
• Generates efficient C code for your project
• Supports 8- and 16-bit devices
• Minimizes reliance on the datasheet
www.microchip.com/mcc
6. MPLAB® Harmony
• Modular software framework
• A Graphical User Interface (GUI) tool that takes the guesswork out of configuring drivers and middleware
• Direct resale by Microchip for third-party libraries
• Microchip provides the first line of support
• MPLAB Harmony components
• Third-party solutions
• Comprehensive web portal
• Compatible with 32-bit PIC® MCUs
8. Machine Learning Demos
TensorFlow Lite for Microcontrollers demos with the Harmony 3.0 plugin (training with your dataset in Google Colab, converting and deploying a model, using the TF-Lite runtime engine for inference):
• Hello World – a simple TF model predicting the value of a sine function
• Digit Recognition – identification of digits 0 to 9 written on a touchscreen
Edge Impulse Speech Recognition using the SAM E54 Curiosity Ultra:
• Keyword Spotting (also called “Micro Speech” in the TensorFlow Lite demos)
SensiML Tools Workflow with the SAMD21 ML Evaluation Kit:
• Gesture Recognition with an IMU (gyroscope/accelerometer)
*** Note: Click on any of the hyperlinks above to quickly navigate to the corresponding lab exercise section
9. Eval Kits for Machine Learning Development
• SAMD21 ML Evaluation Kit with TDK 6-axis MEMS (Part # EV18H79A)
• https://www.microchip.com/en-us/development-tool/EV18H79A
• SAM E54 Curiosity Ultra Development Board (Part Number: DM320210)
• https://www.microchip.com/en-us/development-tool/DM320210
• SAM E70 Xplained Ultra Development Board (Part Number: DM320113)
• https://www.microchip.com/en-us/development-tool/DM320113
• PIC32 Audio Codec Daughter Card (Part # AC328904)
• https://www.microchip.com/en-us/development-tool/AC328904
• Stereo 3.5 mm microphone (for speech input)
• SAM E51 Integrated Graphics and Touch Development Board (Part # EV14C17A)
• https://www.microchip.com/en-us/development-tool/EV14C17A
10. H3 TF-Lite “Hello World”
Using TensorFlow Lite for Microcontrollers and Harmony 3
A Simple Model for Prediction of the Sine of a Number
Return to Lab Exercises List
11. What is TensorFlow Lite for Microcontrollers?
• TensorFlow Lite is an open-source, production-ready, cross-platform framework for deploying ML on mobile devices and embedded systems
• Compatible with the TensorFlow training environment
• Designed to run Google ML models on microcontrollers with only a few KB of memory
• Built to fit on embedded systems
• Very small binary footprint; optimized for Arm Cortex-M, the interpreter/runtime engine takes approx. 16 KB on the M3/M4
• No dynamic memory allocation
• No dependencies on complex parts of the standard C/C++ libraries
• No operating system dependencies; can run on bare metal
• Designed to be portable across a wide variety of systems
Training Workflow
• Google Colab (https://colab.research.google.com/)
• Google offers a free cloud service with free GPU access, providing a free Jupyter notebook environment called Colaboratory, or “Colab”
• In Colab we can write and execute Python code to train a model and convert it for TensorFlow Lite
12. Harmony 3 Component for TensorFlow Lite Micro
TFLM component
• Adds the required source files for the TensorFlow Lite for Microcontrollers runtime engine
• Provides an option to use the optimized CMSIS-NN kernels for Cortex-M MCUs
• Provides an option to use the example audio front end required for the micro speech application
Data Log
• Debug log function to print out error messages from the TF-Lite for uC runtime engine
Tools and Packages Required from the H3 Repo
• MCC plugin version 5.1.2, MPLAB X IDE v6.0, csp v3.10.0, core v3.10.0, bsp v3.10.0, dev_packs v3.10.0, CMSIS-FreeRTOS v10.3.1, audio v3.5.1, gfx v3.9.5, touch v3.1, tflite-micro-apps
• Public Harmony 3 repository for TFLM: https://github.com/Microchip-MPLAB-Harmony/tflite-micro-apps
13. Hello World using TensorFlow Lite and Harmony 3
▪ This tutorial shows how to create and train a 2.5 kB model that predicts the value of a sine function. The model accepts input values between 0 and 2π and outputs a single value between -1 and 1. The model takes an input value, x, and predicts its sine, y. That is, y = sin(x), where x is our input and y is the output of the model.
▪ Hardware Required
▪ SAM E51 IGAT Board, or SAM E70 Xplained Ultra Eval Kit with the SSD1963 LCD Controller Graphics Card and High-Performance WQVGA Display Module
▪ Micro USB cable to connect the Debug USB port to the computer
[Figure: model input/output diagram – a 3-layer, fully connected neural network]
14. Building and Running the Application - 1
▪ Downloading and building the application project
▪ The path of the application within the repository is apps/hello_world/firmware
▪ To create and train a new model using Google Colaboratory and integrate it into your MPLAB X project, click HERE to access and clone the Colab “Hello World” script. Select Copy To Drive to duplicate and run the script from your own Google Drive.
▪ Select Runtime from the top menu, then Run all to execute the entire script in one step, or individually execute each cell one by one.
15. Building and Running the Application - 2
▪ After completing Run all, we can download the models directory in Colab as shown on the right
▪ The models directory includes the 3 model files: model.pb, model.tflite and model.cc
▪ Copy the contents of the model.cc file from the models directory and use it to replace the contents of model.cpp in your MPLAB X project. Specifically, copy and paste the g_model[] array and g_model_len variable declarations into the model.cpp file defined in your MPLAB project to update the model.
▪ Rebuild and run the project. The sine wave will be displayed on the screen.
16. Creating a Sine Model – Step By Step in Colab - 1
▪ STEP 1 – Install TensorFlow 2.4.0
▪ STEP 2 – Import the necessary dependencies
17. Creating a Sine Model – Step By Step in Colab - 2
▪ STEP 3 – Generate a dataset of random sine values and plot the results
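The data-generation cell might look like the following sketch, modeled on the public TensorFlow Lite hello_world notebook; the sample count and random seed here are illustrative, not taken from this deck:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script also runs headless
import matplotlib.pyplot as plt

SAMPLES = 1000
np.random.seed(1337)  # fixed seed so the run is reproducible

# Uniformly distributed random x values over the model's input range, 0 to 2*pi
x_values = np.random.uniform(low=0.0, high=2 * np.pi, size=SAMPLES)
np.random.shuffle(x_values)

# The noise-free sine values the model should learn
y_values = np.sin(x_values)

# Plot the dataset (in Colab, plt.show() displays it inline)
plt.plot(x_values, y_values, "b.")
plt.savefig("sine_dataset.png")
```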
18. Creating a Sine Model – Step By Step in Colab - 3
▪ STEP 4 – Add Gaussian noise to the output to generate a more realistic (real-world) dataset
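A sketch of the noise-injection cell; the 0.1 noise scale is an assumption matching the public hello_world notebook, and the dataset is regenerated here so the snippet stands on its own:

```python
import numpy as np

np.random.seed(1337)
x_values = np.random.uniform(low=0.0, high=2 * np.pi, size=1000)
y_values = np.sin(x_values)

# Add a small amount of Gaussian noise so the data resembles real-world
# sensor measurements rather than a perfect curve
y_values += 0.1 * np.random.randn(*y_values.shape)
```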
19. Creating a Sine Model – Step By Step in Colab - 4
▪ STEP 5 – Split the data for training and plot the results
▪ Training: 60%
▪ Validation: 20%
▪ Testing: 20%
20. Creating a Sine Model – Step By Step in Colab - 5
▪ STEP 6 – Design a larger model (adding a layer of 16 neurons) to improve the performance of the model
21. Creating a Sine Model – Step By Step in Colab - 6
▪ STEP 7 – Train and validate the model, with 500 epochs
22. Creating a Sine Model – Step By Step in Colab - 7
▪ STEP 8 – Generate a TensorFlow Lite model, with or without 8-bit quantization
23. Creating a Sine Model – Step By Step in Colab - 8
▪ STEP 9 – Generate a TensorFlow Lite for Microcontrollers C++ model source file
24. Deploy the Model – Copy to MPLAB X Project
▪ STEP 10 – Deploy the model. Copy the contents of the model.cc file from the models directory and use it to replace the contents of model.cpp in your MPLAB X project. Specifically, copy and paste the g_model[] array and g_model_len variable declarations into the model.cpp file defined in your MPLAB project to update the model.
25. Deploy the Model – Build Project and Run
▪ STEP 11 – Rebuild and run the project. The sine wave will be displayed on the screen.
27. The SensiML Workflow: Data-Driven Rapid Model Creation
No Data Science or AI Expertise Required, Prototype Model Testing Without Coding
Data Capture Lab – capture and annotate data
• Time: hours to weeks (depending on application data collection complexity)
• Skill: domain expertise (as required to collect and label events of interest)
Analytics Studio – build, train and validate models
• Time: minutes to hours (depending on degree of model control exerted)
• Skill: none (full AutoML); basic ML concepts (advanced UI tuning); Python programming (full pipeline control)
SensiML Knowledge Pack – embedded inference engine & test validation application
• Time: minutes to weeks (depending on app code integration needs)
• Skill: none (binary firmware with auto-generated I/O wrapper code); embedded programming (integration of SensiML library or C source with user code)
28. SensiML Edge AI Tools Workflow for Microchip Platforms
Workflow: Map Sensors → Label Data → Build Model → Generate Code and Test
Tools used along the way: SAMD21 board, Data Capture Lab, Analytics Studio
Knowledge Pack output options:
• Ready-to-run binary
• Linkable library
• Full source
▪ The SensiML Workflow: Data-Driven Rapid Model Creation
▪ No Data Science or AI Expertise Required, Prototype Model Testing Without Coding
29. SAMD21 ML Kit – Anatomy of the On-Board IMU Sensor
• An inertial measurement unit (IMU) is a system composed of sensors that relay information about a device’s movement, with an integrated accelerometer and gyroscope.
• Accelerometer: measures linear acceleration (changes in velocity), from which velocity, position changes and orientation relative to gravity can be derived. The accelerometer is the device in tablets and smartphones that ensures the image on screen remains upright regardless of orientation. The accelerometer provides information about the linear X-, Y- and Z- directions, allowing the 3 axes of motion to be captured.
• Gyroscope: measures changes in orientation (rotation) and rotational velocity. Microelectromechanical gyroscopes, often called gyrometers, are present in many consumer electronics such as gaming controllers. A gyroscope provides information about the rotational X- (roll), Y- (pitch) and Z- (yaw) directions.
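To make the axes concrete, a common use of the accelerometer alone is estimating static roll and pitch from the gravity vector, using the standard atan2 formulation; this sketch is illustrative and not part of the kit firmware (note that yaw cannot be recovered from the accelerometer, which is one reason the gyroscope is also needed):

```python
import math

def roll_pitch_from_accel(ax: float, ay: float, az: float):
    """Estimate static roll and pitch (in radians) from raw accelerometer
    readings, using gravity as the reference vector."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

# Device lying flat: gravity entirely on the Z axis -> roll = pitch = 0
print(roll_pitch_from_accel(0.0, 0.0, 1.0))  # (0.0, 0.0)
```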
30. This exercise requires the following hardware and software
▪ Windows 10 PC
▪ Microchip Technology
• Hardware
• SAMD21 Machine Learning Evaluation Kit with TDK ICM42688 IMU (EV18H79A), or
• SAMD21 Machine Learning Evaluation Kit with Bosch BMI160 IMU (EV45Y33A)
• Micro-USB cable (more than 1 m in length is recommended)
• Software
• MPLAB® X IDE (https://microchip.com/mplab/mplab-x-ide)
• MPLAB® XC32 compiler (https://microchip.com/mplab/compilers)
• MPLAB® Harmony 3 (https://www.microchip.com/harmony)
• Design Asset
• https://github.com/MicrochipTech/ml-samd21-iot-sensiml-gestures-demo/releases/tag/v0.2 (ml-samd21-iot-sensiml-gestures-demo.zip)
▪ SensiML
• Create your free account for the SensiML Analytics Toolkit “Community Edition” (https://sensiml.com/plans/community-edition/)
• SensiML Data Capture Lab for Windows 10 (https://sensiml.com/download/)
• SensiML Open-Gateway (https://github.com/sensiml/open-gateway)
• Note: On Windows, use Python 3.7 or 3.8.
• Data Asset (SAMD21 + TDK ICM42688 IMU) (MCHP_HO)
31. SensiML Licensing Options
1. ROI benefit can be calculated at https://sensiml.com/plans/#
2. Evaluation code is limited to 1000 inference results per embedded device power cycle
3. Risk-free source code option: purchase source code only after the model is validated
4. Additional users can be enabled at $99/user-month
5. MVGO = Motion, Vibration, Gesture, and Other sub-10 kHz sensors
6. Audio = 10 kHz–20 kHz sensors; Ultra-High Rate (UHR) = >20 kHz sensors
7. Premier support is offered in multiple tiers for direct toolkit support and consultation
Plan editions: compare options at SensiML | Plans
32. Create Your Free Account for the SensiML Analytics Toolkit
▪ Go to https://sensiml.com/plans/community-edition/
▪ Fill in your information and check the “Terms & Conditions” box (yes, after reading the Terms & Conditions)
▪ Then click “Create My Account”
▪ You will receive a confirmation email from SensiML.
33. SensiML Data Capture Lab for Windows 10
▪ Download site: https://sensiml.com/download/
▪ SensiML Data Capture Lab can be downloaded from https://sensiml.cloud/downloads/SensiML_DataCaptureLab_Setup.exe
▪ Install and run Data Capture Lab, then log in with your account.
34. SensiML Open-Gateway
Cloud-based tool via web browser – forwards captured sensor data or inference recognition results via UART to the SensiML Data Capture Lab
▪ You need to install Python 3.7 or 3.8 (https://www.python.org/downloads/windows/)
▪ The SensiML Open-Gateway install instructions are described on SensiML’s GitHub (https://github.com/sensiml/open-gateway)
▪ An installer for the Windows application is available (recommended) – SensiML_OpenGateway_Setup.exe (https://github.com/sensiml/open-gateway/releases/tag/v2022.3.3.0)
▪ Once you confirm that SensiML Open-Gateway runs in your environment, the preparation is finished.
35. Exercise: Introducing the SensiML Toolkit Endpoint AI Workflow
1. Write the data-collection firmware to the SAMD21 with MPLAB X and connect it to the DCL.
2. Import the example data set into the DCL project.
3. Perform additional data collection and labeling.
4. Generate the algorithm with Analytics Studio.
5. Test the model using the Data Capture Lab.
6. Compile the downloaded Knowledge Pack library with MPLAB X and write it to the SAMD21.
7. Verify the actual operation of gesture recognition using Open-Gateway.
SAMD21 Setup – 1 (Option 1)
1. Download the ZIP file from https://github.com/MicrochipTech/ml-samd21-iot-sensiml-gestures-demo/releases/download/v0.2/ml-samd21-iot-sensiml-gestures-demo.zip
2. Plug your SAMD21 evaluation kit into your PC via USB. The SAMD21 should automatically come up as a USB flash drive.
3. Open the ml-samd21-iot-sensiml-gestures-demo.zip archive downloaded previously and locate the gesture classifier demo HEX file corresponding to your sensor make:
1. Bosch IMU: binaries/samd21-iot-sensiml-gestures-demo_bmi160.hex
2. TDK IMU: binaries/samd21-iot-sensiml-gestures-demo_icm42688.hex
4. Drag and drop the HEX file onto the SAMD21 USB drive to program the device.
SAMD21 Setup – 1 (Option 2)
1. Download the ZIP file from https://github.com/MicrochipTech/ml-samd21-iot-imu-data-logger
2. Plug your SAMD21 evaluation kit into your PC via USB.
3. Extract the ZIP file and open the firmware/samd21_iot_imu.X project folder in MPLAB X.
4. Select the SAMD21_IOT_WG_ICM42688 project configuration in MPLAB X.
5. Select SensiML Simple Stream by setting the DATA_STREAMER_FORMAT macro in firmware/src/app_config.h to #define DATA_STREAMER_FORMAT DATA_STREAMER_FORMAT_SMLSS (line 63).
6. Once you’re satisfied with your configuration, click the Make and Program Device button in the toolbar. (Note: the first time, you will be asked to select a debugger.)
SAMD21 Setup – 2
7. Use a terminal emulator such as Tera Term or PuTTY to confirm that the firmware was programmed successfully:
• Baud rate: 115200
• Data bits: 8
• Stop bits: 1
• Parity: None
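As a sketch, step 7 can also be scripted with pyserial instead of an interactive terminal. The port name passed in is a placeholder for whatever COM port your kit enumerates as; check Device Manager and adjust.

```python
# UART settings from the slide, as plain pyserial-compatible values.
SETTINGS = {"baudrate": 115200, "bytesize": 8, "stopbits": 1, "parity": "N"}

def read_first_lines(port, n=10, timeout=1.0):
    """Open the given serial port and return up to n non-empty lines."""
    import serial  # pyserial; install with: pip install pyserial
    lines = []
    with serial.Serial(port, timeout=timeout, **SETTINGS) as ser:
        while len(lines) < n:
            raw = ser.readline()
            if not raw:  # timeout with no data
                break
            text = raw.decode(errors="replace").strip()
            if text:
                lines.append(text)
    return lines

# Example (hypothetical port name; replace with your kit's port):
# print(read_first_lines("COM5"))
```

If the firmware was programmed correctly, the returned lines should contain the streamed sensor output.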
Data Capture Lab (DCL) Setup – 1
1. Open the DCL and create a new project with the “New project” button.
2. Specify “Location” and “Name”.
3. Switch to “Capture” mode by clicking the “Switch Modes” button.
Data Capture Lab (DCL) Setup – 2
4. In capture mode, click “Connect” under “Sensor” at the bottom; the “Sensor Configuration” dialog pops up. -> Click “Next”
5. In the “Select a Device Plugin” window, select the SAMD21 ML Eval Kit item. -> Click “Next”
6. After selecting the device plugin, “Plugin Details” will appear; skip this by clicking “Next” to move forward to “Sensor Properties”. On the properties page, select “Motion (ICM-42688-P)”, then click “Next”.
Data Capture Lab (DCL) Setup – 3
7. Give the sensor configuration a name in “Sensor Configuration”, then click “Save” to save it to your project.
• If you plan on trying different configurations for your application, it’s a good idea to include a summary of the sensor configuration in the name, for example, bmi160-100hz-16g-2000dps.
8. In “SAMD21 ML Eval Kit”, click the three-dot menu, click “Connection Settings”, click “Scan”, select the SAMD21’s “COM” port, click “Done”, and then click “Connect”.
9. The SAMD21 ML Eval Kit should now be streaming to the DCL. (See next page)
• Check out SensiML’s documentation to learn more about how to use the DCL for capturing and annotating data.
Import the example data set into the DCL project – 1
1. Extract the MCHP_HO_SAMD21_TDK.zip*1 archive containing the gestures data set into a working directory. MCHP_HO_SAMD21_TDK contains a captured and labeled data set in CSV format.
2. With your created project open, navigate to the File menu and click the “Import from DCLI…” item.
3. Import “MCHP_HO_SAMD21_TDK.dcli”: select the .dcli file and click “Open”.
4. Click “Next” for both “Import” and “Sensor Columns”.
*1: Download the ZIP file from the embedded link.
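As an aside, a .dcli file is a JSON description of the captures, labels, and sessions to import. The sketch below inspects one programmatically; the schema shown is a simplified assumption for illustration only, so adjust the field names to match your actual file.

```python
import json  # a .dcli file can be parsed with json.load(open(path))

def summarize_dcli(dcli):
    """Return {file_name: segment_count} for a parsed .dcli structure.

    Assumes (for illustration) a top-level list of file entries, each
    with "file_name" and "sessions" holding labeled "segments".
    """
    summary = {}
    for entry in dcli:
        n = sum(len(s.get("segments", [])) for s in entry.get("sessions", []))
        summary[entry["file_name"]] = n
    return summary

# Hypothetical miniature example in the same shape:
example = [
    {"file_name": "Circle_2022-02-07.csv",
     "sessions": [{"session_name": "Session_1",
                   "segments": [{"name": "Label", "value": "Circle",
                                 "start": 100, "end": 700}]}]},
]
print(summarize_dcli(example))  # {'Circle_2022-02-07.csv': 1}
```

A quick summary like this is handy for sanity-checking a large import before pushing it to the cloud project.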
Import the example data set into the DCL project – 2
5. Click “Done” for “Rename Sensor Columns” and “Next” for “Import Settings”.
6. Select “SAMD21_TDK” and click “Select”.
7. When the file import is completed, a green “Import complete” indicator appears at the upper-right.
8. Click “Project Explorer” to see the imported files.
9. The data set is uploaded to your SensiML cloud server automatically.
Import the example data set into the DCL project – 3
9. The imported data sets were captured from the SAMD21 with the TDK IMU:
• On_Desk: SAMD21 kit resting on the desk.
• Idle_Position: Held vertically by hand, not moving.
• Back_and_Forth: Held vertically by hand and moved back and forth.
• Left_and_Right: Held vertically by hand and moved left and right.
• Up_and_Down: Held vertically by hand and moved up and down.
• Circle: Held vertically by hand and moved in a circle, clockwise.
10. Double-click to open an imported file; the captured waveform, label, and metadata information can then be confirmed and modified. -> See next page.
Additional data collection and labeling – 1
1. Click “Switch Modes”, click “Capture”, then click “Connect” to reconnect the DCL and the SAMD21.
2. Modify your “File Metadata” via “+ Add Metadata”.
3. Double-click “Date” under “Project Properties”, click the icon, input your new value (e.g., 220311), and click “Done”.
4. You can now see the added value in the “Date” dropdown menu in “File Metadata”.
Additional data collection and labeling – 2
5. Click the icon, set up your file name in “File Name Template”, and click “Save”. For example, <Label>, <Date>, and <Subject> are checked, and you can choose “Space”, “Underscore”, or “Dash” as the separator.
6. Click “Capture Settings”, check and specify “Max Record Time”, and click “Save”. For example, a Max Record Time of 15 seconds.
7. Click “Start Recording”, record your gesture, then confirm and save in “Save Confirmation”.
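The file name template simply joins the checked fields with the chosen separator. The sketch below reproduces a name like the one recorded in the next step; the field values are illustrative.

```python
def build_file_name(template_fields, values, separator="_"):
    """Join the checked template fields with the chosen separator
    (underscore here, but Space or Dash work the same way)."""
    return separator.join(values[f] for f in template_fields)

name = build_file_name(
    ["Label", "Date", "Subject"],
    {"Label": "Circle", "Date": "2022-02-07", "Subject": "ichiro"},
)
print(name)  # Circle_2022-02-07_ichiro
```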
Additional data collection and labeling – 3
8. Open the newly recorded file “Circle_2022-02-07_ichiro” from “Project Explorer”.
Additional data collection and labeling – 4
9. Open the newly recorded file “Circle_2022-02-07_ichiro” from “Project Explorer”.
10. Attach a “Label” in the “Segments” tab of “Data Capture Properties”:
a. Right-click on the waveform window; blue (start) and red (end) lines are indicated.
b. Adjust the positions of the blue and red lines by left-clicking and dragging to specify the “Start” and “Length”.
c. Right-click and select “Edit” in the “Segments” tab of “File Properties”.
Additional data collection and labeling – 5
d. Select a “Label” from “Select Labels” -> click “Done” and confirm the updated “Label”.
e. Click “Save Changes” at the upper-left of the DCL.
f. The saved data set is updated automatically and synchronized with the SensiML cloud server.
g. Repeat from “Begin Recording”, updating labels, until you reach the expected amount of collected data.
11. For more details about the DCL, see “Capturing Sensor Data”, “Labeling Your Data”, and “Other Useful Features”:
• https://sensiml.com/documentation/guides/getting-started/capturing-sensor-data.html
• https://sensiml.com/documentation/guides/getting-started/labeling-your-data.html
• https://sensiml.com/documentation/guides/getting-started/other-useful-features.html
Generate the algorithm by Analytics Studio – Log In
1. Log in to your SensiML account via web browser (https://app.sensiml.cloud/auth/login/)
Generate the algorithm by Analytics Studio – Home -> Open Project
2. Open your project from “Open Project” via the icon at the left of the project name.
Generate the algorithm by Analytics Studio – Project Summary
3. The project summary page gives you an overview of your project. Each tab also provides more information about the PROJECT DESCRIPTION, CAPTURES, QUERIES, PIPELINES, and KNOWLEDGE PACKS of your project.
4. Open the “Prepare Data” screen to create a query.
Generate the algorithm by Analytics Studio – Prepare Data
5. Querying Data
A query selects sensor data from your project. If you need to filter out certain parts of your sensor data based on metadata or labels, you can specify that here.
a) Query: MCHP_HO (specify your own unique name)
b) Session: Session_1 (the session name specified in the DCL)
c) Label: Label (you could select other labels if you created them in the DCL on the same data)
d) Metadata: segment_uuid, For (differentiates the subset of captures that you want to work with for modeling)
e) Source: GyroscopeX, GyroscopeY, GyroscopeZ, AccelerometerX, AccelerometerY, AccelerometerZ (you can select which sensor data to use for modeling)
f) Query Filter: [For] in [train] (files whose “For” metadata is “train” will be used for modeling)
g) Plot: Segment (either Segment or Samples can be shown)
6. Click “SAVE”
7. Click “Build Model” in the left menu
Generate the algorithm by Analytics Studio – Build Model
6. Building the Model
a) Click “BUILD MODEL”, add a “Pipeline name” in “Create New Pipeline”, and click “BUILD”.
b) Confirm “Query *” in “Input Query” and click “SAVE”; set “200” as the “Window Size” and click “SAVE”.
c) Once you’ve entered the pipeline settings, click the “OPTIMIZE” button. This step uses AutoML techniques to automatically select the best features and machine learning algorithm for the gesture classification task given your input data. This process will usually take several minutes.
d) Once the Build Model optimization step is completed, confirm “AutoML Results” and navigate to “Explore Model”. -> Next Page
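The “Window Size” of 200 set in step (b) means the pipeline classifies fixed-length segments of 200 samples. A minimal sketch of that segmentation, assuming non-overlapping windows (the actual SensiML segmenter may use a configurable slide/overlap):

```python
def segment_windows(samples, window_size=200, step=None):
    """Split a sample stream into fixed-size windows.

    With step unset, windows do not overlap; a smaller step would
    produce overlapping (sliding) windows instead.
    """
    step = step or window_size
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, step)]

# 1000 samples at window size 200 -> 5 non-overlapping windows
windows = segment_windows(list(range(1000)), window_size=200)
print(len(windows))  # 5
```

Each window is what the generated feature extractors and classifier operate on, which is why latency below is reported "per segment of data".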
Generate the algorithm by Analytics Studio – Explore Model
7. In “Explore Model”, you can get more information about the models that were generated by “Build Model”.
https://sensiml.com/documentation/guides/getting-started/exploring-model-details.html
8. Once you have checked “Explore Model”, navigate to “Test Model”.
Generate the algorithm by Analytics Studio – Test Model
9. Select the pipeline created in the previous step, then select one of the models it generated.
10. This text uses “MCHP_HO_rank_4”.
11. Click the upside-down triangle icon in the “For” column and select “Test” to filter the data so that only the test samples are shown.
12. Check the boxes of the test samples you want to evaluate and click “COMPUTE SUMMARY”.
13. Click “RESULTS” to confirm the details of the simulation result. -> Next page
Generate the algorithm by Analytics Studio – Test Model
14. Once completed, you will be presented with a table summarizing the classification results.
Generate the algorithm by Analytics Studio – Download Model 1
15. Finally, navigate to the “Download Model” tab to download your model.
16. Select “Microchip SAMD21 ML Eval Kit” for “HW Platform” and “Library” for “Format”. Click “DOWNLOAD”.
17. The right side of this window shows Knowledge Pack information, including “Device Profile Information” with “Estimated Memory Size” and “Estimated Latency”.
Generate the algorithm by Analytics Studio – Download Model 2
18. Once “Downloading Knowledge Pack, please wait ...” completes, click “OK” to save the file.
Generate the algorithm by Analytics Studio – Memory Usage
19. Device Profile Information
Estimated Memory Usage
SRAM Used: The number of bytes in RAM taken up by the model on a device.
Stack Size: The estimated worst-case stack usage, in bytes, of the entire model on a device.
Flash Used: The number of bytes of flash taken up by the model on a device.
Estimated Latency
Feature Extraction Latency: The estimated number of clock cycles and time the model will spend in feature generation.
Classifier Latency: The estimated number of clock cycles and time the model will spend in the classifier algorithm.
Total Latency: The estimated total number of clock cycles and time the model will take to operate on a segment of data.
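Latency is reported in both clock cycles and time, and the two are related by the core clock. A quick conversion sketch, assuming the SAMD21's maximum 48 MHz CPU clock (the 96,000-cycle figure below is purely hypothetical, not a value from the tool):

```python
CPU_HZ = 48_000_000  # SAMD21 Cortex-M0+ maximum clock; adjust to your config

def cycles_to_ms(cycles, cpu_hz=CPU_HZ):
    """Convert a cycle count to milliseconds at the given core clock."""
    return cycles * 1000.0 / cpu_hz

# e.g. a hypothetical 96,000-cycle total latency:
print(cycles_to_ms(96_000))  # 2.0 ms per segment at 48 MHz
```

This is useful for checking that the total latency fits within one segment's worth of sampling time, so classification keeps up with the data stream.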
Testing a Model Using the Data Capture Lab – 1
1. The Data Capture Lab has two ways to test a model on your dataset:
• Running a model during data collection: connect to a model during data collection and get the model results in real time.
• Running a model in the Project Explorer: run a model on any previously collected CSV or WAV files in your project.
2. This lets you see how your model will perform on real data before flashing it to a device.
3. Running a model during data collection -> Next Page
a) Switch to Capture mode
b) Connect to your device
c) Open the Test Model panel and click Connect
d) Select a Knowledge Pack
e) Select a Session. This is where the Knowledge Pack results will be saved
f) Connect to the Knowledge Pack
g) You will now see your model results in real time, overlaid on the captured waveform
4. (Optional) You can click Start Recording and the Data Capture Lab will save the Knowledge Pack results to your project. This lets you quickly add additional training data to your project.
5. (Optional) In the Save Confirmation screen you can edit or delete the Knowledge Pack results before saving them to your project.
Knowledge Pack Integration – 1
1. When your Knowledge Pack is deployed in the Library format, the archive contains a complete, ready-to-compile MPLAB X project. Follow the steps below to compile it:
a) Unzip the downloaded ZIP file in your working folder.
b) In MPLAB X, open the “samd21-iot-sensiml-template.X” project folder under the firmware folder of the Knowledge Pack.
c) Select the Project Configuration option in the MPLAB X toolbar according to which sensor you’re using. -> ICM42688
d) Select SensiML Simple Stream by setting the DATA_STREAMER_FORMAT macro in Header Files/knowledgepack/knowledgepack_project/app_config.h to #define DATA_STREAMER_FORMAT DATA_STREAMER_FORMAT_SMLSS (line 63).
e) Your project should now be ready for “Make and Program Device (samd21-iot-sensiml-template)”.
f) Connect your SAMD21 Eval Kit via USB cable and click “Make and Program Device”.
Knowledge Pack Integration – 2
2. Firmware Operation
The firmware behavior can be summarized as operating in one of three distinct states, as reflected by the onboard LEDs and described in the table below:

Status | LED Behavior | Description
Error | Red (ERROR) LED lit | Fatal error. (Do you have the correct sensor plugged in?)
Buffer Overflow | Yellow (DATA) and Red (ERROR) LEDs lit for 5 seconds | Processing cannot keep up with real time; the data buffer has been reset.
Running | Yellow (DATA) LED flashing slowly | Firmware is running normally.

3. When operating normally, the firmware prints the classification prediction (classification ID number) and the generated feature vector for each sample window over the UART port. To read the UART port, use a terminal emulator of your choice (e.g., MPLAB Data Visualizer’s integrated terminal tool) with the following settings:
• Baud rate: 115200
• Data bits: 8
• Stop bits: 1
• Parity: None
Classification IDs:
1. Back_and_Forth
2. Circle
3. Idle_Position
4. Left_and_Right
5. On_Desk
6. Up_and_Down
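If you parse the UART output yourself instead of using Open-Gateway, the printed classification ID can be mapped back to a gesture name. A sketch assuming the IDs follow the 1-6 ordering listed above; confirm the actual mapping in your generated Knowledge Pack before relying on it.

```python
# Class-ID-to-label mapping taken from the list above (IDs start at 1).
CLASS_MAP = {
    1: "Back_and_Forth",
    2: "Circle",
    3: "Idle_Position",
    4: "Left_and_Right",
    5: "On_Desk",
    6: "Up_and_Down",
}

def label_for(class_id):
    """Return the gesture name for a printed classification ID."""
    return CLASS_MAP.get(class_id, "Unknown")

print(label_for(2))  # Circle
```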