For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/mvtec/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Olaf Munkelt, Co-founder and Managing Director at MVTec Software GmbH, presents the "Embedded Vision Made Smart: Introduction to the HALCON Embedded Machine Vision Library" tutorial at the May 2017 Embedded Vision Summit.
In this presentation, Munkelt demonstrates how easy it is to develop an embedded vision (identification) application based on the HALCON Embedded standard software library and get it running on a Raspberry Pi. The demonstration showcases the benefits of HALCON Embedded for industrial applications. Munkelt presents how HALCON Embedded allows users to quickly develop a machine vision application on a standard PC and thereby eases programming of an embedded system and shortens development time. Viewers will learn about HALCON Embedded’s speed and robustness, and also how MVTec’s support team provides advice and services for users.
HALCON Embedded, derived from MVTec’s renowned HALCON industrial vision software library, is portable to various microprocessors/DSPs, operating systems, and compilers. Thus, HALCON Embedded is available for numerous smart cameras and other embedded systems and enables system integrators, OEMs and developers of embedded vision applications to bring the full power of HALCON to embedded devices. Users benefit from the most comprehensive machine vision library on the market running on their embedded platforms, reducing development cost and effort.
This document describes the steps to convert a TensorFlow model to a TensorRT engine for inference. It includes steps to parse the model, optimize it, generate a runtime engine, serialize and deserialize the engine, as well as perform inference using the engine. It also provides code snippets for a PReLU plugin implementation in C++.
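As a rough illustration of that workflow, below is a minimal sketch using TensorRT's Python API. It is hedged: the ONNX parser stands in for the older TensorFlow/UFF route the document describes, the file names are assumptions, and the C++ PReLU plugin is not reproduced here.

```python
# Hedged sketch: build, serialize and reload a TensorRT engine.
# Assumes the TensorFlow model was already exported to ONNX ("model.onnx").
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Parse the model and build an optimized engine.
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
serialized = builder.build_serialized_network(network, config)

# Serialize the engine to disk ...
with open("model.engine", "wb") as f:
    f.write(serialized)

# ... and deserialize it later for inference.
runtime = trt.Runtime(logger)
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()  # bind I/O buffers, then execute
```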
The document provides an overview of deep learning, including its past, present, and future. It discusses the concepts of artificial general intelligence, artificial superintelligence, and predictions about their development from experts like Hawking, Musk, and Gates. Key deep learning topics are summarized, such as neural networks, machine learning approaches, important algorithms and researchers, and how deep learning works.
** AI & Deep Learning Using TensorFlow - https://www.edureka.co/ai-deep-learning-with-tensorflow **
This Edureka tutorial will provide you with detailed and comprehensive knowledge of TensorFlow object detection and how it works. It will also show you how to use TensorFlow to detect objects using deep learning methods (a minimal inference sketch follows the topic list below). Below are the topics covered in this tutorial:
1. What is Object Detection?
2. Industrial use of Object Detection
3. Object Detection Workflow
4. What is Tensorflow?
5. Object Detection using Tensorflow - Demo
6. Live Object Detection using Tensorflow - Demo
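As referenced above, here is a minimal, hedged sketch of the inference step with a pre-trained TensorFlow Object Detection API model; the SavedModel directory, image file and score threshold are placeholders, not part of the tutorial itself.

```python
# Illustrative sketch: run a pre-trained TensorFlow Object Detection API
# model exported as a SavedModel. "saved_model_dir" is an assumption.
import tensorflow as tf

detect_fn = tf.saved_model.load("saved_model_dir")

image = tf.io.decode_jpeg(tf.io.read_file("street.jpg"), channels=3)
input_tensor = tf.expand_dims(image, axis=0)         # add batch dimension

detections = detect_fn(input_tensor)                 # dict of output tensors
boxes = detections["detection_boxes"][0].numpy()     # [N, 4] normalized boxes
scores = detections["detection_scores"][0].numpy()   # [N] confidences
classes = detections["detection_classes"][0].numpy() # [N] class ids

for box, score, cls in zip(boxes, scores, classes):
    if score >= 0.5:                                 # keep confident detections
        print(f"class {int(cls)} score {score:.2f} box {box}")
```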
Docker 101 is a series of workshops that aims to help developers (or anyone interested) get started with Docker.
Workshop 101 is where the audience has its first contact with Docker, covering everything from installation to managing multiple containers; a short scripted walk-through using the Docker SDK for Python follows the topic list below.
- Installing Docker
- Managing images (docker rmi, docker pull)
- Basic commands (docker info, docker ps, docker images, docker run, docker commit, docker inspect, docker exec, docker diff, docker stop, docker start)
- Docker registry
- Container life cycle (running, paused, stopped, restarted)
- Dockerfile
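The scripted walk-through referenced above, as a hedged sketch using the Docker SDK for Python (pip install docker) rather than the raw CLI; the image and container names are examples only.

```python
# Rough Python counterpart of the workshop's CLI steps, using the
# Docker SDK for Python; each call mirrors the docker command noted.
import docker

client = docker.from_env()                      # talk to the local daemon

client.images.pull("alpine:latest")             # docker pull
container = client.containers.run(              # docker run
    "alpine:latest", "sleep 60", detach=True, name="demo")

print(client.containers.list())                 # docker ps
print(container.logs())                         # docker logs

container.stop()                                # docker stop
container.start()                               # docker start
container.stop()
container.remove()                              # remove the container
client.images.remove("alpine:latest")           # docker rmi
```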
This document provides an agenda for a presentation on deep learning, neural networks, convolutional neural networks, and interesting applications. The presentation will include introductions to deep learning and how it differs from traditional machine learning by learning feature representations from data. It will cover the history of neural networks and breakthroughs that enabled training of deeper models. Convolutional neural network architectures will be overviewed, including convolutional, pooling, and dense layers. Applications like recommendation systems, natural language processing, and computer vision will also be discussed. There will be a question and answer section.
RoCEv2 is an extension of the original RoCE specification announced in 2010 that brought the benefits of Remote Direct Memory Access (RDMA) I/O architecture to Ethernet-based networks. RoCEv2 addresses the needs of today’s evolving enterprise data centers by enabling routing across Layer 3 networks. Extending RoCE to allow Layer 3 routing provides better traffic isolation and enables hyperscale data center deployments.
Watch the video presentation: http://insidehpc.com/2014/09/slidecast-ibta-releases-updated-specification-rocev2/
This document describes a study that used convolutional neural networks (CNNs) for animal classification from images. The study proposed a novel method for animal face classification using CNN features. The CNN model was trained on images to classify animals into different classes. The model achieved over 90% accuracy on the test data. The authors concluded that CNNs are well-suited for image classification tasks like animal classification due to their ability to automatically extract relevant features from images. Future work could involve classifying other objects using this deep learning approach.
Gives a brief introduction to the emerging containerization technology, the difference between traditional VMs and containers, and the most popular container platform: Docker.
The document discusses R-CNN, a framework for object detection in images using convolutional neural networks. It introduces R-CNN and its components, including region proposal using selective search, feature extraction from proposed regions using a CNN, and classifying regions using an SVM. Later developments like Fast R-CNN and Faster R-CNN improved upon R-CNN by making object detection faster and joint training end-to-end.
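As a rough illustration of the R-CNN pipeline just described, here is a hedged Python sketch; it uses OpenCV's selective search implementation (from opencv-contrib-python), while the CNN feature extractor and per-class SVMs are left as placeholders, since those depend on trained models.

```python
# Sketch of the R-CNN recipe: selective-search proposals, CNN features
# per region, then a classifier. Requires opencv-contrib-python;
# "feature_cnn" and "svm" are hypothetical placeholders.
import cv2

img = cv2.imread("input.jpg")

# 1. Region proposals via selective search.
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()
rects = ss.process()                       # [N, 4] boxes as (x, y, w, h)

# 2./3. Warp each proposal, extract CNN features, classify with an SVM.
for (x, y, w, h) in rects[:200]:           # cap proposals for speed
    crop = cv2.resize(img[y:y + h, x:x + w], (224, 224))
    # features = feature_cnn(crop)         # placeholder: trained CNN
    # label = svm.predict(features)        # placeholder: per-class SVM
```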
Deep generative models can be either generative or discriminative. Generative models directly model the joint distribution of inputs and outputs, while discriminative models directly model the conditional distribution of outputs given inputs. Common deep generative models include restricted Boltzmann machines, deep belief networks, variational autoencoders, generative adversarial networks, and deep convolutional generative adversarial networks. These models use different network architectures and training procedures to generate new examples that resemble samples from the training data distribution.
Imagen: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, by Vitaly Bondar
1. This document describes Imagen, a new state-of-the-art photorealistic text-to-image diffusion model with deep language understanding.
2. Key contributions include using large frozen language models as effective text encoders, a new dynamic thresholding sampling technique for more photorealistic images (sketched in code after this list), and an efficient U-Net architecture.
3. On various benchmarks including COCO FID and a new DrawBench, human evaluations found Imagen generates images that better align with text prompts and outperform other models including DALL-E 2.
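The dynamic thresholding step mentioned in point 2 can be sketched in a few lines of NumPy, following the paper's description: clip each prediction to the p-th percentile s of its absolute pixel values, then rescale when s exceeds 1. The percentile value here is illustrative.

```python
# Dynamic thresholding: instead of statically clipping x0 predictions to
# [-1, 1], clip to the p-th percentile s of |x0| and divide by s if s > 1.
import numpy as np

def dynamic_threshold(x0, p=99.5):
    """x0: predicted image batch, shape [B, H, W, C]."""
    s = np.percentile(np.abs(x0), p, axis=(1, 2, 3), keepdims=True)
    s = np.maximum(s, 1.0)            # never shrink the valid [-1, 1] range
    return np.clip(x0, -s, s) / s     # saturated pixels are pushed inward
```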
This document provides an introduction to deep learning. It discusses the history of machine learning and how neural networks work. Specifically, it describes different types of neural networks like deep belief networks, convolutional neural networks, and recurrent neural networks. It also covers applications of deep learning, as well as popular platforms, frameworks and libraries used for deep learning development. Finally, it demonstrates an example of using the Nvidia DIGITS tool to train a convolutional neural network for image classification of car park images.
PowerPoint presentation on object detection using TensorFlow:
TensorFlow™ is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains.
Docker is a technology that uses lightweight containers to package applications and their dependencies in a standardized way. This allows applications to be easily deployed across different environments without changes to the installation procedure. Docker simplifies DevOps tasks by enabling a "build once, ship anywhere" model through standardized environments and images. Key benefits include faster deployments, increased utilization of resources, and easier integration with continuous delivery and cloud platforms.
This document summarizes a presentation on interpreting and explaining deep ReLU neural networks. It introduces a new technique called the ReLU DNN Unwrapper that can decompose a trained ReLU DNN into local linear models based on activation patterns. This enables the DNN to be interpreted through individual region-based explanations. The presentation describes a new open-source toolkit called Aletheia that implements this technique and provides functionality for interpretation, diagnostics, and simplification of ReLU DNNs. It also provides an example application to credit risk modeling, demonstrating how Aletheia can help identify issues and improve responsible, transparent use of neural networks for high-stake decisions.
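Aletheia's own API is not reproduced here; the NumPy sketch below just shows the algebra that region-based "unwrapping" rests on: for a fixed activation pattern, a ReLU network collapses into a single local linear model.

```python
# Not Aletheia's API -- just the piecewise-linear algebra the unwrapping
# idea rests on: within an input's activation region, the whole ReLU MLP
# equals one linear model y = W_eff @ x + b_eff.
import numpy as np

def local_linear_model(weights, biases, x):
    """weights/biases: per-layer parameters of a ReLU MLP; x: one input."""
    W_eff, b_eff, a = np.eye(x.size), np.zeros(x.size), x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        last = (i == len(weights) - 1)                 # output layer: no ReLU
        d = np.ones_like(z) if last else (z > 0).astype(z.dtype)  # pattern
        W_eff = (d[:, None] * W) @ W_eff               # fold diag(d) @ W in
        b_eff = d * (W @ b_eff + b)
        a = d * z                                      # ReLU activation
    return W_eff, b_eff
```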
NVIDIA compute GPUs and software toolkits are key drivers behind major advancements in machine learning. Of particular interest is a technique called "deep learning", which utilizes convolutional neural networks (CNNs) that have had landslide success in computer vision and seen widespread adoption in fields such as autonomous vehicles, cyber security, and healthcare. This talk presents a high-level introduction to deep learning, discussing core concepts, success stories, and relevant use cases. Additionally, it provides an overview of essential frameworks and workflows for deep learning. Finally, it explores emerging domains for GPU computing such as large-scale graph analytics and in-memory databases.
https://tech.rakuten.co.jp/
Introduction to Convolutional Neural NetworksHannes Hapke
This document provides an introduction to machine learning using convolutional neural networks (CNNs) for image classification. It discusses how to prepare image data, build and train a simple CNN model using Keras, and optimize training using GPUs. The document outlines steps to normalize image sizes, convert images to matrices, save data formats, assemble a CNN in Keras including layers, compilation, and fitting. It provides resources for learning more about CNNs and deep learning frameworks like Keras and TensorFlow.
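A minimal sketch of the kind of Keras model assembly, compilation and fitting the document outlines; the input size, layer widths and class count are placeholders.

```python
# Small CNN classifier in Keras: layers, compilation, fitting.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # 10 classes, adjust as needed
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=5, batch_size=32)  # x_*: image matrices
```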
The document is a presentation about TensorFlow. It begins with an introduction that defines machine learning and deep learning. It then discusses what TensorFlow is, including that it is an open-source library for deep learning and ML, was developed by Google Brain, and uses data flow graphs to represent computations. The presentation explains benefits of TensorFlow like parallelism, distributed execution, and portability. It provides examples of companies using TensorFlow and demonstrates cool projects that can be built with it, like image classification, object detection, and speech recognition. Finally, it concludes that TensorFlow is helping achieve amazing advancements in machine learning.
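The data-flow-graph idea can be illustrated in a few lines of modern TensorFlow, where tf.function traces a Python function into a graph before executing it:

```python
# tf.function traces the Python function into a data flow graph once,
# then executes the graph on subsequent calls.
import tensorflow as tf

@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b          # nodes in the traced graph

x = tf.ones((1, 3))
w = tf.random.normal((3, 2))
b = tf.zeros((2,))
print(affine(x, w, b))                  # first call traces, then runs the graph
```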
Deep neural networks have boosted the convergence of multimedia data analytics in a unified framework shared by practitioners in natural language, vision and speech. Image captioning, lip reading and video sonorization are some of the first applications of a new and exciting field of research exploiting the generalization properties of deep neural representations. This tutorial first reviews the basic neural architectures used to encode and decode vision, text and audio, and then reviews those models that have successfully translated information across modalities. The contents of this tutorial are available at: https://telecombcn-dl.github.io/2019-mmm-tutorial/.
This document provides an introduction to Docker. It discusses why Docker is useful for isolation, being lightweight, simplicity, workflow, and community. It describes the Docker engine, daemon, and CLI. It explains how Docker Hub provides image storage and automated builds. It outlines the Docker installation process and common workflows like finding images, pulling, running, stopping, and removing containers and images. It promotes Docker for building local images and using host volumes.
This document introduces Epoxy, an open source library from Airbnb for building complex RecyclerView adapters. Epoxy makes it easy to dynamically update adapter contents and includes features like view state handling and diffing. It encourages using a Model-View-ViewModel pattern where the view models (EpoxyModels) specify what views to display and how to bind data to them. EpoxyModels can be created manually or through annotations. The controller (EpoxyController) manages the models and notifies the adapter of changes.
Reproducible AI using MLflow and PyTorchDatabricks
Model reproducibility is becoming the next frontier for successful AI model building and deployment in both research and production scenarios. In this talk, we will show you how to build reproducible AI models and workflows using PyTorch and MLflow that can be shared across your teams, providing traceability and speeding up collaboration on AI projects.
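A hedged sketch of that PyTorch-plus-MLflow pattern, with a toy model and illustrative values; the talk's actual workflows are not reproduced here.

```python
# Log parameters, metrics and the model itself so a run can be reproduced
# and shared; the tiny model and values are illustrative only.
import mlflow
import mlflow.pytorch
import torch

model = torch.nn.Linear(4, 1)

with mlflow.start_run():
    mlflow.log_param("lr", 1e-3)               # hyperparameters
    mlflow.log_metric("train_loss", 0.42)      # metrics per step/epoch
    mlflow.pytorch.log_model(model, "model")   # versioned model artifact

# Later, anyone on the team can reload the exact model:
# model = mlflow.pytorch.load_model("runs:/<run_id>/model")
```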
The document summarizes Spark SQL, which is a Spark module for structured data processing. It introduces key concepts like RDDs, DataFrames, and interacting with data sources. The architecture of Spark SQL is explained, including how it works with different languages and data sources through its schema RDD abstraction. Features of Spark SQL are covered such as its integration with Spark programs, unified data access, compatibility with Hive, and standard connectivity.
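A minimal PySpark sketch of the concepts summarized above (a DataFrame read from a data source, exposed to SQL via a temporary view); the JSON path is an assumption.

```python
# Spark SQL basics: DataFrame from a data source, temp view, SQL query.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

df = spark.read.json("people.json")        # DataFrame from a data source
df.printSchema()

df.createOrReplaceTempView("people")       # expose it to SQL
adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()
```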
arivis Vision4D - IMAGE ANALYSIS WHICH FITS YOUR DATA (2 pages, 2020), by Johannes Amon
arivis Vision4D is the leading software for exploring and analysing large multi-dimensional image datasets created by confocal, Light Sheet, Multi-Photon or Electron Microscopy. arivis Vision4D can handle several hundred Gigabytes or terabytes of such image data as easily as if they were just a few megabytes in size.
This presentation helps readers understand the basics of digital photogrammetry, the digital photogrammetry software available nowadays, and the uses of various software packages in the field of RS and GIS.
arivis Vision4D - IMAGE ANALYSIS WHICH FITS YOUR DATA (4 pages, 2020), by Johannes Amon
arivis Vision4D is the leading software for exploring and analysing large multi-dimensional image datasets created by confocal, Light Sheet, Multi-Photon or Electron Microscopy. arivis Vision4D can handle several hundred Gigabytes or terabytes of such image data as easily as if they were just a few megabytes in size.
Cognex provides machine vision and barcode reading solutions for a variety of applications including inspection, guidance, measurement, identification, and quality control. Their product portfolio includes In-Sight vision systems, Checker vision sensors, VisionPro vision software, and DataMan barcode readers. The DataMan 500 image-based barcode reader helped a snack food manufacturer improve read rates from 20-30% to 100%, saving them $250,000 annually by eliminating misshipments and reducing labor costs.
Cognex BarCode Readers and Vision Systems, by Beth Denner
MAJ Enterprises introduces Cognex barcode readers and vision systems that can help prevent production line errors and save time and money. Cognex offers machine vision systems for inspection, guidance, measurement, presence detection, and optical character recognition. They also provide barcode readers for 1D and 2D codes for identification across various applications. Cognex machine vision can instantly improve production processes by detecting issues early.
Presagis delivers simulation and graphics software, and services to defense and aeronautic organizations worldwide. We provide end-users, system integrators, developers, and manufacturers with advanced tools and dedicated services to help them achieve rich, immersive virtual environments, and helping design the cockpits of tomorrow.
ESA Automation produces automation solutions for industrial applications. It offers integrated PLC, HMI, motion control, and IT capabilities on a single device. The document discusses ESA's focus on innovation and creating customer-oriented solutions. It provides examples of automation applications ESA has developed for machine tools, packaging, ceramic processing, and other industries.
leewayhertz.com - HOW IS A VISION TRANSFORMER MODEL ViT BUILT AND IMPLEMENTED.pdf, by robertsamuel23
Recent years have seen deep learning completely transform computer vision and image processing. Convolutional neural networks (CNNs) have been the driving force behind this transformation due to their ability to efficiently process large amounts of data, enabling the extraction of even the smallest image features.
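For a taste of how a vision transformer is actually built, its first building block, splitting an image into fixed-size patches and projecting each to an embedding, can be sketched in NumPy; the sizes and the random projection matrix are illustrative (a real model learns the projection).

```python
# ViT patch embedding: split the image into P x P patches, flatten each,
# and linearly project it to a D-dimensional token.
import numpy as np

H = W = 224; C = 3; P = 16; D = 768             # image, patch, embed sizes
img = np.random.rand(H, W, C)

patches = img.reshape(H // P, P, W // P, P, C) \
             .transpose(0, 2, 1, 3, 4) \
             .reshape(-1, P * P * C)             # [196, 768] flattened patches

W_proj = np.random.randn(P * P * C, D) * 0.02    # learned in a real model
tokens = patches @ W_proj                        # [196, D] patch embeddings
```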
ParanaVision is a company that provides innovative solutions for image analysis, image coding, video processing, computer vision, and pattern recognition technologies. Their mission is to become a pioneering firm nationally and internationally in these fields. Notable projects include face recognition systems, people counter systems, video coding solutions, video summarization, surveillance systems, tracking systems, and converting 2D content to 3D.
The Idea Team Company was created by experienced professionals in the design and development of software systems.
The Idea Team specializes in developing software systems according to each client's individual requirements. The high professional level of our staff and the extensive experience gained in solving practical problems in various areas allow us to develop and promote effective IT solutions for all areas of business.
ROBOCORTEX INTERNSHIP: Augmented reality application on mobile device, by augmented-reality.fr
Robocortex is a French company that provides augmented reality solutions for industrial maintenance using mobile devices and glasses. They are seeking an intern to design and develop software components for an augmented reality application that will demonstrate their AugmentedPro technology. The application will allow users to identify industrial equipment, view relevant information and instructions, collect data, and generate reports to assist with maintenance tasks. The intern will work with the R&D team to acquire knowledge, analyze features, design interfaces, integrate localization components, and document their contributions. Strong programming and problem-solving skills are required for this 6-9 month internship position located near Nice, France.
AXONIM Devices offers digital consumer device design and new product development services. Highly skilled engineering teams carry out complete development projects, starting from the customer's initial idea through the future device's architecture and the design of functional and structural models.
We prepare full design documentation for manufacturing the device enclosure, the 3D model, and the design documentation of the digital device. Our specialists select and purchase components, run production and assembly of printed circuit boards (PCBs), and test the assembled device.
Having a strong background in embedded systems design, our developers execute all required tasks within the development cycle: schematic design, PCB design, firmware design and development, FPGA design, digital signal processing, porting and adapting embedded operating systems for a given platform, BSP and driver development, application development, operator interface or user application development, device prototyping, and manufacturing support.
Archon VR provides a virtual reality control room environment for monitoring and controlling complex robotic infrastructure. Their solution integrates data feeds like video and telemetry in an interactive and scalable VR space, allowing operators to visualize information and collaborate. This reduces cognitive load compared to traditional control rooms. Their product is well-timed as robotics are becoming more autonomous and data-driven. Their potential customers are robotics and industrial companies managing distributed infrastructure. They will generate revenue from software licensing and professional services. The founding team has experience in VR development, robotics, business development, and artificial intelligence.
This document provides information on Cognex's machine vision and barcode reading products. It discusses their machine vision systems for inspection, guidance, measurement, and other applications. It also covers their In-Sight and Checker vision systems, as well as their DataMan barcode readers for 1D and 2D codes. Their DataMan 500 series is highlighted as providing high performance barcode identification for factory automation.
Matrox Design Assistant is a flowchart-based machine vision software that allows users to create vision applications without writing code. It provides an intuitive integrated development environment where applications are built by constructing a flowchart using visual tools for image processing, measurement, pattern matching, and more. The software also enables users to design a web-based operator interface. It supports a wide range of GigE and USB3 cameras and can deploy applications to Matrox vision systems and smart cameras.
Detection of medical instruments project - PART 2, by Sairam Adithya
This presentation is a continuation of the previous one. It clearly explains the work process for each individual step, with snippets of code taken from the source code, along with output visualizations, advantages, and a conclusion.
This document discusses embedded vision, which uses computer vision techniques in embedded systems. Embedded vision allows machines to understand their environment visually. It has become more feasible due to powerful yet efficient processors. Key points covered include programming devices for embedded vision, cameras and sensors, semiconductor components, case studies, and applications in industries like manufacturing, medical, automotive, security, and consumer goods. Embedded vision is used for automation, medical imaging, driver monitoring, physical security, and remote control applications.
Satish Lokkoju is seeking a challenging position that allows growth in skills like embedded systems, computer vision, and machine learning. He has a B.E. in electrical engineering and an M.Sc. in economics. His 5.5 years of experience includes developing video and audio algorithms at Samsung and an H.264 encoder at Squid Design Systems. Current projects involve image segmentation and facial tracking. He is proficient in C/C++, ARM, and DSP tools and holds a patent in coding unit partitioning.
Similar to "Embedded Vision Made Smart: Introduction to the HALCON Embedded Machine Vision Library," a Presentation from MVTec (20)
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/squeezing-the-last-milliwatt-and-cubic-millimeter-from-smart-cameras-using-the-latest-fpgas-and-drams-a-presentation-from-lattice-semiconductor-and-etron-technology-america/
Hussein Osman, Segment Marketing Director at Lattice Semiconductor, and Richard Crisp, Vice President and Chief Scientist at Etron Technology America, co-present the “Squeezing the Last Milliwatt and Cubic Millimeter from Smart Cameras Using the Latest FPGAs and DRAMs” tutorial at the May 2024 Embedded Vision Summit.
Attaining the lowest power, size and cost for a smart camera requires carefully matching the hardware to the actual application requirements. General-purpose media processors may appear attractive and easy to use, but often include unneeded features which increase system size, weight, power and cost. “Right-sizing” the camera design for the application requirements can save significant power, cost, size and weight.
In this talk, Osman and Crisp show how you can leverage an advanced power-optimized FPGA incorporating a soft RISC-V core combined with a video-bandwidth, low-pin-count DRAM to cut power consumption roughly in half for endpoint smart cameras used in automotive, industrial and other applications. They examine techniques for reducing power, cost and size including system architecture, memory architecture, packaging, and signaling and termination schemes. They also explore techniques for enhancing system reliability.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/maximize-your-ai-compatibility-with-flexible-pre-and-post-processing-a-presentation-from-flex-logix/
Jayson Bethurem, Vice President of Marketing and Business Development at Flex Logix, presents the “Maximize Your AI Compatibility with Flexible Pre- and Post-processing” tutorial at the May 2024 Embedded Vision Summit.
At a time when IC fabrication costs are skyrocketing and applications have increased in complexity, it is important to minimize design risks and maximize flexibility. In this presentation, you’ll learn how embedding FPGA technology can solve these problems—expanding your market access by enabling more external interfaces, accelerating your compute envelope and increasing data security.
Embedded FPGA IP is highly efficient for pre- and post-processing data and can implement a variety of signal processing tasks such as image signal processing (defective pixel and color correction, for example), packet processing from network interfaces and signal processing from data converters (filtering). Additionally, this IP can manage data movement in and out of your AI engine as well as provide an adaptable protocol layer to connect to a variety of external interfaces, like USB and MIPI cameras. Flex Logix eFPGA IP is easy to integrate, high performing, lightweight and supported across more process nodes than any other supplier’s.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/addressing-tomorrows-sensor-fusion-and-processing-needs-with-cadences-newest-processors-a-presentation-from-cadence/
Amol Borkar, Product Marketing Director at Cadence, presents the “Addressing Tomorrow’s Sensor Fusion and Processing Needs with Cadence’s Newest Processors” tutorial at the May 2024 Embedded Vision Summit.
From ADAS to autonomous vehicles to smartphones, the number and variety of sensors used in edge devices is increasing: radar, LiDAR, time-of-flight sensors and multiple cameras are more and more common. And, as sensors have improved, the data rates associated with them have also increased. Traditionally, a dedicated processor has been utilized to process data from each sensor independently. Today, however, there is a growing need for a single, unified processor capable of processing multimodal sensor data utilizing both classical and AI algorithms and implementing sensor fusion for robust perception.
In this talk, Borkar introduces the new Vision 341 DSP and Vision 331 DSP from Cadence. These cores provide a versatile single-DSP solution for various workloads, including image sensing, radar, LiDAR and AI tasks. He explores the architecture of these new processors, highlights their performance and efficiency and outlines the associated developer tools and software building blocks.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/silicon-slip-ups-the-ten-most-common-errors-processor-suppliers-make-number-four-will-amaze-you-a-presentation-from-bdti/
Phil Lapsley, Co-founder and Vice President of BDTI, presents the “Silicon Slip-ups: The Ten Most Common Errors Processor Suppliers Make (Number Four Will Amaze You!)” tutorial at the May 2024 Embedded Vision Summit.
For over 30 years, BDTI has provided engineering, evaluation and advisory services to processor suppliers and companies that use processors in products. The company has seen a lot, including some classic mistakes. (You know, things like: the chip has an accelerator, but no easy way to program it… or you can only program it using an obscure proprietary framework. Or it has an ISP that only works with one image sensor. Or the development tools promise a lot but fall far short. Or the device drivers don’t work. Or the documentation is deficient.)
Phil Lapsley, BDTI co-founder, presents a fun and fast-paced review of some of the most common processor provider errors, ones seen repeatedly at BDTI. If you’re a processor provider, you’ll learn things you can do to avoid these goofs—and if you’re a processor user, you’ll learn about things to watch for when selecting your next processor!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-arms-machine-learning-solution-enables-vision-transformers-at-the-edge-a-presentation-from-arm/
Stephen Su, Senior Segment Marketing Manager at Arm, presents the “How Arm’s Machine Learning Solution Enables Vision Transformers at the Edge” tutorial at the May 2024 Embedded Vision Summit.
AI at the edge has been transforming over the last few years, with newer use cases running more efficiently and securely. Most edge AI workloads were initially run on CPUs, but machine learning accelerators have gradually been integrated into SoCs, providing more efficient solutions. At the same time, ChatGPT has driven a sudden surge in interest in transformer-based models, which are primarily deployed using cloud resources. Soon, many transformer-based models will be modified to run effectively on edge devices.
In this presentation, Su explains the role of transformer-based models in vision applications and the challenges of implementing transformer models at the edge. Next, he introduces the latest Arm machine learning solution and how it enables the deployment of transformer-based vision networks at the edge. Finally, he shares an example implementation of a transformer-based embedded vision use case and uses this to contrast such solutions with those based on traditional CNN networks.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/nx-evos-a-new-enterprise-operating-system-for-video-and-visual-ai-a-presentation-from-network-optix/
Nathan Wheeler, Co-founder and CEO of Network Optix, presents the “Nx EVOS: A New Enterprise Operating System for Video and Visual AI” tutorial at the May 2024 Embedded Vision Summit.
In most software domains, developers don’t write code at the bare-metal level; they build applications on top of operating systems, which provide commonly needed functionality. Yet, today, developers of video and AI applications are effectively writing their applications at the bare-metal level, building the “plumbing” themselves to handle basics like device discovery, storage management, security and model deployment. These developers need an operating system that supports their applications so they can focus on what really matters: the core functionality of their product.
Nx EVOS is the world’s first enterprise video operating system. EVOS revolutionizes video management, offering device discovery, bandwidth optimization and security features—in cloud and on device. Its support for AI pipelines and user management enables scalable deployment of AI applications across environments and platforms, and it’s trusted by leading organizations such as SpaceX. In this presentation, you’ll learn how Nx EVOS can save you time and effort in building your next vision product.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/opencv-for-high-performance-low-power-vision-applications-on-snapdragon-a-presentation-from-qualcomm/
Xin Zhong, Computer Vision Product Manager at Qualcomm Technologies, presents the “OpenCV for High-performance, Low-power Vision Applications on Snapdragon” tutorial at the May 2024 Embedded Vision Summit.
For decades, the OpenCV software library has been popular for developing computer vision applications. However, developers have found it challenging to create efficient implementations of their OpenCV applications on processors optimized for edge applications, like the Qualcomm Snapdragon family. As part of its comprehensive support for computer vision application developers, Qualcomm provides a variety of tools to enable developers to take full advantage of the heterogeneous computing resources in the Snapdragon processors.
In this talk, Zhong introduces a new element of Qualcomm’s computer vision tools suite: a version of OpenCV optimized for Snapdragon platforms, which allows developers to leverage and port their existing OpenCV-based applications seamlessly to Snapdragon platforms. Supporting OpenCV v4.x and later releases, this implementation contains unique Qualcomm-specific accelerations of OpenCV and OpenCV extension APIs. Zhong explains how this library enables developers to leverage existing OpenCV code to achieve superior performance and power savings on Snapdragon platforms.
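For context, here are a few lines of stock OpenCV Python of the sort such applications are built from; the Qualcomm-specific accelerations and extension APIs described in the talk are not shown here, and the file names and thresholds are examples.

```python
# Plain OpenCV (Python): a classic edge-detection mini-pipeline.
import cv2

img = cv2.imread("frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)        # classic edge detection
cv2.imwrite("edges.png", edges)
```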
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/deploying-large-models-on-the-edge-success-stories-and-challenges-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal models on the edge for a variety of use cases in consumer and enterprise markets. He examines key challenges that must be overcome before large models at the edge can reach their full commercial potential. He also highlights how Qualcomm is addressing these challenges through upgraded processor hardware, improved developer tools and a comprehensive library of fully optimized AI models in the Qualcomm AI Hub.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/scaling-vision-based-edge-ai-solutions-from-prototype-to-global-deployment-a-presentation-from-network-optix/
Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit.
The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into reliable and scalable products. However, integrating recent hardware, software and algorithm innovations into prime-time-ready products is quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles.
First, building on Network Optix’s 14 years of experience, Professor Kaptein details how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Second, Kaptein discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/whats-next-in-on-device-generative-ai-a-presentation-from-qualcomm/
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit.
The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to understand using multiple types of sensors. This new wave of approaches is poised to revolutionize user experiences, disrupt industries and enable powerful new capabilities. For generative AI to reach its full potential, however, we must deploy it on edge devices, providing improved latency, pervasive interaction and enhanced privacy.
In this talk, Hou shares Qualcomm’s vision of the compelling opportunities enabled by efficient generative AI at the edge. He also identifies the key challenges that the industry must overcome to realize the massive potential of these technologies. And he highlights research and product development work that Qualcomm is doing to lead the way via an end-to-end system approach—including techniques for efficient on-device execution of LLMs, LVMs and LMMs, methods for orchestration of large models at the edge and approaches for adaptation and personalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/learning-compact-dnn-models-for-embedded-vision-a-presentation-from-the-university-of-maryland-at-college-park/
Shuvra Bhattacharyya, Professor at the University of Maryland at College Park, presents the “Learning Compact DNN Models for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit.
In this talk, Bhattacharyya explores methods to transform large deep neural network (DNN) models into effective compact models. The transformation process that he focuses on—from large to compact DNN form—is referred to as pruning. Pruning involves the removal of neurons or parameters from a neural network. When performed strategically, pruning can lead to significant reductions in computational complexity without significant degradation in accuracy. It is sometimes even possible to increase accuracy through pruning.
Pruning provides a general approach for facilitating real-time inference in resource-constrained embedded computer vision systems. Bhattacharyya provides an overview of important aspects to consider when applying or developing a DNN pruning method and presents details on a recently introduced pruning method called NeuroGRS. NeuroGRS considers structures and trained weights jointly throughout the pruning process and can result in significantly more compact models compared to other pruning methods.
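NeuroGRS itself is not shown here; as a hedged illustration of the basic operation that pruning methods build on, the sketch below applies generic L1 magnitude pruning using PyTorch's built-in utilities.

```python
# Generic magnitude pruning: zero out the 50% of weights with the
# smallest L1 magnitude, then make the result permanent.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(256, 64)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # drop 50% of weights

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")

prune.remove(layer, "weight")    # bake the pruning mask into the weights
```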
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/introduction-to-computer-vision-with-cnns-a-presentation-from-mohammad-haghighat/
Independent consultant Mohammad Haghighat presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques, then transitions to the basics of machine learning and convolutional neural networks (CNNs), showing how CNNs are used in visual perception.
Haghighat illustrates the building blocks and computational elements of neural networks through examples. This session provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/selecting-tools-for-developing-monitoring-and-maintaining-ml-models-a-presentation-from-yummly/
Parshad Patel, Data Scientist at Yummly, presents the “Selecting Tools for Developing, Monitoring and Maintaining ML Models” tutorial at the May 2023 Embedded Vision Summit.
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers.
Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to get started with modern tools without making big investments and leaving the door open to evolve tool selection over time. In this talk, Patel presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/building-accelerated-gstreamer-applications-for-video-and-audio-ai-a-presentation-from-wave-spectrum/
Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, presents the “Building Accelerated GStreamer Applications for Video and Audio AI” tutorial at the May 2023 Embedded Vision Summit.
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms are often quite demanding in terms of processing performance, in many cases developers need to find ways to accelerate key GStreamer building blocks, taking advantage of specialized features of their target processor or co-processor.
In this talk, Babukr introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
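For readers new to GStreamer, here is a minimal pipeline sketch in Python via the standard PyGObject bindings. The element names are stock upstream plugins; on an embedded target, hardware-accelerated elements would typically replace decodebin's software decoders. The file name is hypothetical.

```python
# Minimal GStreamer playback pipeline using the PyGObject bindings.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# filesrc -> decodebin -> videoconvert -> autovideosink
pipeline = Gst.parse_launch(
    "filesrc location=sample.mp4 ! decodebin ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until an error occurs or the stream ends, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```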
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/understanding-selecting-and-optimizing-object-detectors-for-edge-applications-a-presentation-from-walmart-global-tech/
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit.
Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances. In many of these applications, it’s necessary or desirable to implement object detection at the edge.
In this presentation, Laskar explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, and examines their strengths and weaknesses. And he provides guidance on how to evaluate and select an object detector for an edge application.
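Evaluating a detector, whatever its architecture, starts from the overlap between predicted and ground-truth boxes. As a concrete reference (illustrative only, not from the talk), here is the standard intersection-over-union computation:

```python
# Intersection over union (IoU) for axis-aligned boxes (x1, y1, x2, y2).
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 0.1428...
```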
This document provides an introduction to LiDAR technology for machine perception. It begins with an overview of LiDAR fundamentals and principles, explaining that LiDAR works similarly to radar but uses laser light instead of radio waves. It then discusses different types of LiDAR sensors and scanning methods, as well as strategies for processing LiDAR point cloud data. The document concludes with examples of common LiDAR representations and neural network architectures used for tasks like object detection from LiDAR point clouds.
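As a flavor of the point-cloud representations mentioned above, here is a minimal sketch that reduces a raw point cloud to occupied voxel indices, one common preprocessing step before feeding LiDAR data to a detection network. The voxel size and the synthetic cloud are arbitrary assumptions.

```python
# Reduce an (N, 3) LiDAR point cloud to a set of occupied voxel indices.
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Quantize points to the voxel grid and drop duplicates."""
    indices = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(indices, axis=0)

cloud = np.random.uniform(-10.0, 10.0, size=(10_000, 3))  # synthetic scan
occupied = voxelize(cloud)
print(occupied.shape)  # (number_of_occupied_voxels, 3)
```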
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/vision-language-representations-for-robotics-a-presentation-from-the-university-of-pennsylvania/
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit.
In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing challenge for computer scientists and robotics engineers. In this presentation, Jayaraman provides insights into cutting-edge techniques being used to help robots better understand their surroundings, learn new skills with minimal guidance and become more capable of performing complex tasks.
Jayaraman discusses recent advances in unsupervised representation learning and explains how these approaches can be used to build visual representations that are appropriate for a controller that decides how the robot should act. In particular, he presents insights from his research group’s recent work on how to represent the constituent objects and entities in a visual scene, and how to combine vision and language in a way that permits effectively translating language-based task descriptions into images depicting the robot’s goals.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency (ScyllaDB)
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Main news related to the CCS TSI 2023 (2023/1695), Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the talk I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, which was held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants on site and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The video recording (in Czech) of the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk focuses on how to collect data from a variety of sources, leverage that data for RAG and other GenAI use cases, and finally chart your course to production.
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework (MobSF) is a free and open-source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers identify security vulnerabilities, malicious behaviours, and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binary and source code formats for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment for building and executing scenario-based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk, we discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, examine the techniques that helped keep web resources available for Ukrainians, and see how AWS improved DDoS protection for all customers based on the experience gained in Ukraine.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered:
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
What is an RPA CoE? Session 1 – CoE Vision (DianaGray10)
In the first session, we will review the organization's vision and how it impacts the CoE structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine the CoE structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda; German-language edition)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we would like to help you with all of it!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expenses, for example using a person document instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course, we will explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder will introduce you to this new world. It will give you the tools and know-how to keep track of what is going on. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
- Insightful presentations covering two practical applications of the Power Grid Model.
- An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
- An interactive brainstorming session to discuss and propose new feature requests.
- An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system has Redis, MongoDB, and stream processing based on ksqlDB. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
Nordic Marketo Engage User Group, June 13, 2024
"Embedded Vision Made Smart: Introduction to the HALCON Embedded Machine Vision Library," a Presentation from MVTec
1. MVTec Software GmbH
MVTec is a leading manufacturer of standard software for machine vision and has been in business for over 20 years. MVTec products are used in all demanding areas of imaging: the semiconductor industry, web inspection, quality control and inspection applications in general, medicine, 3D vision, and surveillance.
Dr. Olaf Munkelt is one of the co-founders of MVTec Software GmbH and, since its founding in 1996, also one of the company's managing directors. Among other duties, his main areas of work include supervising the sales department and representing the company.
Since 2009, Dr. Olaf Munkelt has also served as chairman of the board of directors of the machine vision group within the VDMA (German Engineering Federation).
2. HALCON has been on the market for over 20 years!
HALCON is the comprehensive standard software for machine vision with an integrated
development environment (HDevelop) that is used worldwide. HALCON’s flexible
architecture facilitates rapid development of any kind of machine vision application.
MVTec HALCON provides outstanding performance and comprehensive support for multi-core platforms, special instruction sets like AVX2 and NEON, as well as GPU acceleration. It serves all industries, with a library used in hundreds of thousands of installations in all areas of imaging, such as blob analysis, morphology, matching, measuring, identification, and 3D vision.
The software secures your investment by supporting a wide range of operating systems
and providing interfaces to hundreds of industrial cameras and frame grabbers, in
particular by supporting standards like GenICam, GigE Vision, and USB3 Vision.
HALCON Embedded is the comprehensive standard software for machine vision running
on your special platform. With this, MVTec HALCON can be ported to various
microprocessors/DSPs, operating systems, and compilers.
HALCON Embedded allows software engineers to develop the machine vision part of
applications on a standard platform and thereby greatly eases the programming of an
embedded system.
Simply put: Develop on a PC, and the application runs on an embedded system. HALCON
Embedded is available for various smart cameras and other embedded platforms.
3. HALCON 9.0 was chosen to integrate the various sensor data types and perform all the
complex computations in a single development environment. The stereo-vision pair
consists of two Prosilica GC2450 cameras.
These cameras use the GigE Vision communication standard. HALCON supports GigE Vision and allows for a quick setup of each camera in software via its automatic code generation feature. The Time-of-Flight sensor is a Swiss Ranger SR4000.
The tactile force and finger position sensors are custom designed and fabricated. The Swiss Ranger, tactile, and finger position sensors use custom C/C++ code to perform depth measurements and tactile object recognition. HALCON's Extension Package Programming feature allows all of this custom code to be imported into the integrated development environment, HDevelop, which is used for rapid prototyping of our applications.
The stereo camera calibration methods inside HALCON are used to calibrate the stereo pair and will eventually be used to calibrate the Swiss Ranger as well.
4. More than 150 million tons of tomatoes are produced every year around the world. With seedlings costing €0.25, plant growers
such as Westland Plantenkwekerij (WPK; Rotterdam, the Netherlands) must ensure that the seeds they receive from their suppliers
will germinate as expected.
Within their contract research organization based at the Wageningen University & Research Centre, the Netherlands, Rick van de
Zedde and his colleagues have been developing vision applications in the agrifood industry for more than 20 years. In this project,
they studied how numerous growers evaluate their tomato seedlings. Although multiple characteristics such as shape and color
were evaluated by the growers, it became clear that the seedling sorting process could be robustly automated by measuring the
mass of each seedling.
FBR designed the machine vision system and Flier Systems, Barendrecht, the Netherlands, currently builds the machine.
Now installed at WPK's facility in Made, the Netherlands, the machine is capable of sorting tomato seedlings at a rate of
18,000/hr. According to Erik van der Arend, owner and director of WPK, the current version of the system sorts seedlings on the
basis of the plant's biomass; it will eventually be upgraded to sort plants based on multiple characteristics such as the plant's
shape, size of the leaves, and defects in shape or color.
As each pot enters the vision station, an optical switch detects the presence of the plant, and 10 individual images of the plant are captured from numerous angles. The plants are classified according to volume after a high-speed calculation.
"Because of the relatively complex and nonuniform nature of the seedling plants," says van de Zedde, "ten cameras were
required to properly recreate a three-dimensional model of the plants. And still our software is able to generate a 3-D model and
calculate the biomass within 25 msec per seedling."
To calibrate these cameras, a flat checkerboard pattern is used in conjunction with a modified stereo-vision calibration algorithm
available in the HALCON software package from MVTec Software, Munich, Germany.
Because plants are illuminated by a high-frequency fluorescent backlight, the 10 captured images provide multiple views of the
seedlings under inspection from different viewpoints. By subtracting the background from each of these images, a silhouette of
the plant at different viewpoints can be created.
Then by using a technique known as Space Carving, a 3-D rendering of the plant can be created (see Fig. 3). Using this method,
the biomass of each plant can be computed and the data used to classify each plant.
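As a rough illustration of the space-carving step described above, the sketch below keeps only the voxels whose projections fall inside every camera's silhouette. The `project` callables stand in for the calibrated camera models and are hypothetical; the article's actual HALCON-based implementation is not shown.

```python
# Toy space-carving sketch: carve a voxel grid using binary silhouettes
# from several calibrated cameras. `projections` holds one hypothetical
# world-to-pixel function per camera.
import numpy as np

def carve(voxel_centers, silhouettes, projections):
    """Keep voxels whose projection lands inside every silhouette."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    for silhouette, project in zip(silhouettes, projections):
        u, v = project(voxel_centers)          # pixel coords per voxel
        u = np.round(u).astype(int)
        v = np.round(v).astype(int)
        h, w = silhouette.shape
        in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside = np.zeros(len(voxel_centers), dtype=bool)
        inside[in_img] = silhouette[v[in_img], u[in_img]] > 0
        keep &= inside                         # carve away everything else
    return voxel_centers[keep]

# Biomass estimate: surviving voxel count times the volume of one voxel.
```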
At present, FBR's van de Zedde is installing the next machine at a breeder and is upgrading the vision system to sort seedlings
based on other plant characteristics, including color. "But the productivity gain in the quality of sorting is already huge," says
WPK's van der Arend, "since our investment in this machine will be recovered within four years."
Author: Andy Wilson
Article kindly provided by Vision Systems Design.
5. Whatever programming language you are using in your project, HALCON is likely to have
an appropriate interface.
Through the interface, programmers have access to more than 2500 HALCON operators, and therefore to easy-to-use, fast, and robust cutting-edge machine vision algorithms.
6. Don't ask what HALCON can do. The more interesting question is what HALCON can't do ;-)
7. Don't ask what HALCON can do. The more interesting question is what HALCON can't do ;-)
8. Reading of data codes
MVTec software products read ECC 200, QR, Micro QR, Aztec, and PDF417 codes of any size, even with modules smaller than 2x2 pixels. They can also read data codes with a distorted finder pattern. In addition to printed codes, the software robustly reads "Direct Part Mark" (DPM) codes and etched codes on different surfaces and under varying illumination conditions.
Reading of bar codes
All common bar codes can be read regardless of orientation, even with an element width of only 1.5 pixels or if the code is partly occluded. The error-free identification of bar codes with our software products is constantly being improved.
It bears mentioning that our software reads bar codes even in defocused as well as significantly overexposed images, in which the code bars appear extremely narrow. It is able to read individual bars that have shrunk to only five percent of their original width due to overexposure. In addition, bar codes with up to 95% of "print growth" can be reliably identified as well. Print growth happens when the bars become much too wide during printing because too much ink was used.
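As a small taste of how such identification tasks look in code, here is a hedged bar code reading sketch using HALCON's official Python interface. The operator names mirror the HALCON library (create_bar_code_model, find_bar_code), but the exact binding signatures and the image file name are assumptions to check against the interface documentation.

```python
# Bar code reading sketch with HALCON's Python interface.
import halcon as ha

image = ha.read_image('barcode_sample')   # hypothetical image file
model = ha.create_bar_code_model([], [])  # default model parameters
regions, decoded = ha.find_bar_code(image, model, 'ean-13')
print(decoded)                            # decoded symbol contents
```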
2D metrology
With 2D metrology, you can measure the dimensions of objects that can be represented by specific geometric primitives. The
geometric shapes that can be measured comprise circles, ellipses, rectangles, and lines.
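A hedged sketch of what 2D metrology looks like via HALCON's Python interface: the operator names follow the HALCON metrology model (create_metrology_model, add_metrology_object_circle_measure, apply_metrology_model), while the concrete coordinates, measure parameters, and image name are assumptions.

```python
# 2D metrology sketch: fit a circle to an object's edges, read its radius.
import halcon as ha

image = ha.read_image('ring_part')        # hypothetical image
metrology = ha.create_metrology_model()
# Expected circle near (row 240, col 320) with radius ~110 px; the
# remaining values are measure-region length/width, sigma, and threshold.
index = ha.add_metrology_object_circle_measure(
    metrology, 240, 320, 110, 20, 5, 1.0, 30, [], [])
ha.apply_metrology_model(image, metrology)
radius = ha.get_metrology_object_result(
    metrology, index, 'all', 'result_type', 'radius')
print('measured radius:', radius)
```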
Calibration
Using a calibration plate, you can easily calibrate your cameras with HALCON.
Surface-based 3D matching
Recognition and 3D pose determination of arbitrary 3D objects: the cutting-edge 3D matching of MVTec software determines the
position and orientation of 3D objects represented by their CAD model.
Shape-based matching
The software's superior subpixel-accurate matching technology finds objects robustly and accurately in real time. It does so even if they are rotated, scaled, perspectively distorted, locally deformed, partially occluded or located outside of the image, or undergo nonlinear illumination changes. It can process images with 8 or 16 bits and also handles color or multi-channel images. Objects can be trained from images or from CAD-like data. Moreover, MVTec's unique component-based matching is able to locate objects that are composed of multiple parts that can move with respect to each other. Our local deformable matching finds objects with deformed or wrinkled surfaces, and our perspective deformable matching robustly localizes objects with perspective distortions.
10. Besides other technologies, HALCON offers a wide range of highly sophisticated matching technologies:
HALCON makes it possible to locate objects with arbitrary orientation in 3D (3D alignment). It provides the well-established shape-based matching, which works even with color images, the unique component-based matching, and the well-proven normalized cross correlation.
Since 2008, HALCON has featured two more matching technologies that can be used for 3D alignment:
- Descriptor-based matching. This revolutionary new matching technology is able to find perspectively distorted objects. It is based on the detection of interest points whose gray values are clearly differentiated from neighboring areas (brightness, curvature, corners, spots).
- Perspective deformable matching. This new matching technology is also able to match perspectively distorted objects. In contrast to descriptor-based matching, perspective deformable matching is edge-based (like HALCON's shape-based matching) and can thus best be used with objects that have clearly distinguishable edges.
13. The following automatic operator parallelization (AOP) concepts were already available in earlier HALCON versions and are continuously being enhanced:
AOP automatically detects the number of available CPUs and splits the image accordingly.
The programmer can choose an ROI in an image that may even have an arbitrary shape (that is, HALCON is not restricted to shapes like rectangles, which are common in most other libraries). AOP processes only the ROI of the image. This saves significant processing time, because the runtime depends only on the ROI size, not on the image size (as it does in various other libraries). A sketch of this ROI mechanism follows below.
HALCON processes multi-channel images (e.g., color images) with an unlimited number of channels. AOP automatically processes every channel in parallel.
Moreover, HALCON is able to process image sequences in parallel.
If HALCON has to process a tuple of regions as the output of a segmentation (e.g., OCR, blob analysis), which can comprise thousands of single regions, they are also processed by AOP.
If XLDs (HALCON's term for subpixel contours) have to be extracted, the separate groups of contours are also split by AOP. The same holds when contours are further processed or features are extracted.
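To illustrate the ROI idea, here is a hedged sketch using HALCON's Python interface: reduce_domain restricts all subsequent processing to an arbitrarily shaped region, so runtime scales with the ROI rather than the full image. The image name and threshold values are assumptions.

```python
# ROI processing sketch: restrict work to an arbitrarily shaped region.
import halcon as ha

image = ha.read_image('board')            # hypothetical image
roi = ha.gen_circle(240, 320, 150)        # circular ROI (row, col, radius)
reduced = ha.reduce_domain(image, roi)    # image now carries the ROI domain
region = ha.threshold(reduced, 128, 255)  # operators run only inside the ROI
```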
14. Many HALCON customers already build multi-threaded applications. For example, they use multiple threads for different tasks such as image acquisition, processing, and visualization.
HALCON's integrated development environment (IDE), HDevelop, supports parallel programming and thus allows concurrency. HALCON also supports event-based processing. The call stack and program line views have been extended by a thread view, which is helpful for debugging multi-threaded programs.
In the thread view, an overview of all currently active threads is shown. By selecting one of the threads, it is possible to step through single threads for debugging.
15. In the "traditional" approach, you create the machine vision part of your application in HDevelop. Once you are satisfied with your program code, you use HDevelop to export it as C++, C, C#, or Visual Basic source code. This code is then inserted into your application's source code, where the other essential parts of the application (e.g., user interface, process integration) are realized. Once done, the final application is compiled.
This approach comes with a number of drawbacks, e.g.:
• If alterations to the machine vision part of the application are necessary, the code needs to be exported again and the whole application then needs to be compiled once more. This makes prototyping quite tedious and slow.
• If changes are made to the machine vision part of the application, the whole application also needs to be recertified, for example to maintain a certain safety status.
16. With HDevEngine, it is also possible to integrate HDevelop code into an application in a smart way: HDevEngine allows you to directly load and execute HDevelop programs and procedures from within your C++ or C# application.
This brings a lot of advantages compared to the "traditional" approach, e.g.:
• Alterations to the vision part of your application can be made "on the fly" without recompiling the whole application. This allows you to quickly create different prototypes (rapid prototyping) and thus results in a shorter time to market.
• Modularizing different parts of your program (e.g., having a separate "vision module") complies with state-of-the-art software development standards and makes it possible to assign different modules to dedicated machine vision, UX, or process integration experts.
18. Here you can see HDevEngine code running on a Raspberry Pi; it does not differ from the code you would use on a standard PC. Seeing the code that is being executed live on the machine used to be quite difficult, but with HALCON Embedded, based on HALCON 13, it is easy! Consequently, the maintenance of your embedded vision application becomes much easier as well.
Furthermore, all the advantages of HDevEngine apply to HALCON Embedded as well. Switching between standard PCs and embedded devices is easily possible.
19. Please find the video at https://www.youtube.com/watch?v=bVc27OYAvh4.
The following is shown in the video:
Using a shell via PuTTY, we are connected to a Raspberry Pi. First, we set some environment variables with setup.env. Then we start the application. The goal of the application is to find an arbitrary number of SD cards in an image grabbed with the camera module of the Raspberry Pi. At first there are no matches, then 1, then 2. However, when we place more than 2 SD cards under the camera, the number doesn't change... a bug!
Now we open HDevelop and connect to the process running on the Raspberry Pi. Here we can see the code that is running on the Raspberry Pi, stop the program and step through it, inspect the variables, and even save variables for further debugging. After we have saved an image, we stop debugging, and the application is running again.
Now we open the same HDevelop program on our local computer. Here we can again step through the program and look for the bug in detail. First, some morphology is performed to reduce the region in which we look for the SD cards. Then we notice that find_shape_model, which looks for the trained shape of the logo on the SD cards, returns only two Score values: only two SD cards are found. One of the parameters of find_shape_model is NumMatches, the maximum number of returned matches. It is 2, which is probably our bug. We can either set it to 3 to find a maximum of 3 matches, or to 0 to find all matches.
Lastly, using WinSCP, we copy this procedure to our Raspberry Pi and restart the application, without recompiling! Now all three matches are found. Problem solved.
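The fix from the video, sketched with HALCON's Python interface. The parameter order follows the HALCON reference for find_shape_model (AngleStart, AngleExtent, MinScore, NumMatches, MaxOverlap, SubPixel, NumLevels, Greediness); the file names and concrete parameter values are assumptions.

```python
# find_shape_model sketch: NumMatches = 0 returns all matches instead of
# capping them at 2, which was the bug shown in the video.
import halcon as ha

image = ha.read_image('sd_cards')           # hypothetical test image
model = ha.read_shape_model('sd_logo.shm')  # previously trained model
row, col, angle, score = ha.find_shape_model(
    image, model,
    -0.39, 0.78,      # AngleStart, AngleExtent (radians)
    0.5,              # MinScore
    0,                # NumMatches: 0 = return all matches
    0.5,              # MaxOverlap
    'least_squares',  # SubPixel
    0,                # NumLevels: 0 = take the model's setting
    0.9)              # Greediness
print(len(score), 'matches found')
```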
21. Focusing on industrial vision applications
MVTec focuses on industrial applications. Open source software often lacks features like measuring tools, blob analysis, or matching.
Comprehensive documentation and fast development
MVTec HALCON offers documentation for every user and level, ranging from "Quick Guides" up to "Solution Guides", which provide extensive step-by-step tutorials for solving various vision problems (for example, how to read 2D data codes, or how to perform 3D vision and image mosaicking). In the reference documentation, every single one of the more than 2000 HALCON operators is documented extensively. In some cases, the reference even includes the mathematical background and the scientific publications on which the method is based. In addition, MVTec regularly releases tutorial videos for both HALCON and HALCON Embedded.
HALCON saves money through continuous maintenance
Maintaining software is a big cost driver. Once the software has been delivered, the customer starts working with it. From this stage of the product life cycle on, maintenance becomes necessary, too. As this stage also tends to be the longest, maintenance costs play a large role in the long term. However, many customers are not aware of these "hidden" costs.
MVTec's software products are continuously adapted to new architectures such as AVX2, ARM64, etc. MVTec also ensures compatibility of its software across many versions over a long period of time.
You can rely on HALCON
Our customers' products require high quality and performance, so we have to provide the best vision library possible to match their needs.
The software engineering behind HALCON meets the highest standards: in 2015, one error per 12,600 lines of code was found. This corresponds to 0.08 errors per 1000 lines of code.
22. It is essential to direct customers' attention towards the total cost of software. The price of a single SDK or runtime license is only marginally relevant to the overall costs. Other cost factors have a much bigger impact, such as training costs (for new customers), implementation costs (for new projects), or maintenance and opportunity costs (for products that have already been delivered).
With HALCON, you can plan and calculate all of these costs from the beginning.
25. MVTec HALCON is the comprehensive standard software for machine vision with an integrated
development environment (HDevelop) that is used worldwide.
It enables cost savings and improved time to market. HALCON’s flexible architecture facilitates
rapid development of any kind of machine vision application.
MVTec HALCON provides outstanding performance and comprehensive support for multi-core platforms, special instruction sets like AVX2 and NEON, as well as GPU acceleration. It serves all industries, with a library used in hundreds of thousands of installations in all areas of imaging, such as blob analysis, morphology, matching, measuring, identification, and 3D vision.
The software secures your investment by supporting a wide range of operating systems and
providing interfaces to hundreds of industrial cameras and frame grabbers, in particular by
supporting standards like GenICam, GigE Vision, and USB3 Vision.
Machine vision software today has to fulfill many requirements:
The faster the software, the faster the inspection task can be performed and the higher the throughput of the machine, which results in lower production costs.
The need for cost reduction leads to more and more sophisticated and complex machine vision tasks. To tackle all upcoming challenges and stay one step ahead of the competition, it is important that the machine vision software provides tools for many different tasks, which can be combined easily to solve even complex applications.
Last but not least, the results should of course be as accurate as possible, and the software must be proven to run without errors even in difficult industrial environments.
HALCON fulfills all of these requirements.
26. What is HALCON Embedded?
• HALCON Embedded means HALCON running on your special platform. HALCON is
portable to various microprocessors/DSPs, operating systems, and compilers.
• HALCON Embedded lets you exploit the power of a comprehensive machine vision
library on embedded systems.
• HALCON Embedded allows you to develop the software part of your machine vision application on a standard platform and thereby greatly eases the programming of an embedded system. Simply put: develop on a PC, and let the application run on an embedded system.
27. MVTec is the only software manufacturer worldwide solely focusing on developing
software for machine vision.
MVTec employs highly qualified machine vision experts with up to 30 years of experience in this technology.
We love to solve vision problems!
MVTec is dedicated to machine vision software.
Therefore, the company's vision statement is built on this dedication:
• Maintain the technological market leadership for machine vision software
• Remain a successful manufacturer of worldwide standard software products for the machine vision industry
• Represent a unique competence center for image processing algorithms
28. MVTec is dedicated to machine vision software: This passion for machine vision
is the driving force of the entire MVTec staff, including the management. We
personally stand behind the high quality of our products and services.
29. This example shows how fast and easy application development can be using HDevelop. Here, we grab live images from a camera and count the number of bottles in a crate with just 10 lines of code.
In HDevelop, the script is written very simply: we grab live images from a camera, smooth the image to reduce the impact of image noise, apply a threshold, and compute the connected regions to get the separate bottles.
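The slide shows this in HDevelop script; as a rough equivalent, here is a sketch using HALCON's official Python interface. The open_framegrabber argument list follows HALCON's default template and, like the smoothing mask and threshold values, should be treated as an assumption.

```python
# Bottle counting sketch: grab, smooth, threshold, count connected regions.
import halcon as ha

acq = ha.open_framegrabber(
    'GigEVision2', 0, 0, 0, 0, 0, 0, 'progressive', -1, 'default',
    -1, 'false', 'default', 'default', 0, -1)
image = ha.grab_image(acq)
smoothed = ha.mean_image(image, 9, 9)    # suppress image noise
region = ha.threshold(smoothed, 0, 128)  # dark bottles on a bright crate
bottles = ha.connection(region)          # split into connected components
print(ha.count_obj(bottles), 'bottles detected')
ha.close_framegrabber(acq)
```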
30. HDevelop's integrated Image Acquisition Assistant lets you easily detect, connect, and configure all available cameras and frame grabbers with just a few mouse clicks. Parameters can be adjusted using an intuitive graphical user interface and verified directly in a live image.
Finally, the settings can be added to the script as automatically generated code segments.
31. In addition to a comfortable full-text editor that actively gives suggestions and supports auto-completion of code, HDevelop comes with a wide range of helpful features. It also offers dialog-based operator and parameter selection, as well as a structured menu hierarchy that enables you to find the best operator for your application as quickly as possible.
32. HDevelop offers many intuitive debugging features. The values of all variables in use are always accessible through the variables window, and the results of image processing operations are instantly visualized in the graphics window. Furthermore, the flow of the program can be controlled easily via breakpoints and stepping options.
33. Once the image processing part of your program is ready, it can easily be exported to
the programming language of your choice using the file menu.