This document discusses using machine vision techniques like Haar cascade filters and eigenfaces for real-time facial recognition and detection. It proposes using OpenCV to detect faces in video frames, clustering the detected faces to remove "ghost" faces, representing each face as a vector of eigenface coefficients, and searching Solr to identify faces or add new identities. It also discusses challenges like inconsistent face detection and proposes solutions like adaptive clustering parameters and windowing video frames to add context.
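The pipeline summarized above (detect faces, project each onto eigenfaces, then match coefficient vectors) can be sketched in a few lines. The following is a minimal NumPy illustration of the eigenface-representation step only, with random arrays standing in for the cropped grayscale faces that OpenCV's Haar cascade detector would produce; the data and names here are hypothetical, not taken from the talk.

```python
import numpy as np

# Toy stand-in for cropped, grayscale face images flattened to vectors.
# In the pipeline described above these would come from OpenCV's Haar
# cascade detector; here random data is used purely for illustration.
rng = np.random.default_rng(0)
faces = rng.random((20, 64 * 64))     # 20 "faces", 64x64 pixels each

# Eigenfaces are the principal components of the mean-centered faces.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
# SVD avoids forming the huge pixel-by-pixel covariance matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                  # keep the top-10 components

def to_coefficients(face):
    """Represent a face as its vector of eigenface coefficients."""
    return eigenfaces @ (face - mean_face)

# Identification: nearest stored coefficient vector (what the summary
# proposes doing with a Solr search over indexed coefficients).
gallery = np.array([to_coefficients(f) for f in faces])
query = to_coefficients(faces[3])
match = int(np.argmin(np.linalg.norm(gallery - query, axis=1)))
print(match)                          # the query matches face 3
```

In the full system, the ten-dimensional coefficient vectors would be indexed in Solr, so that identification becomes a nearest-neighbor search and an unmatched query becomes a new identity.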
How Augment your Reality: Different perspective on the Reality / Virtuality C... - Matteo Valoriani
If you think there's been a lot of talk about Augmented Reality and Virtual Reality this year, 2018 is going to blow you away. Apple with ARKit, Google with ARCore, Microsoft with HoloLens, Facebook with Oculus, and many others are working to transform our reality with new products and services in the not-too-distant future. Apple, Microsoft, Google, and Facebook are each approaching AR/VR from a different perspective, and in this session we will try to understand how these technologies work and which best suits different areas (Industry 4.0, tourism, healthcare, ...).
Slides from Portland Machine Learning meetup, April 13th.
Abstract: You've heard all the cool tech companies are using them, but what are Convolutional Neural Networks (CNNs) good for, and what is convolution, anyway? For that matter, what is a Neural Network? This talk will include a look at some applications of CNNs, an explanation of how CNNs work, and what the different layers in a CNN do. No explicit background is required, so if you have no idea what a neural network is, that's OK.
Machine Learning Tokyo - Deep Neural Networks for Video - NumberBoost - Alex Conway
Slides from a talk I gave at the Machine Learning Tokyo meetup group on 2019-03-18.
More info here: https://www.meetup.com/Machine-Learning-Tokyo/events/259467268/
Feel free to reach out if you ever need to build a computer vision system or need data labelled to train machine learning models :)
www.numberboost.com
Deep learning is making news across the country as one of the most promising techniques in machine learning research. However, these methods are complex to implement, finicky to tune, and state-of-the-art accuracy is only achieved by a few experts in the field. In this session, we give a beginner-friendly explanation of deep learning using neural networks—what it is, what it does, and how; and introduce the concept of deep features, which allows you to obtain great performance with reduced running times and data set sizes. We then show how these methods can easily be deployed on GPU instances (G2) on Amazon EC2.
Discovering Your AI Super Powers - Tips and Tricks to Jumpstart your AI Projects - Wee Hyong Tok
In this session, we will share cutting-edge deep learning innovations and present emerging trends in the AI community. This session is for data scientists and developers who have a keen interest in getting started on an AI project and want to learn the tools of the trade. We will draw on practical experience from working on various AI projects and share the key learnings and pitfalls.
Materials for a special lecture at the Department of Information and Communication Engineering, Changwon National University, May 23, 2019.
* Date: May 23, 2019 (Thursday), 13:00 ~
* Venue: Changwon National University, Building 51, Room 328
* Speaker: Kim Seong-su (김성수), Principal Researcher, Electronics and Telecommunications Research Institute (ETRI)
* Hosted by: Changwon Industry Promotion Agency (창원산업진흥원)
* Organized by: Changwon Smart Mobile App Support Center (창원시 스마트모바일앱지원센터)
The goal of this report is to present our biometrics and security course project: face recognition on the Labeled Faces in the Wild dataset using convolutional neural networks with the GraphLab framework.
Chaos Engineering - The Art of Breaking Things in Production - Keet Sugathadasa
This is an introduction to Chaos Engineering - the art of breaking things in production. It is presented by two Site Reliability Engineers and explains the concepts, history, and principles of Chaos Engineering, along with a demonstration.
The technical talk is given in this video: https://youtu.be/GMwtQYFlojU
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Unsupervised Computer Vision: The Current State of the Art - TJ Torres
This presentation was originally given at a styling research meeting at Stitch Fix, where I talk about some of the recent progress in unsupervised deep learning methods for image analysis. It includes descriptions of Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), their hybrid (VAE/GAN), Generative Moment Matching Networks (GMMNs), and Adversarial Autoencoders.
Computer vision has started to achieve some very impressive results over the last 5-10 years. It is now possible to quickly and reliably detect faces, recognize and localize target images, and even classify pictures of objects into generic categories. Unfortunately, knowledge of these techniques remains largely confined to academia. In this session we’ll go over some of the tools available, placing an emphasis on exploring the ideas and algorithms behind their design.
To show how these components can be put together, a sample system will be developed over the course of the presentation. Starting with standard image descriptors, we’ll first see how to do direct image recognition. We’ll then extend that into a simple object classifier, which will be able to distinguish (for example) between images which contain a bicycle and those that don’t.
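As a rough sketch of what direct image recognition with a standard image descriptor can look like (an illustration under our own assumptions, not the talk's actual code), a normalized intensity histogram can serve as the descriptor and nearest-neighbor distance as the matcher:

```python
import numpy as np

def histogram_descriptor(image, bins=16):
    """A simple global descriptor: normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Toy grayscale "images" with values in [0, 1); real input would be
# decoded pixel data from an image library.
rng = np.random.default_rng(1)
database = [rng.random((32, 32)) for _ in range(5)]
descriptors = np.array([histogram_descriptor(im) for im in database])

# Direct recognition: pick the database image whose descriptor is
# closest to the query's. Here the query is database image 2 itself,
# so the match is exact.
query_descriptor = histogram_descriptor(database[2])
best = int(np.argmin(np.linalg.norm(descriptors - query_descriptor, axis=1)))
print(best)  # → 2
```

Extending this into the simple object classifier the abstract mentions would amount to training a classifier (e.g., k-NN or an SVM) on such descriptors computed from labeled bicycle / no-bicycle images.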
Deep Neural Networks for Video Applications at the Edge - Alex Conway
Slides from my talk about deep learning for video applications and edge computing, given on 16 May 2019 at the Oslo Machine Learning Meetup, sponsored by InMeta!
More details about the event here:
https://www.meetup.com/Oslo-Maskinlaering/events/261318845/
I covered the following topics:
* Neural network crash course
* Convolutional neural networks
* Recurrent neural networks
* Object detection
* Counting people getting in/out of transport using deep learning
* Edge computing
* Creating labelled training datasets using sebenz.ai
SDVIs and In-Situ Visualization on TACC's Stampede - Intel® Software
Speaker: Paul Navrátil, Texas Advanced Computing Center (TACC)
The design emphasis for supercomputing systems has moved from raw performance to performance-per-watt, and as a result, supercomputing architectures are converging on processors with wide vector units and many processing cores per chip. Such processors are capable of performant image rendering purely in software. This improved capability is fortuitous, since the prevailing homogeneous system designs lack dedicated, hardware-accelerated rendering subsystems for use in data visualization. Reliance on this “software-defined” rendering capability will grow in importance since, due to growing data sizes, visualizations must be performed on the same machine where the data is produced. Further, as data sizes outgrow disk I/O capacity, visualization will be increasingly incorporated into the simulation code itself (in situ visualization).
This talk presents recent work in high-fidelity visualization using the OSPRay ray tracing framework on TACC’s local and remote visualization systems. We present work using OSPRay within the ParaView Catalyst in situ framework from Kitware, including capitalizing on opportunities to reduce the cost of data migrating through VTK filters for visualization. We highlight the performance opportunities and advantages of Intel® Advanced Vector Extensions 512, the memory system improvements possible with Intel® Xeon Phi™ processor multi-channel DRAM (MCDRAM), and the Intel® Omni-Path Architecture interconnect.
How to create a neural network that detects people wearing masks: the complete A-to-Z workflow for creating a neural network that recognizes images.
A short intro to the paper: https://blog.fulcrum.rocks/neural-network-image-recognition
Search is the Tip of the Spear for Your B2B eCommerce Strategy - Lucidworks
With ecommerce experiencing explosive growth, it seems intuitive that the B2B segment of that ecosystem is mirroring the same trajectory. That said, B2B has very different needs when it comes to transacting with the same style of experiences that we see in B2C. For instance, B2B ecommerce is about precision findability, whereas B2C customers can convert at higher rates when they’re just browsing online. In order for the B2B buying experience to be successful, search needs to be tuned to meet the unique needs of the segment.
In this webinar with Forrester senior analyst Joe Cicman, you’ll learn:
-Which verticals in B2B will drive the most growth, and how machine-learning powered personalization tactics can be deployed to support those specific verticals
-Why an omnichannel selling approach must be deployed in order to see success in B2B
-How deploying content search capabilities will support a longer sales cycle at scale
-What the next steps are to support a robust B2B commerce strategy supported by new technology
Speakers
Joe Cicman, Senior Analyst, Forrester
Jenny Gomez, VP of Marketing, Lucidworks
Customer loyalty starts with quickly responding to your customer’s needs. When it comes to resolving open support cases, time is of the essence. Time spent searching for answers adds up and creates inefficiencies in resolving cases at scale. Relevant answers need to be a few clicks away and easily accessible for agents directly from their service console.
We will explore how Lucidworks’ Agent Insights application automatically connects agents with the correct answers and resources. You’ll learn how to:
-Configure a proactive widget in an agent’s case view page to access resources across third-party systems (such as SharePoint, Confluence, JIRA, Zendesk, and ServiceNow).
-Easily set up query pipelines to autonomously route assets and resources that are relevant to the case-at-hand—directly to the right agent.
-Identify subject matter experts within your support data and access tribal knowledge with lightning-fast speed.
Similar to Solr and Machine Vision - Scott Cote, Lucidworks & Trevor Grant, IBM
How Crate & Barrel Connects Shoppers with Relevant Products - Lucidworks
Lunch and Learn during Retail TouchPoints #RIC21 virtual event.
***
Crate & Barrel’s previous search solution couldn’t provide its shoppers with an online search and browse experience consistent with the customer-centric Crate & Barrel brand. Meanwhile, Crate & Barrel merchandisers spent the bulk of their time manually creating and maintaining search rules. The search experience impacted customer retention, loyalty, and revenue growth.
Join this lunch & learn for an interactive chat on how Crate & Barrel partnered with Lucidworks to:
-Improve search and browse by modernizing the technology stack with ML-based personalization and merchandising solutions
-Enhance the experience for both shoppers and merchandisers
-Explore signals to transform the omnichannel shopping experience
Questions? Visit https://lucidworks.com/contact/
Learn how to guide customers to relevant products using eCommerce search, hyper-personalisation, and recommendations in our ‘Best-In-Class Retail Product Discovery’ webinar.
Nowadays, shoppers want their online experience to be engaging, inspirational and fulfilling. They want to find what they’re looking for quickly and easily. If the sought after item isn’t available, they want the next best product or content surfaced to them. They want a website to understand their goals as though they were talking to a sales assistant in person, in-store.
In this webinar, we explore IMRG industry data insights and a best-in-class example of retail product discovery. You’ll learn:
- How AI can drive increased revenue through hyper-personalised experiences
- How user intent can be easily understood and results displayed immediately
- How merchandisers can be empowered to curate results and product placement – all without having to rely on IT.
Presented by:
Dave Hawkins, Principal Sales Engineer - Lucidworks
Matthew Walsh, Director of Data & Retail - IMRG
Connected Experiences Are Personalized Experiences - Lucidworks
Many companies claim personalization and omnichannel capabilities are top priorities. Few are able to deliver on those experiences.
For a recent Lucidworks-commissioned study, Forrester Consulting surveyed 350+ global business decision-makers to see what gets in the way of achieving these goals. They discovered that inefficient technology, lack of behavioral insights, and failure to tie initiatives to enterprise-wide goals are some of the most frequent blockers to personalization success.
Join guest speaker, Forrester VP and Principal Analyst, Brendan Witcher, and Lucidworks CEO, Will Hayes, to hear the results of the Forrester Consulting study, how to avoid “digital blindness,” and how to apply VoC data in real-time to delight customers with personalized experiences connected across every touchpoint.
In this webinar, you’ll learn:
- Why companies who utilize real-time customer signals report more effective personalization
- How to connect employees and customers in a shared experience through search and browse
- How Lucidworks clients Lenovo, Morgan Stanley and Red Hat fast-tracked improvements in conversion, engagement and customer satisfaction
Featuring
- Will Hayes, CEO, Lucidworks
- Brendan Witcher, VP, Principal Analyst, Forrester
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc... - Lucidworks
Intelligent Policing. Leveraging Data to more effectively Serve Communities.
Policing in the next decade is anticipated to be very different from historical methods: more data driven, more focused on the intricacies of the communities served, and more open and collaborative to make informed recommendations a reality. Whether it's social populations, NIBRS, or organization improvement that's the driver, the IT requirement is largely the same: provide access to large volumes of siloed data to gain a full 360-degree understanding of existing connections and patterns for improved insight and recommendation.
Join us for a round table discussion of how the Toronto Police Service is better serving their community through deploying a unified intelligent data platform.
Data innovation improves officers' engagement with existing data and streamlines investigation workflows by enhancing collaboration. This improved visibility into existing police data allows for a more intelligent and responsive police force.
In this webinar, we'll cover:
-The technology needs of an intelligent police force.
-How a Global Search improves an officer's interaction with existing data.
Featuring:
-Simon Taylor, VP, Worldwide Channels & Alliances, Lucidworks
-Michael Cizmar, Managing Director, MC+A
-Ian Williams, Manager of Analytics & Innovation, Toronto Police Service
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C... - Lucidworks
Wish your conversion rates were higher? Can’t figure out how to efficiently and effectively serve all the visitors on your site? Embarrassed by the quality of your product discovery experience? The bar is high and the influx of online shopping over recent months has reminded us that the opportunities are real. We’re all deep in holiday prep, but let’s take a few minutes to think about January 2021 and beyond. How can we position ourselves for success with our customers and against our competition?
Grab your lunch and let’s dive into three strategies that need to be part of your 2021 roadmap. You don’t need an army to get there. But you do need to take action and capitalize on the shoppers abandoning the product discovery journey on your site.
In this session, attendees will find out how to:
-Take control of merchandising at scale;
-Implement hands-free search relevancy; and
-Address personalization challenges.
AI-Powered Linguistics and Search with Fusion and Rosette - Lucidworks
For a personalized search experience, search curation requires robust text interpretation, data enrichment, relevancy tuning and recommendations. In order to achieve this, language and entity identification are crucial.
For teams working on search applications, advanced language packages allow them to achieve greater recall without sacrificing precision.
Join us for a guided tour of our new Advanced Linguistics packages, available in Fusion, thanks to the technology partnership between Lucidworks and Basistech.
We’ll explore the application of language identification and entity extraction in the context of search, along with practical examples of personalizing search and enhancing entity extraction.
In this webinar, we’ll cover:
-How Fusion uses the Rosette Basic Linguistics and Entity Extraction packages
-Tips for improving language identification and treatment as well as data enrichment for personalization
-Speech2 demo modeling Active Recommendation
-How to use Rosette’s packages with Fusion Pipelines to build custom entities for specific domain use cases
Featuring:
-Radu Miclaus, Director of Product, AI and Cloud, Lucidworks
-Robert Lucarini, Senior Software Engineer, Lucidworks
-Nick Belanger, Solutions Engineer, Basis Technology
The Service Industry After COVID-19: The Soul of Service in a Virtual Moment - Lucidworks
Before COVID-19, almost 80% of the US workforce worked in service jobs that involve in-person interaction with strangers. Now, leaders of service organizations must reshape their offerings during the pandemic and prepare for whatever the new normal turns out to be. Our three panelists will share ideas for adapting their service businesses, now that closer-than-six-feet isn’t an option.
Join Lucidworks as we talk shop with 3 service business leaders, covering:
-Common impacts of the pandemic on service businesses (and what to do about them),
-How service teams can maintain a human touch across virtual channels, and
-Plans for the future, before and after the pandemic subsides.
Featuring
-Sara Nathan, President & CEO, AMIGOS
-Anthony Carruesco, Founder, AC Fly Fishing
-Sara Bradley, Chef and Proprietor, Freight House
-Justin Sears, VP Product Marketing, Lucidworks
Webinar: Smart Answers for Employee and Customer Support After COVID-19 - Europe - Lucidworks
The COVID-19 pandemic has forced companies to support far more customers and employees through digital channels than ever before. Many are turning to chatbots to help meet increasing demand, but traditional rules-based approaches can’t keep up. Our new Smart Answers add-on to Lucidworks Fusion makes existing chatbots and virtual assistants more intelligent and more valuable to the people you serve.
Smart Answers for Employee and Customer Support After COVID-19 - Lucidworks
Watch our on-demand webinar showcasing Smart Answers on Lucidworks Fusion. This technology makes existing chatbots and virtual assistants more intelligent and more valuable to the people you serve.
In this webinar, we’ll cover:
-How search and deep learning extend conversational frameworks for improved experiences
-How Smart Answers improves customer care, call deflection, and employee self-service
-A live demo of Smart Answers for multi-channel self-service support
Applying AI & Search in Europe - featuring 451 Research - Lucidworks
In the current climate, it’s now more important than ever to digitally enable your workforce and customers.
Hear from Simon Taylor, VP Global Partners & Alliances, Lucidworks and Matt Aslett, Research Vice President, 451 Research to get the inside scoop on how industry leaders in Europe are developing and executing their digital transformation strategies.
In this webinar, we’ll discuss:
-The top challenges and aspirations European business and technology leaders are solving using AI and search technology
-Which search and AI use cases are making the biggest impact in industries such as finance, healthcare, retail and energy in Europe
-What technology buyers should look for when evaluating AI and search solutions
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce Strategy - Lucidworks
In this webinar with 451 Research, you'll understand how retailers are using AI to predict customer intent and learn which key performance metrics are used by more than 120 online retailers in Lucidworks’ 2019 Retail Benchmark Survey.
In this webinar, you’ll learn:
● What trends and opportunities are facing the ecommerce industry in 2020
● Why search is the universal path to understanding customer intent
● How large online retailers apply AI to maximize the effectiveness of their personalization efforts
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...Lucidworks
Nordstrom Rack | Hautelook curates and serves customers a wide selection of on-trend apparel, accessories, and shoes at an everyday savings of up to 75 percent off regular prices. With over a million visitors shopping across different platforms every day, and a realization that customers have become accustomed to robust and personalized search interactions, Nordstrom Rack | Hautelook launched an initiative over a year ago to provide data science-driven digital experiences to their customers.
In this session, we’ll discuss Nordstrom Rack | Hautelook’s journey of operationalizing a hefty strategy, optimizing a fickle infrastructure, and rallying troops around a single vision of building an expansible machine-learning driven product discovery engine.
The audience will learn about:
-The key technical challenges and outcomes that come with onboarding a solution
-The lessons learned of creating and executing operational design
-The use of Lucidworks Fusion to plug custom data science models into search and browse applications to understand user intent and deliver personalized experiences
Apply Knowledge Graphs and Search for Real-World Decision IntelligenceLucidworks
Knowledge graphs and machine learning are on the rise as enterprises hunt for more effective ways to connect the dots between the data and the business world. With newer technologies, the digital workplace can dramatically improve employee engagement, data-driven decisions, and actions that serve tangible business objectives.
In this webinar, you will learn
-- Introduction to knowledge graphs and where they fit in the ML landscape
-- How breakthroughs in search affect your business
-- The key features to consider when choosing a data discovery platform
-- Best practices for adopting AI-powered search, with real-world examples
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Solr and Machine Vision - Scott Cote, Lucidworks & Trevor Grant, IBM
1. Solr and Machine Vision
Scott Cote and Trevor Grant
Lucidworks / IBM
2. ABOUT US
Trevor Grant
PMC: Apache Mahout
Apache Streams
IBM: Open Source Evangelist
“AI Engineer”
@rawkintrevo
www.rawkintrevo.org
Scott Cote
Organizer: DFW Data Science
Mahout Fan
Lucidworks: Senior Software Engineer
(Fusion Core Team)
@scottccote @dfwdatascience
4. DEEP LEARNING: AN OVERVIEW
Deep learning is an exciting new technology with numerous applications: detecting cats in pictures, creating nonsensical manuscripts, “completing” unfinished symphonies, magically returning your company to profitability after decades of poor management through clever application of buzzwords, etc.
11. IMAGE DETECTION

                      Haar Cascade Filters                Deep Learning
Speed of training     Days                                Months
Speed of prediction   Ultrafast                           Not great
Accuracy              Slightly lower                      Higher to MUCH higher (domain-dependent)
Type of recognition   Well-understood problems (faces)    Poorly understood problems (dark matter)

Best use case for Haar cascades:
- You understand the domain
- You can use multiple methods
- You have limited resources: limited time, limited compute power, limited $$$
16. LESS HATER-Y
“Neural nets are universal function approximators.”
- Jake Manix, a talk an hour ago
“When milliseconds count, we can’t afford to approximate.”
- Me, now
17. ANCIENT PARADIGM
The classic “pick two” triangle: Fast (training and prediction time), Right (highest accuracy), Cheap (in dollars and in hardware).
- GPU Deep Learning: fast and right, but not cheap
- CPU Deep Learning: right and cheap, but not fast
- Haar-Cascade Filters: fast and cheap, but not the most accurate
22. EIGENFACES (FACIAL RECOGNITION) OVERVIEW
Similar to Principal Component Analysis: we seek to reduce the dimensionality of images (tens of thousands of individual pixels) to a composition of “eigenfaces.”
A face (as a 250x250 image) is represented as a vector of length 62,500 (250 x 250 = 62,500 pixels).
If we decompose it into a combination of 130 eigenfaces, we can represent a face with a vector of length 130.
Advantages over “deep learning”:
- Quicker to identify a face
- Quicker to retrain
- Can instantaneously add a new face to the dataset
History of eigenfaces: introduced by Sirovich and Kirby (1987), popularized for face recognition by Turk and Pentland (1991).
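As a minimal sketch of the idea (not the deck’s actual code), projecting a flattened face onto a set of eigenfaces reduces 62,500 pixels to 130 coefficients. The eigenfaces and mean face below are random stand-ins for the real, learned ones:

```python
import numpy as np

# Hypothetical stand-ins: 130 eigenfaces over 250x250 (= 62,500-pixel) images.
rng = np.random.default_rng(0)
eigenfaces = rng.standard_normal((130, 62500))  # real ones come from SVD/PCA
mean_face = rng.standard_normal(62500)          # mean of the training faces

def face_to_coefficients(face_pixels, eigenfaces, mean_face):
    """Project a flattened face image onto the eigenfaces.

    Returns a length-130 coefficient vector: the compact representation
    that gets indexed and searched instead of the raw 62,500 pixels.
    """
    centered = face_pixels - mean_face
    return eigenfaces @ centered  # one dot product per eigenface

face = rng.standard_normal(62500)  # a (fake) flattened 250x250 face
coeffs = face_to_coefficients(face, eigenfaces, mean_face)
print(coeffs.shape)  # (130,)
```

Adding a new person is just storing their coefficient vector, which is why no retraining is needed.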
39. RECAP
Cascade filters: facial detection (where is there a face in this picture?)
Eigenfaces: facial recognition (WHO am I looking at?)
Neural nets / deep learning could do both in one pass, but very, very slowly.
41. CREATING THE EIGENFACES: COMPUTING
Apache Spark: an in-memory map-reduce engine (it has a weak ML library, but we won’t use it).
Apache Mahout: provides a Distributed Stochastic Singular Value Decomposition method. (Also provides a mathematically expressive Scala DSL, and GPU/CPU acceleration.)
Creating the eigenfaces: the Spark job took 45 minutes on a desktop with 32GB RAM and 8 CPUs @ 3.9GHz, but I was also watching Rick and Morty.
THIS JOB CAN BE GPU ACCELERATED BY CHANGING ONE DEPENDENCY.
42. CREATING THE EIGENFACES: DATASET
University of Massachusetts Labeled Faces in the Wild dataset: 10k images of labeled faces from the internet. Each image is 250x250 (62,500 pixels).
10k Faces Dataset Matrix (10,000 x 62,500):
- Each row corresponds to one image of a face
- Each column corresponds to a given pixel position
43. APACHE MAHOUT ON APACHE SPARK CALCULATES EIGENFACES
10k Faces Dataset Matrix (10,000 x 62,500) = Linear Combos (10,000 x 130) x Eigenfaces (130 x 62,500)
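The factorization on this slide can be sketched with NumPy’s in-memory SVD, a small stand-in for Mahout’s distributed SSVD, on a toy-sized matrix (the real job runs at 10,000 x 62,500 scale):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the real 10,000 x 62,500 faces dataset matrix.
faces = rng.standard_normal((200, 400))  # rows: face images, cols: pixels

# Center the data, as in PCA.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Thin SVD: centered ≈ U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 13                            # keep the top-k eigenfaces (130 in the deck)
eigenfaces = Vt[:k]               # (k, n_pixels): the "Eigenfaces" factor
linear_combos = U[:, :k] * s[:k]  # (n_faces, k): the "Linear Combos" factor

# Check: Linear Combos x Eigenfaces approximates the centered dataset matrix.
approx = linear_combos @ eigenfaces
print(approx.shape)  # (200, 400)
```

Each row of `linear_combos` is one face’s compact coefficient vector; each row of `eigenfaces` is one eigenface.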
53. WHO DOES THE WORK?
Local:
- Advantage: the edge device can use context clues to make the final decision
- Disadvantage: requires more hardware at the edge to “think”
On Solr:
- Advantages: leverage the strengths of Solr; less hardware required at the edge
- Disadvantage: “contextual clues” must be encoded in the query
55. DRONES ARE GETTING CHEAP
Drone 2-pack: $99.99, controlled via smartphone
FPV camera: $39.99 each, video over WiFi via RTSP
Video-enabled drones for ~$90 each
57. CHALLENGES AND OPPORTUNITIES
Opportunity: video gives us a lot more “context clues” than still frames.
- People don’t sporadically disappear and appear.
- Someone seen recently is more likely to be present than someone seen long ago.
64. 2 PROBLEMS
1. The face is inconsistently detected (eigenfaces is sensitive to this).
2. Shadows, patterns on clothes, etc. cause “ghost faces” to be detected sporadically.
66. SOLUTION: CLUSTERING / FILTERING / WINDOWING
Proposal 1: cluster faces by their location in the frame. If a cluster contains fewer than N faces, remove all faces in that cluster (i.e., ghost clusters).
Problem 2: people move around the frame over time.
Proposal 2: break frames up into a sliding window of M seconds.
Problem 3: clustering / machine learning can be somewhat computationally expensive.
Proposal 3: canopy clustering (an old, but still effective, one-pass clustering method).
67. CANOPY CLUSTERING
Create an N-second window, then cluster the faces in the window.
Quick, dirty clustering, but effective:
- The first point becomes a “center.”
- All points within distance t2 of a center are “in that cluster.”
- If a point is not within t2 of any cluster, it becomes a new cluster center.
68. OPENCV DETECTS FACES IN VIDEO FRAME
(t2 = max square width)
69. CANOPY CLUSTERING TO REMOVE “GHOST” FACES
(t2 = max square width)
- First rect: new cluster
- Second rect: within one width of the first rect (same cluster)
- Third rect: within one width of the first rect (same cluster)
- Fourth rect: NOT within one width of the first rect (new cluster)
- Fifth rect: within one width of the first rect (same cluster)
Finally, any cluster with fewer than two entities in the window gets filtered out.
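A minimal, hypothetical sketch of the one-pass canopy clustering plus ghost filtering described above, using rectangle centers and a square t2 neighborhood (function names and data are illustrative, not the deck’s actual code):

```python
def canopy_cluster(rects, t2):
    """One-pass canopy clustering of detected-face rectangles.

    rects: list of (x, y) centers of detected-face squares.
    t2: max distance to join a cluster (e.g., the max square width).
    The first unassigned point becomes a cluster center; each later point
    joins the first center within t2 of it, otherwise starts a new cluster.
    """
    clusters = []  # list of (center, [members])
    for pt in rects:
        for center, members in clusters:
            if abs(pt[0] - center[0]) <= t2 and abs(pt[1] - center[1]) <= t2:
                members.append(pt)
                break
        else:
            clusters.append((pt, [pt]))
    return clusters

def drop_ghosts(clusters, min_size=2):
    """Filter out 'ghost' clusters with fewer than min_size detections."""
    return [c for c in clusters if len(c[1]) >= min_size]

# Three detections near (10, 10) plus one stray "ghost" at (300, 40).
detections = [(10, 10), (12, 11), (300, 40), (11, 9)]
clusters = drop_ghosts(canopy_cluster(detections, t2=5))
print(len(clusters))  # 1: the (300, 40) singleton is filtered out
```

A single pass over the points is what keeps this cheap enough to run per window.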
73. ADAPTIVE HYPER-PARAMETERS
A very simple machine learning algorithm adapts itself in real time to the input it is receiving…
“A.I.” is a strong buzzword, but…
80. WINDOWING
A video is just a stream of frames.
Apache Flink gives us a nice API for splitting/joining the stream, as well as creating windows and applying functions to the windows. (Other bonuses too.)
82. ENTER THE STREAM: MAHOUT CANOPY CLUSTER
An n/m-second sliding window: every m seconds, this window emits a set of clusters based on the last n seconds of data. For example, with 5/1, every 1 second we get a new set of “face zones” based on the faces detected in the previous 5 seconds.
83. MAHOUT CANOPY CLUSTER
(Or 0.5 / 0.1: every 10th of a second, based on the last half second.)
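Independent of Flink, the n/m sliding-window behavior can be sketched in plain Python (toy timestamps and payloads, not the actual pipeline):

```python
def sliding_windows(events, n, m):
    """Emit, every m seconds, the events from the last n seconds.

    events: list of (timestamp, payload), assumed sorted by timestamp.
    Mirrors an n-second window sliding by m (e.g., n=5, m=1): each
    emission is the batch of detections to canopy-cluster into face zones.
    """
    if not events:
        return []
    end = events[0][0] + m
    last = events[-1][0]
    windows = []
    while end <= last + m:
        windows.append([p for (t, p) in events if end - n <= t < end])
        end += m
    return windows

# Four face detections at t = 0, 1, 3, 6 seconds; 5-second window, 1-second slide.
detections = [(0, "a"), (1, "b"), (3, "c"), (6, "d")]
wins = sliding_windows(detections, n=5, m=1)
print(len(wins))  # 7 emissions; the last covers [2, 7) and holds c and d
```

In the real system, Flink maintains this incrementally rather than rescanning the list on each emission.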
87. STORE OUR MEMORIES IN SOLR
METHOD 1: AVERAGING
1. Take all face rects in the cluster.
2. Average them all together.
3. Search Solr for this averaged image.
4. If the “average face” matches a face in the cluster (within some distance tolerance), we assign that name to every face in the cluster and write all faces to Solr under that person’s name.
5. Otherwise, we create a new name and write all faces to Solr under the new name.
6. This really doesn’t work very well at all.
7. ADVANTAGE: minimizes network traffic / Solr taxation.
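A sketch of the averaging step (steps 1–2), with short lists standing in for eigenface coefficient vectors; the Solr search and matching steps are omitted:

```python
def average_face(coeff_vectors):
    """Method 1: average all face vectors in a cluster into one query vector.

    coeff_vectors: list of equal-length coefficient vectors (one per face
    rect in the cluster). The single averaged vector is what gets sent to
    Solr, which is why this method minimizes network traffic.
    """
    n = len(coeff_vectors)
    return [sum(vals) / n for vals in zip(*coeff_vectors)]

# Two (tiny, fake) coefficient vectors from one cluster.
cluster = [[1.0, 2.0], [3.0, 4.0]]
print(average_face(cluster))  # [2.0, 3.0]
```

One query per cluster is cheap, but averaging can blur distinct faces together, which matches the slide’s verdict that it doesn’t work well.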
88. STORE OUR MEMORIES IN SOLR
METHOD 2: “VOTING”
1. Search Solr for EACH face.
2. Get the list of names in the results.
3. Assign points based on rank or distance.
4. Aggregate points across all rects; the highest score “wins.” If the winner clears some minimum threshold, assign that name.
5. Otherwise, we create a new name and write all faces to Solr under the new name.
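A hypothetical sketch of the voting scheme: the 1/(rank+1) weighting and the 2.0 threshold are illustrative choices, not the deck’s actual parameters:

```python
from collections import defaultdict

def vote(results_per_face, min_points=2.0):
    """Method 2: each face's Solr results cast rank-weighted votes.

    results_per_face: for each face rect in the cluster, an ordered list
    of candidate names from Solr (best match first). Rank r earns
    1/(r+1) points. Returns the winning name, or None if no candidate
    clears min_points (the caller then registers a brand-new name).
    """
    points = defaultdict(float)
    for results in results_per_face:
        for rank, name in enumerate(results):
            points[name] += 1.0 / (rank + 1)
    if not points:
        return None
    winner = max(points, key=points.get)
    return winner if points[winner] >= min_points else None

# Three face rects from one cluster, each with its top-2 Solr results.
faces = [["alice", "bob"], ["alice", "carol"], ["bob", "alice"]]
print(vote(faces))  # alice: 1 + 1 + 0.5 = 2.5 beats bob's 1.5 -> "alice"
```

One Solr query per face costs more network traffic than Method 1, but a single bad match can no longer drag the whole cluster to the wrong name.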
90. WHY APACHE SOLR
- Capable of storing large amounts of data; scales to petabytes
- Text-oriented
- Numeric-compute friendly
- Many ways to store different types of data
91. WHY APACHE MAHOUT
- Engine agnostic (Spark / Flink / Standalone / RYO)
- Native acceleration on CPU/GPU/CUDA
- Possible to accelerate BLAS operations on ANY architecture (including edge devices)
- Mathematically expressive Scala DSL
95. SHAPE OF THINGS TO COME
The “science fiction” of 10 years ago is today the domain of hobbyists.
The demo presented here is “science fair” grade AI.
Vladimir Putin recently said that it is undesirable for anyone to monopolize AI. (Yay Apache!)