Image recognition was probably one of the hottest topics of 2014, with announcements such as the launch of the Amazon Firefly app and millions in VC funding and M&A activity in this space. Image recognition has the potential to become ubiquitous in our day-to-day interactions with real-world objects that are connected to the digital world.
This talk is divided into four parts. First, it covers basic aspects of the technology: the different approaches, the types of objects that are recognized, and the limitations of each technique, illustrated through demonstrations. Second, the audience is guided through the steps required to embed an image recognition solution into an app or service. Third, a number of vendor solutions are described, giving hands-on pointers to those who want to start integrating them. Finally, the talk discusses the future of image recognition in different fields.
You can watch the video of the presentation here: https://www.youtube.com/watch?v=ilbTvfchtQY
2. The visual recognition market is estimated to grow from $9.65 billion in 2014 to $25.65 billion by 2019 (Image Recognition Market, MarketsandMarkets, May 2014).
12. Choose the IR mode that fits best: Cloud Service vs. On-Device SDK

13. Choose the IR mode that fits best

                       Cloud Service        On-Device SDK
IR requires Internet   Yes                  No
IR speed               Depends on network   Controlled
Content updates        Immediate            Require local sync
Analytics              Latest available     Rely on app connection
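As a rough illustration of the trade-offs in the table above, the choice could be sketched as a tiny decision helper. This is only a sketch: the criteria names are invented for the example and are not part of any vendor SDK.

```python
# Toy decision helper mirroring the cloud-vs-on-device comparison.
# The three boolean criteria are illustrative assumptions, not vendor API.

def choose_ir_mode(needs_offline: bool,
                   frequent_content_updates: bool,
                   needs_live_analytics: bool) -> str:
    """Suggest an IR deployment mode from three coarse requirements."""
    if needs_offline:
        # Only an on-device SDK can recognize without connectivity.
        return "on-device SDK"
    if frequent_content_updates or needs_live_analytics:
        # Cloud IR gives immediate content updates and current analytics.
        return "cloud service"
    # Either works; default to cloud for simpler content management.
    return "cloud service"

print(choose_ir_mode(needs_offline=True,
                     frequent_content_updates=False,
                     needs_live_analytics=False))  # on-device SDK
```

In practice the decision also depends on business model and traffic patterns, as the notes below discuss, so treat this as a first filter only.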
14. Outline
What works with image recognition
How to put image recognition into your app
Vendor comparison
Trends
23. Takeaways
1. Image recognition is the door to a broad range of applications and services.
2. Improve performance with better image databases.
3. Choose on-device or cloud IR depending on your use case.
4. Catchoom is already behind 420M interactions and looking to meet upcoming trends.
27. Challenges with benchmarks
Label a database with both reference and test images.
Identify infrastructure differences.
Understand that performance is not necessarily optimized for your use case.
28. How to benchmark

Small dataset            Full test
1. Contact the vendor    1. Contact the vendor
                         2. Label your database
                         3. Use APIs
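The full-test steps above boil down to a small loop: label a test set, run each query image through the recognizer, and report the fraction of correct top matches. The sketch below uses a stub recognizer in place of a real vendor API call; `fake_recognize` and its dict-based "images" are purely illustrative.

```python
# Hedged sketch of a benchmarking loop for an image recognition service.

def benchmark(recognize, labelled_queries):
    """labelled_queries: list of (query_image, expected_label) pairs.
    Returns the fraction of queries whose top match is correct."""
    correct = sum(1 for query, expected in labelled_queries
                  if recognize(query) == expected)
    return correct / len(labelled_queries)

# Stub recognizer: pretend images are dicts carrying a ground-truth hint.
# A real test would call the vendor's API here instead.
def fake_recognize(image):
    return image.get("looks_like")

queries = [({"looks_like": "logo-a"}, "logo-a"),
           ({"looks_like": "logo-b"}, "logo-b"),
           ({"looks_like": "logo-a"}, "logo-b")]  # one miss

print(benchmark(fake_recognize, queries))  # ~0.67
```

Note that this measures accuracy on your own labelled data, which is exactly the point of the slide: a vendor's headline numbers may not be optimized for your use case.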
Editor's Notes
The visual recognition market is growing extremely quickly.
The two main reasons for this growth are kind of obvious:
There is a big proliferation of images on the Internet and;
There has also been a big expansion in the use of mobile for searching and purchasing
In December 1975, Steven Sasson at Kodak invented the digital camera. Ever since we have been able to process images and videos digitally, we have been developing visual recognition, trying to make machines understand the environment.
Visual Recognition at large is a field of activity that has many branches. It is important to know that each one uses different computer vision approaches and there is not yet one ring to rule them all.
The most prominent branches are Image Recognition, Face Recognition, Object Classification, and Optical Character Recognition, and each one has a different level of maturity.
Image Recognition enables a fast search for images in a database to match an image taken by a smartphone or tablet. The image match pulls up related content, and users can interact, shop or rate products.
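Real IR engines match local image features (for instance SIFT or ORB descriptors) against an indexed database. As a hedged illustration of just the search step, here is a toy nearest-neighbour match over made-up global descriptor vectors; the labels, vectors, and threshold are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query, database, threshold=0.8):
    """Return the database label whose descriptor best matches the query,
    or None if nothing clears the similarity threshold."""
    best_label, best_score = None, threshold
    for label, descriptor in database.items():
        score = cosine_similarity(query, descriptor)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

db = {"book-cover": [0.9, 0.1, 0.3], "poster": [0.1, 0.8, 0.5]}
print(best_match([0.88, 0.12, 0.28], db))  # book-cover
```

A production system would search millions of local descriptors with an approximate nearest-neighbour index rather than a linear scan, but the idea is the same: the closest database entry above a threshold is the match, and its related content is pulled up.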
Face Recognition is basically the same but instead of comparing with images or any object, it focuses on faces. Most face recognition solutions work by training a system with very large databases of images of faces previously labelled. The main use case is security or photo album organization.
Object classification is a bit different in scope. Instead of searching for a very specific match in a database, it tries to understand the elements present in a picture. This is the closest to what a kid does: this is a chair, this is a dog, or more complex descriptions like "this is a steam train under the Swiss Matterhorn". The use case is simple: Google.
Optical Character Recognition identifies letters and numbers in an image. It is used, for instance, in digitizing ancient books.
In this tutorial, I’ll talk about Image Recognition and give you an overview of the technology, guidelines to build apps and services, and trends that we see in the market.
Why am I talking about IR in an AR conf?
Image Recognition is the door to most AR interactions in the world.
Via Image Recognition, a machine can tell what the user is seeing through her camera. If we know that, we can provide limitless options connected to the digital world.
For instance, we can augment the environment with an immersive experience that helps the user make a better decision.
Computer Vision tries to understand what is there and what is happening in the world via images and videos.
Let’s take a look at the world with the eyes of a machine and try to see what will make us suffer.
In the first row, you can find samples of objects that differ in the amount of visual pattern available for recognition.
In the second row, you see two sorts of objects that differ greatly in how many different instances can exist of the very same object.
It is important to set the expectations right with respect to the kinds of objects that I showed before and the technology that is available.
If an object has a lot of texture, it has a higher probability of being more distinguishable within a large set of images, for instance, book covers.
It does not work so well when, say, two hundred objects have no pattern at all and are all the same shade of grey.
On the other hand, if the goal is to say “this is a blue shirt”, object classification works smoothly.
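A toy illustration of why texture matters: one crude proxy for the "amount of visual pattern" is the local intensity variation of a grayscale image. This is not how a production IR engine scores texture, just a sketch of the intuition under that assumption.

```python
def texture_score(image):
    """Mean absolute difference between horizontally adjacent pixels of a
    grayscale image (a list of rows). Higher means more local pattern
    for a feature detector to latch onto."""
    diffs = [abs(row[i + 1] - row[i])
             for row in image for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

flat = [[128, 128, 128], [128, 128, 128]]  # uniform grey: no pattern
busy = [[0, 255, 0], [255, 0, 255]]        # strong pattern, like print

print(texture_score(flat) < texture_score(busy))  # True
```

An untextured grey object scores near zero, which is exactly the case where image recognition struggles and object classification ("this is a blue shirt") is the better tool.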
If an object is deformable, we could create a database with tons of samples, but that would become unbearable if you wanted to do it for a hundred thousand objects.
On the other hand, you can still train a classification system with many samples of that object in different deformations.
What happens if an object is transparent?
Let me tell you a story: when Logitech launched a mouse that could work over glass surfaces a few years ago,... well, rumor has it that on the day of the demo, they had to scratch the glass to make it work. The reason was that the sensor needed to "see" the dirty dots and scratches to translate that into motion.
As another example, time-of-flight cameras like Kinect see through glass, or in other words, they do not see the glass in front of them.
These examples showcase the challenge that glass puts into any sensing.
-------
I’ve been restrictive here; for instance, Catchoom’s IR engine works with deformable objects, as long as they are textured. Textureless objects are possible, but it depends on the size of the database and how similar two objects can be.
In this second part, I’ll cover fundamental aspects of project development and discuss the pieces that are necessary to deploy an app that includes Image Recognition.
There are three elements that you need for an Image Recognition app to be built.
The base of the pyramid is the image database. This is often overlooked at the beginning: sometimes we find customers who only think about collecting the images that trigger experiences after they have already spent resources on building the app. We suggest spending as much time as possible on the reference images to get the best experience for your users.
The second piece is the technology component. There are many options here and I’ll give you some pointers in a minute.
And last but not least, Content is always king. Make sure your app is valuable to your users. Image recognition is impressive, but even more impressive is when users want to repeat and come back to your app.
Imagine you prepared your database with any of the images below. Then you try to recognize that logo with a query image like the one on top.
For different reference images, you’ll get very different results.
The message here is to devote time to the image database. Typically, you’ll learn what works and what doesn’t, but it is good to chat with us to know what will work and what may be an issue.
One of our customers augments tattoos. You definitely want to get it right before tattooing your skin.
On-device IR makes sense especially in cases where it is preferable to offload the server infrastructure and provide quick responses to users. This is the case for second-screen environments where the user gets content or offers in sync with a TV show.
Cloud IR on the other hand is very well suited for magazines or any content that is frequently updated and has a rather uniform traffic.
Let’s compare both at the feature level.
While on-device looks technically more appealing, it has some limitations when it comes to enabling common business needs like content updates or analytics.
In general, you will achieve the same results with both, so it depends on the use case or even your business model.
I’ll give you an overview of the vendors, both inside and outside the AR space, that can help you with that.
In this list we have AR-vendors.
AR vendors offer IR that is used to trigger AR experiences at scale. In other words, they allow you to search through larger databases than would fit on a smartphone by relying on the cloud.
The disadvantage of most AR vendors who offer cloud IR is that their solutions are designed only for AR and are not that flexible when used for non-AR use cases.
Also, for augmented reality it is now commonly known that patterns need well-spread texture. Image recognition is not as demanding, but benefits from curation.
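One rough way to see what “well-spread texture” means is to measure local contrast tile by tile and check how many tiles carry detail. This is an illustrative heuristic of my own, not any vendor’s actual algorithm:

```python
# Heuristic check for well-spread texture in a reference image.
# An image is represented as a 2D list of grayscale values (0-255).
# This is an illustrative sketch, not any vendor's actual algorithm.

def tile_contrast(tile):
    """Standard deviation of pixel values: a crude measure of local texture."""
    flat = [p for row in tile for p in row]
    mean = sum(flat) / len(flat)
    return (sum((p - mean) ** 2 for p in flat) / len(flat)) ** 0.5

def texture_spread(image, tile_size=4, min_contrast=10.0):
    """Fraction of tiles whose contrast exceeds a threshold.
    Values near 1.0 suggest texture is spread across the whole image;
    values near 0.0 suggest large flat regions that give trackers trouble."""
    h, w = len(image), len(image[0])
    textured = total = 0
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tile = [row[x:x + tile_size] for row in image[y:y + tile_size]]
            total += 1
            if tile_contrast(tile) >= min_contrast:
                textured += 1
    return textured / total if total else 0.0
```

A flat logo on a white background scores near zero here, while a busy pattern scores near one, which matches the intuition that AR patterns are pickier than plain image recognition.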
In this list, we have vendors that offer the core service independently of how you want to use it, whether that is to render an AR experience, compare products, or anything else you’d like to do.
The table shows one additional column, “On Premises”. Instead of a SaaS, some vendors, including Catchoom, license the core server technology to let others build entire platforms. For example, Times of India, the largest publisher in India, runs Catchoom inside its own servers, as do other AR browsers.
As you can see from this and the previous slide, Catchoom is the only vendor that offers solutions in both spaces, AR and IR, and also has the full set of options.
But the real reason why I like Catchoom is that we have a unique combination of ingredients in our magic sauce.
First, our image recognition tests are performed using pictures snapped by users in real world environments – so our technology knows how to handle difficult angles, blurry images, low light conditions and reflections.
Second, our passion for seamless interactions. Catchoom was built to give users an easy, seamless image recognition experience – with no knowledge of the technology required. They just keep snapping photos like they always do.
Third, the results speak for themselves. An independent benchmark study using images taken by real users rated Catchoom 20% higher on image recognition than our competitors. We also ensure a response within half a second regardless of your location thanks to our servers in the US and EU.
And last, you can build entire platforms. Whether you use our service or an on-premises installation, our image recognition software is designed to deliver outstanding performance regardless of the traffic or size of your database. From hundreds of requests per second, to millions of images, we’ve engineered our software to be prepared.
Catchoom is, in fact, already one of the most used IR engines.
Even though you may not have heard of the brand Catchoom, our solution is already behind 420 million image recognitions globally.
And now I’m getting to the last part of the talk to discuss some of the trends that we see in this space.
There are a number of businesses with a long list of products that have a head and a long tail of popularity. This is typically the case for e-commerce sites.
What we see is increasing demand to search on-device through a subset of images and, if there is no match, continue with a cloud request.
We have patented technology to support this kind of environment without cutting any corners on performance.
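The hybrid flow can be sketched in a few lines. Everything here is a hypothetical stand-in (the `fingerprint` checksum and `CloudClient` are placeholders for a real feature extractor and API client), just to show the shape of the head-on-device, tail-in-cloud lookup:

```python
import zlib

# Sketch of a hybrid lookup: try a small on-device index of "head" products
# first, and fall back to a cloud request only when nothing matches locally.
# `fingerprint` and `CloudClient` are hypothetical placeholders, not a real SDK.

def fingerprint(image_bytes):
    """Stand-in for a real feature extractor: here, a simple checksum."""
    return zlib.crc32(image_bytes)

class CloudClient:
    """Hypothetical cloud API wrapper; a real one would issue an HTTP request."""
    def __init__(self, database):
        self.database = database

    def search(self, image_bytes):
        return self.database.get(fingerprint(image_bytes))

class HybridRecognizer:
    def __init__(self, on_device_index, cloud_client):
        # on_device_index: dict mapping fingerprints to item IDs (the "head")
        self.on_device_index = on_device_index
        self.cloud_client = cloud_client

    def recognize(self, image_bytes):
        fp = fingerprint(image_bytes)
        item = self.on_device_index.get(fp)
        if item is not None:
            return item, "on-device"   # fast path: no network round trip
        return self.cloud_client.search(image_bytes), "cloud"  # long tail
```

The design point is simply that the popular items answer instantly from the device, while the long tail still resolves through the cloud.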
Imagine you’re a technician that has to repair a very specific part in a Star Destroyer.
How can you search through the whole catalogue of parts in a fraction of a second just by scanning that part?
This is another research line that Catchoom is working on right now.
Fashion is one of the most exciting sectors for image recognition.
Being able to recognize a pair of shoes, a handbag or a complete look is in the mindset of thousands of fashionistas around the world.
Catchoom is investing in recent advances in the field of computer vision using a technique called deep learning. Deep learning allows neural networks to learn the visual properties of certain objects and classify them with very high precision.
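As a toy illustration of the final classification step (not Catchoom’s model), a deep network produces one score per class, a softmax turns those scores into probabilities, and the prediction is the most probable class. What deep learning actually learns is the feature extraction that produces the scores; the labels below are made up:

```python
import math

# Toy illustration of the last stage of a deep classifier: the network
# outputs one raw score (logit) per class, a softmax converts the scores
# into probabilities, and we pick the most probable class.

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)                       # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores, labels):
    """Return (most probable label, its probability)."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]
```

For example, `classify([2.0, 0.5, 0.1], ["handbag", "shoe", "dress"])` picks "handbag" with roughly 0.73 probability.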
-----
Those three are the main trends we see in the IR space, and Catchoom Labs is heavily investing in building the technology that will make them possible in the near future.
1. Image recognition is the door to a broad range of applications and services in a fast growing market.
2. You can significantly improve the performance with better image databases.
3. Choose on-device or cloud depending on your technical and business needs.
4. Catchoom is already behind 420M interactions and is working on the current trends to meet them in the near future.
Please visit our booth in the next couple of days for live demos.
Thank you very much for your time!
There are a number of challenges when trying to compare the performance of image recognition vendors.
1. Do you have around 100,000 images on both sides of the equation, references and test images? That’s probably around the scale you need if you plan to build up to 1M images.
2. Is the infrastructure showing the real experience that your users will have?
Let me give you an example: Catchoom has servers in the US and in the EU that allow apps to connect to the closest server wherever you are in the world. Is your app global, or is your customer simply on another continent? Take that into account.
3. Performance is not necessarily optimized for your specific use case. So the question is: is that vendor actually performing well, or poorly, for your case?
Most vendors provide the same experience to all customers because they cannot fine-tune parameters; instead, they offer performance that is, on average, good for a large variety of cases.
If you use 100,000 images, you probably have multiple use cases represented, but if you just have a few, you may not see the full potential of that solution in your benchmark.
You’re probably in one of two situations:
Situation #1: you have a customer with very few images and you just want it to work like a charm.
Situation #2: you’re building a self-serve service, where your customers or partners will upload images without any supervision.
In both cases, my suggestion is to contact the vendor to know exactly what is possible and what is not, and whether some tweaks here and there can significantly improve the results.
For instance, at Catchoom we look at particular cases in your results to try to identify improvements, or simply different profiles of the internal parameters that can be tuned for your case.
But the reality is that unless you have an On Premises license, you won’t be able to fine-tune any parameter, as all cloud service providers apply the same configuration across all customers.