This is my presentation for Global Azure Verona 2021, where I talked about Azure Functions and how this technology can be used to process messages that come from WhatsApp in a chatbot environment.
My participation in the Cloud Skills Challenge, with the knowledge acquired while studying the AI modules of the Discover AI challenge, and a solution developed to detect face masks in videos.
Building real-time image classifiers for mobile apps with Azure Custom Vision - Luis Beltran
Azure Custom Vision allows you to create powerful image classifiers in minutes without having to be an AI expert. You feed the service with your own images (so the service adapts to your needs), tag them, and train a model that can be published to an endpoint URL for further requests. You can also use the Custom Vision SDK to automate the process.
Furthermore, this model can also be exported for offline, real-time classification experiences. For instance, you can embed the classifier into a mobile application, or a website.
In this session, the Custom Vision service will be described. An image classifier will be created by using the portal. The output model will be exported to both TensorFlow and CoreML to integrate it into Android and iOS mobile applications, respectively.
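Once published, the classifier's endpoint can be called from any language over plain HTTPS. The sketch below builds such a prediction request in Python; the region endpoint, project ID, iteration name, and key are placeholders, and the URL shape follows the Custom Vision v3.0 prediction API, so verify it against your own resource.

```python
# Hypothetical sketch of calling a published Custom Vision classifier over REST.
# All identifiers below are placeholders, not real values.
import json
import urllib.request

def build_classify_request(endpoint, project_id, iteration, prediction_key, image_url):
    """Build an (unsent) POST request that classifies an image by URL."""
    url = (f"{endpoint}/customvision/v3.0/Prediction/{project_id}"
           f"/classify/iterations/{iteration}/url")
    body = json.dumps({"Url": image_url}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Prediction-Key": prediction_key,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_classify_request(
    "https://westeurope.api.cognitive.microsoft.com",  # placeholder region endpoint
    "00000000-0000-0000-0000-000000000000",            # placeholder project ID
    "Iteration1",
    "<prediction-key>",
    "https://example.com/cat.jpg",
)
# Sending it would be urllib.request.urlopen(req), skipped here (needs a real key).
```

The response would be a JSON body with per-tag prediction probabilities, which a mobile app can parse the same way.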
Infuse your apps, websites and bots with intelligent algorithms to see, hear, speak, understand and interpret your user needs through natural methods of communication. Azure Cognitive Services are APIs, SDKs, and services available to help developers build intelligent applications without having direct AI or data science skills or knowledge.
Developing .NET apps for Microsoft Teams - Luis Beltran
Microsoft Teams is the hub for team collaboration that integrates people and tools to improve productivity within your organization. From chat-based collaboration to web conferences, it takes effectiveness within your business to the next level.
Customizing your Microsoft Teams workspace is possible thanks to the developer platform, which allows you to extend the capabilities of the product and roll your own custom applications into your organization. Furthermore, these solutions can be distributed publicly to other enterprises, either for free or monetized, via AppSource, the Microsoft ecosystem for app publication.
Let's learn the process around implementing your own .NET apps and bringing them to Microsoft Teams to engage your organization and improve collaboration. A bot that understands users' conversations and brings information from a database will be showcased as part of the demo.
Technologies involved:
* Microsoft Teams
* Bot Framework
* Azure SQL
* LUIS + Cognitive Services
* Visual Studio 2019
Bringing AI to the edge: On-premises Azure Cognitive Services - Luis Beltran
Azure Cognitive Services allow developers to build powerful AI-based solutions, enabling different capabilities in our software: vision, speech, search, text analytics, language understanding, and much more. The model is already built by Microsoft; you just need to make an API call to the Azure cloud and the service returns a result. For instance, you send a message and the Text Analytics API returns its sentiment score.
However, there might be cases in which our customers need a local, non-cloud AI solution (either because of limited Internet access or data compliance). This is now possible thanks to the latest update of Azure Cognitive Services, which offers containerization support. Using containers, we can still deliver ML-driven solutions while keeping the data in-house.
In this talk, we'll explore what it takes to configure and use containers in Azure Cognitive Services. Demos will be showcased as well for local Face and Text Cognitive Services.
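As a rough illustration of the in-house scenario, the sketch below builds a sentiment request against a Cognitive Services container assumed to be listening on localhost:5000; the host, port, and API version are assumptions to check against your container's documentation.

```python
# Minimal sketch, assuming a Text Analytics sentiment container is listening
# on http://localhost:5000 (host, port, and API version are assumptions).
import json
import urllib.request

def build_sentiment_request(text, host="http://localhost:5000"):
    """Build an (unsent) sentiment request against a local container."""
    body = json.dumps({
        "documents": [{"id": "1", "language": "en", "text": text}]
    }).encode("utf-8")
    return urllib.request.Request(
        f"{host}/text/analytics/v3.0/sentiment",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sentiment_request("I love this product!")
# urllib.request.urlopen(req) would return a JSON document with a sentiment
# label and confidence scores; the text never leaves the local machine.
```

The point of the container scenario is exactly this last comment: the request goes to your own host, not to the Azure cloud.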
Slides from DevNexus in Atlanta GA showing Cognitive Services. Minus demos unfortunately! Best place to check all this out is https://www.microsoft.com/cognitive-services/
virtual-2021-data.sql_.saturday.la - Building database interactions with users ... - Luis Beltran
Slides for my presentation at Data SQL Saturday 2021 about building user interactions with chatbots consuming information from a database and sending messages to Microsoft Teams
Clever data: building a chatbot from your database - Luis Beltran
The development of Artificial Intelligence is increasingly present in our lives and as time goes by, its presence will grow thanks to the momentum that enterprises are currently providing.
One of the most engaging AI applications is chatbots, which interact with users in real time to assist them with a task, such as booking a hotel, answering a question, or looking for specific information on the Internet, while simulating that a real human is behind the scenes.
Data is knowledge, and the data stored in your Azure SQL database can be used as input for a bot that assists a company's customers by processing information for them and returning the expected results.
This session will focus on explaining the actors involved when building a bot capable of obtaining data from your storage, including Azure SQL Database, Microsoft Bot Framework, and LUIS (Language Understanding Intelligent Service). A mobile app built with Xamarin will be used as the demo.
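To give a flavor of the data step in such a bot, here is a minimal local sketch of mapping a recognized intent to a parameterized query. It uses sqlite3 as a stand-in for Azure SQL, and the intent, table, and column names are invented for the example.

```python
# Illustrative sketch: a recognized LUIS-style intent is mapped to a
# parameterized SQL query. sqlite3 stands in for Azure SQL here; the
# intent and schema are invented for the example.
import sqlite3

INTENT_QUERIES = {
    "GetProductPrice": "SELECT price FROM products WHERE name = ?",
}

def answer(conn, intent, entity):
    """Run the query mapped to the intent and return the first result."""
    sql = INTENT_QUERIES.get(intent)
    if sql is None:
        return None  # intent not handled by the bot
    row = conn.execute(sql, (entity,)).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES ('laptop', 999.0)")
print(answer(conn, "GetProductPrice", "laptop"))  # 999.0
```

In the real stack, LUIS supplies the intent and entity from the user's utterance, and the Bot Framework dialog formats the query result back into a chat reply.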
In this session for MSP Tech Days Latin America 2019, I explain the main advantages and new functionality that Visual Studio 2019 brings for developers, and why VS2019 is the best tool for software development.
by Amit Narayanan, Solutions Architect, AWS
Amazon Lex is a service for building conversational interfaces into any application using voice and text, and Amazon Polly is a service that turns text into lifelike speech. This session combines both AWS services: the presenter will demonstrate how to build DevOps and Help Desk chatbots that feature spoken-voice interfaces, and explore the potential of bringing characters to life through interactive chatbots that improve customer engagement. Attendees will gain the foundational skills needed to enrich their applications with natural, conversational interfaces. Level 300
A Journey with Microsoft Cognitive Service I - Marvin Heng
This slide deck is about Microsoft Cognitive Services. By going through it, you will understand what Microsoft Cognitive Services are and how they work.
Marvin Heng
Medium: @hmheng
Twitter: @hmheng
Github: hmheng
Thanks to Cognitive Services, we can now add intelligence to our apps in a simple way. Combining these services opens up a whole new world of possibilities, so in this talk we will give a brief introduction to the different services and then move straight on to seeing them in action in real applications and situations. It is an introductory talk in which we will run demos and see how we can use these services in our code.
Azure Cognitive Services for Developers - Marvin Heng
Azure Cognitive Services is an AI solution close to many developers' hearts, as it is easy to implement in their applications. This deck introduces some newly released Microsoft Cognitive Services.
Azure Thursday: HoloLens and Cognitive Services, a powerful combination - Alexander Meijers
HoloLens as a Mixed Reality device allows you to build applications that support your business processes in different ways through visualization and information provisioning. It gets even more interesting when you extend such applications with external services like Azure Cognitive Services. This development session explains and shows you how to combine both technologies into a powerful combination.
Solvion Trendwerkstatt - Microsoft Azure + Bots - HolzerKerstin
In the Solvion Trendwerkstatt, participants learn about all the trends around Microsoft Azure, Artificial Intelligence, and bots. Microsoft MVP Stephan Bisser leads the workshop.
Slide deck for my talk Getting started with Azure Cognitive Services. The talk was given at a meetup in Eindhoven and at a .NET Zuid evening among others.
A video search tool indexes videos to make them searchable by words, images, logos, and other metadata in video libraries, which means you can find information faster and more efficiently. Finding information online is relatively easy, most of it being just a search engine away.
More Besides Sora: Tools to Create Dynamic Videos from Textual Content - RachelWang856621
Undoubtedly, video content is more attractive than text, and the demand for engaging visual content continues to soar. As businesses and content creators strive to capture the attention of their audiences, tools that can seamlessly transform text into dynamic videos have become a hot focus. Enter OpenAI Sora: this text-to-video generative AI model looks incredibly impressive so far, showing huge potential across many industries.
Digital transformation with AI and process automation.
Prior consulting use cases in the domain of talent acquisition, e-commerce, e-Publishing and HR analytics.
Real NET Docs Show - Serverless Machine Learning v3.pptx - Luis Beltran
Slides of my presentation about Serverless Machine Learning using Azure Functions, Twilio APIs, and Cognitive Services for text and image processing of WhatsApp messages at .NET Docs Show weekly community event organized by Microsoft
Latam Space Week - Clasificación de rocas espaciales por medio de IA.pptx - Luis Beltran
Slides of my presentation about Space rocks image classification using Machine Learning and Artificial Intelligence with Python at Latam Space Week event
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. Afterwards we held a lovely workshop with the participants, trying to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
3. Azure Cognitive Services
Bringing AI within reach of every developer, without requiring machine-learning expertise!
• Vision: Computer Vision, Face/Emotion Recognition, OCR/Handwriting, Custom Vision, Video Indexer
• Voice: Text-to-Speech, Speech-to-Text, Translator, Custom Speech
• Language: Language Understanding, Text Translator, Text Analytics, QnA Maker
• Web Search: Bing Custom Search, Bing Visual Search, Bing Autosuggest, Bing Image Search, Bing News Search
microsoft.com/cognitive
4. Computer Vision
• Custom Vision: Customize image recognition and tailor it to your business needs
• Computer Vision API: Analyze content in images and videos
• Face API: Detect and identify people and emotions in images
• Form Recognizer: Extract text, key-value pairs, and tables in documents
5. Imagine if you could…
• Automatically create all of the metadata (spoken text, written text, faces, places, objects, etc.) for any video, including from your archives.
• As a call center operator, scan for frustrated customers or count time spent on hold.
• Collaboratively upload & edit videos in real time, search to find key moments, and speed up time to publish new content.
• Enable your viewers to search by name, by keyword, by shot location, or by key objects.
• Distribute playable clips of live video content internally or externally without long download or encode times.
• Create automated summaries or highlight reels of your video content based on scene detection, specific people, and motion within the video.
• Translate your content into other languages for global and diverse audience reach.
6. Video AI is the key to solving these challenges
• Improve Content Discoverability
• Increase Content Value
• Personalize the Viewing Experience
• Uncover Hidden Content Insights
• Reduce Manual Labor
• Increase Revenue / Drive Viewing
• Targeted Advertising
• Predictive modeling & recommendations
7. Video Indexer
Analyze the visual and audio channels in a video, and index its content.
• Azure Video Indexer is a service that extracts valuable insights from media.
• It uses machine learning models that can be further customized and trained.
• The video insights include face identification, text recognition, object labels, scene segmentations…
• Additional insights are extracted from audio, such as transcription and emotion detection.
• You can use these results to improve search, extract clips, create thumbnails, and more, thus enhancing user engagement.
• It is available as a service and as a platform.
8. Subscribe to the API
• To use Video Indexer, you need to create a subscription.
• You can sign up for a trial:
  • Up to 600 minutes of free indexing using the Video Indexer Portal
  • Up to 2,400 free minutes using the API
• If you sign up for a trial, you will have a subscription created automatically.
videoindexer.ai
9. Video Indexer Portal
• Upload videos
  • From URL
  • From file
• Most common media formats are supported (MOV, WMV, MPG, AVI, MP4…)
• Features are indexed
28. Call to Action
• Video Indexer: https://www.videoindexer.ai/
• Video Indexer API Portal: https://api-portal.videoindexer.ai/
• Extract insights from videos with the Video Indexer Service: https://docs.microsoft.com/en-us/learn/modules/extract-insights-from-videos-with-video-indexer-service/
29. Thank you for your attention!
Luis Beltrán
Tomás Bata University in Zlín
Tecnológico Nacional de México en Celaya
luis@luisbeltran.mx luisbeltran.mx @darkicebeam
About Me:
https://about.me/luis-beltran
Editor's Notes
Your goal is to help travelers search and filter to find videos uploaded by others.
Proper indexing of media and extracting insights is a challenge.
The Video Indexer can extract insights from posted videos to help build a stunning platform with easy-to-implement features.
Azure Cognitive Services is a suite of services and APIs backed by machine learning that enables developers to incorporate intelligent features such as facial recognition in photos and videos, sentiment analysis in text, and language understanding into their applications.
Video Indexer is one of the newest members of the Cognitive Services family. Its purpose is to transform raw video content into content that is searchable, discoverable, and more engaging to the user. Want to generate a video transcript, index words spoken in the video or written on a whiteboard, or create a list of keywords from topics discussed in the video? Video Indexer can do all this and more. It can even find individuals in the video, and sometimes tell who they are.
Video Indexer is both a service and an API. The service is accessed through a Web portal. It allows you to upload videos and examine the information generated from them. The Video Indexer API is a REST API that does everything the portal does, and also allows you to access the information that is generated when videos are indexed.
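As a hedged sketch of that REST flow, the helpers below build the token and upload URLs: you first request an access token for the account, then POST a video by URL for indexing. The account ID, location, and token values are placeholders, and the exact endpoint shapes should be confirmed in the Video Indexer API portal.

```python
# Hedged sketch of the Video Indexer REST flow (endpoint shapes follow
# api.videoindexer.ai; account ID, location, and token are placeholders).
from urllib.parse import urlencode

API = "https://api.videoindexer.ai"

def token_url(location, account_id):
    """GET this URL (with an Ocp-Apim-Subscription-Key header) for a token."""
    return f"{API}/Auth/{location}/Accounts/{account_id}/AccessToken"

def upload_url(location, account_id, access_token, name, video_url):
    """POST to this URL to index a video that is reachable at video_url."""
    qs = urlencode({"name": name, "videoUrl": video_url,
                    "accessToken": access_token})
    return f"{API}/{location}/Accounts/{account_id}/Videos?{qs}"

print(upload_url("trial", "<account-id>", "<token>",
                 "Machine Learning in IoT Solutions",
                 "https://topcs.blob.core.windows.net/public/"
                 "Machine-Learning-in-IoT-solutions_high.mp4"))
```

After the upload call, the same API exposes the generated insights (faces, keywords, transcript) for the indexed video, which is what the portal displays.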
Open the Video Indexer portal in your browser and select Get Started. Then select Sign in with your work account (AAD) to sign in with a work or school account, or Sign in with your personal Microsoft account to use a personal Microsoft account. Answer Yes if prompted to let this app access your info.
Select the Upload button. Then select enter a file url to upload a video from a URL.
Paste the following URL into the URL field, and enter "Overview of the Microsoft AI School" as the video name. Then select Upload to begin the upload.
https://topcs.blob.core.windows.net/public/Machine-Learning-in-IoT-solutions_high.mp4
Machine Learning in IoT Solutions
When indexing is complete, you will receive an e-mail notification for each video with a link to the video and a short description of what was found in it, such as people, topics, and keywords. Wait for all three videos to finish indexing, and then proceed to the next exercise.
In Video Indexer, insights are aggregated views of the knowledge extracted from a video, such as faces, keywords, and sentiment. For example, you can see the faces of people appearing in the video, as well as time ranges and percentages for each face shown. Video Indexer cross-references the faces that it finds against a database of thousands of famous people and automatically identifies them. You can see for yourself by opening the "Microsoft in Education" video in the portal. Microsoft CEO Satya Nadella appears in that video, and Video Indexer recognizes him.
Video Indexer automatically generates video transcripts based on its built-in speech and speaker recognition services. It even provides facilities for editing the information that it generated so you can correct errors in transcripts, put names to faces that weren't recognized, and more.
The "Insights" tab shows people featured in the video, keywords generated from the video, topics identified in the video, brands featured in the video, and even emotions found in the video. You can select Play next for any of these items and cycle through the corresponding points in the video.
In this example, Video indexer found two people in the video. It was unable to identify them because they don't appear in its database of famous people. However, you can lend a helping hand by identifying them yourself. Enable editing by selecting the Edit icon in the upper-right corner. Then select the pencil icon next to "Unknown #1" and enter "Sonya Koptyev" as the person's name. Finish up by pressing Enter to save the change.
Repeat this step for the "Unknown #2" in the video. This person's name is "Seth Juarez".
Want to see a full transcript of the video? Select Timeline at the top of the page. Video Indexer uses a deep neural network (DNN) to aid in converting speech to text, but such conversions are rarely perfect. Here, too, you can help out by editing words and phrases that weren't converted properly. To demonstrate, make sure you're still in editing mode and change "High Amsonia captive." to "Hi, I'm Sonya Koptyev."
Video Indexer has the ability to translate transcripts into a variety of languages, including German, Dutch, Spanish, French, Czech, Korean, and Japanese. To demonstrate, select Timeline again, select the world icon, and select a language other than English from the drop-down list.
Once a video is indexed, you can search its contents. Type "suggestion" into the search box at the top of the page and press Enter. Confirm that the search results include four instances in which the word "suggestion" was found in the video.
Select Insights, and then search for the word "intelligence." This time, the results are conceptual topics that include the search term.
Video Indexer provides a wealth of information regarding each video that it indexes, and this information is available not only in the portal, but also through the Video Indexer API.
The Video Indexer portal provides a window into the videos that you index and lets you see a wealth of information extracted from them. But the real power of Video Indexer lies in the Video Indexer API, which lets you submit videos for indexing programmatically and access the results using a REST API. In Exercise 4, you will build an app that uses this API to expose content in the videos you indexed in Exercise 1. But to call the API, you must first subscribe to it and obtain an API key that is transmitted in each request. In this exercise, you will create a Video Indexer API subscription and retrieve the API key created for it.
Open the Video Indexer API portal in your browser and select SIGN IN in the top-right corner. Sign in with your Microsoft account — the same one you used to sign in to the Video Indexer portal. Answer Yes if prompted to let this app access your info.
Select Products, and then select Authorization.
Select the Subscribe button. The subscription will be created.
Select Products again, and then select Authorization. You will now see a list of the subscriptions you have, so select the Product Authorization Subscription.
Select the Show button next to the Primary Key. Copy the API key to the clipboard, and then select Hide to hide it again.
Now that you have an API key, you can write apps that call the Video Indexer API. The API key travels in an HTTP header in each request; without a valid key, the Video Indexer API rejects the request. This is the API's way of ensuring that the caller is authorized.
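As a concrete illustration, here is a minimal Python sketch of attaching the key to a request. The `Ocp-Apim-Subscription-Key` header name is the convention used by Azure API Management, which fronts the Video Indexer API; the key and account ID below are placeholders, not real values.

```python
# Sketch: attaching the Video Indexer API key to an HTTP request.
# The Ocp-Apim-Subscription-Key header is the Azure API Management
# convention; API_KEY and the account ID are placeholders.
from urllib.request import Request

API_KEY = "<your-primary-key>"  # the Primary Key copied from the API portal

def build_request(path: str) -> Request:
    """Build a GET request for a Video Indexer API path with the key attached."""
    url = f"https://api.videoindexer.ai/{path}"
    return Request(url, headers={"Ocp-Apim-Subscription-Key": API_KEY})

# Example: a request for a trial-account access token (placeholder account ID).
req = build_request("auth/trial/Accounts/<account-id>/AccessToken")
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen`) only succeeds against a live subscription, but the construction above shows where the key travels.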
There is no cost associated with this part of the lab because it doesn't require an Azure subscription.
The Video Indexer API is a rich one that includes methods for uploading videos for indexing, searching indexed videos, retrieving and modifying transcripts, monitoring the processing state as a video is being indexed, and more. One of the more powerful methods is Get Video Index, which returns the indexed content of a video containing the same kind of detailed information found in the Video Indexer portal after a video is indexed.
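A Get Video Index call can be composed as in the sketch below. The URL pattern and the `accessToken` query parameter follow the public REST reference, but the location, account ID, video ID, and token are placeholders you would supply from your own subscription.

```python
# Sketch: composing and calling the Get Video Index method. The location,
# account ID, video ID, and access token are placeholders; an access token
# is obtained separately and is distinct from the API key itself.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def get_video_index_url(location: str, account_id: str,
                        video_id: str, access_token: str) -> str:
    """Compose the Get Video Index endpoint URL with its query string."""
    query = urlencode({"accessToken": access_token, "language": "English"})
    return (f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
            f"/Videos/{video_id}/Index?{query}")

def fetch_index(url: str) -> dict:
    """Call the endpoint and parse the JSON insights document (needs a live account)."""
    with urlopen(url) as response:
        return json.load(response)

print(get_video_index_url("trial", "<account-id>", "<video-id>", "<token>"))
```

The JSON document this method returns carries the same faces, keywords, transcript, and sentiment data you explored in the portal.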
Video Explorer uses the Search Videos method, which is just one of more than 30 methods featured in the Video Indexer API. You could leverage additional APIs to make Video Explorer richer and more interactive. For example, you could allow users to search for people that appear in a video, or search specifically for text that is extracted via OCR. You could even use the Get Video Player Widget URL method to embed a video player in the app. Feel free to use these APIs to expand Video Explorer and customize it to fit your needs, and learn more about Video Indexer in the process.
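To make the shape of a Search Videos interaction concrete, here is a hypothetical sketch of summarizing its response. The `results` and `searchMatches` field names mirror the documented response shape, but treat them, and the sample data, as illustrative rather than definitive.

```python
# Sketch: summarizing a Search Videos response. The "results",
# "searchMatches", and "name" fields mirror the documented response
# shape, but treat them as illustrative; the sample data is made up.
def summarize_search(response: dict) -> list:
    """Return 'name (n matches)' strings for each video in a search response."""
    summaries = []
    for video in response.get("results", []):
        matches = video.get("searchMatches", [])
        summaries.append(f"{video.get('name', '?')} ({len(matches)} matches)")
    return summaries

# A made-up response resembling a search for "suggestion":
sample = {"results": [{"name": "Microsoft in Education",
                       "searchMatches": [{"text": "suggestion"}] * 4}]}
print(summarize_search(sample))  # ['Microsoft in Education (4 matches)']
```

A helper like this is the kind of building block an app such as Video Explorer could use to present search hits to the user.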