Slides from DevNexus in Atlanta, GA showing Cognitive Services. Minus the demos, unfortunately! The best place to check all this out is https://www.microsoft.com/cognitive-services/
Get a tour of Microsoft’s wide range of Cognitive Services with a deep dive into the Computer Vision API. Learn about how you can recognize images, faces and emotions. Also take a peek at other services such as translation, speech recognition, natural language processing, etc. Use the tools you know and love with Visual Studio, C# and .NET to jump into a new world of Cognitive Services.
Explore the risks and concerns surrounding generative AI in this informative SlideShare presentation. Delve into the key areas of concern, including bias, misinformation, job loss, privacy, control, overreliance, unintended consequences, and environmental impact. Gain valuable insights and examples that highlight the potential challenges associated with generative AI. Discover the importance of responsible use and the need for ethical considerations to navigate the complex landscape of this transformative technology. Expand your understanding of generative AI risks and concerns with this engaging SlideShare presentation.
Discover the power of Vector Search using OpenAI in Azure Cognitive Search through a comprehensive .NET application tutorial. This presentation will delve into the intricacies of integrating Azure OpenAI with your .NET applications, focusing specifically on the creation and utilization of vector embeddings. Learn how to effectively harness the capabilities of Azure OpenAI for generating precise vector embeddings, which are crucial for enhancing search functionalities in your applications. We will explore the concept of Hybrid search, demonstrating how it combines traditional keyword search with the advanced vector search to provide more relevant and context-aware results. This session is designed to equip developers with the knowledge and skills needed to implement state-of-the-art search capabilities in their .NET applications, leveraging the cutting-edge AI and machine learning technologies provided by Azure OpenAI.
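The hybrid ranking described above can be sketched in a few lines. Azure AI Search documents Reciprocal Rank Fusion (RRF) as its method for merging keyword and vector result lists; the Python below is a minimal illustration of that idea (plus the cosine similarity used for vector scoring), not the service's actual implementation, and the document ids are hypothetical.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def reciprocal_rank_fusion(rankings, k=60):
    # Fuse several ranked result lists into one ordering.
    # `rankings` is a list of lists of document ids, best first.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Keyword search and vector search disagree on order; RRF rewards
# documents that rank highly in both lists.
keyword_ranking = ["doc_b", "doc_a", "doc_c"]
vector_ranking = ["doc_a", "doc_d", "doc_b"]
fused = reciprocal_rank_fusion([keyword_ranking, vector_ranking])
```

Here `doc_a` wins the fused ranking because it places near the top of both lists, even though neither list ranks it first everywhere — the context-aware behavior the abstract describes.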
AI and ML Series - Introduction to Generative AI and LLMs - Session 1, by DianaGray10
Session 1
👉This first session will cover an introduction to Generative AI & harnessing the power of large language models. The following topics will be discussed:
Introduction to Generative AI & harnessing the power of large language models.
What’s generative AI & what’s LLM.
How are we using it in our document understanding & communication mining models?
How to develop a trustworthy and unbiased AI model using LLM & GenAI.
Personal Intelligent Assistant
Speakers:
📌George Roth - AI Evangelist at UiPath
📌Sharon Palawandram - Senior Machine Learning Consultant @ Ashling Partners & UiPath MVP
📌Russel Alfeche - Technology Leader RPA @qBotica & UiPath MVP
Langchain Framework is an innovative approach to linguistic data processing, combining the principles of language sciences, blockchain technology, and artificial intelligence. This deck introduces the groundbreaking elements of the framework, detailing how it enhances security, transparency, and decentralization in language data management. It discusses its applications in various fields, including machine learning, translation services, content creation, and more. The deck also highlights its key features, such as immutability, peer-to-peer networks, and linguistic asset ownership, that could revolutionize how we handle linguistic data in the digital age.
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (WSDM 2019), by Krishnaram Kenthapadi
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial presents an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We will motivate the need for adopting a "fairness by design" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we will focus on the application of fairness-aware machine learning techniques in practice by presenting non-proprietary case studies from different technology companies. Finally, based on our experiences working on fairness in machine learning at companies such as Facebook, Google, LinkedIn, and Microsoft, we will present open problems and research directions for the data mining / machine learning community.
Please cite as:
Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kiciman, and Margaret Mitchell. Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. WSDM 2019.
The key challenge in making AI technology more accessible to the broader community is the scarcity of AI experts. Most businesses simply don’t have the much-needed resources or skills for modeling and engineering. This is why automated machine learning and deep learning technologies (AutoML and AutoDL) are increasingly valued by academia and industry. The core of AI is model design. Automated machine learning technology reduces the barriers to AI application, enabling developers with no AI expertise to independently and easily develop and deploy AI models. Automated machine learning is expected to completely overturn the AI industry in the next few years, making AI ubiquitous.
Regulating Generative AI - LLMOps pipelines with Transparency, by Debmalya Biswas
The growing adoption of Gen AI, esp. LLMs, has re-ignited the discussion around AI Regulations — to ensure that AI/ML systems are responsibly trained and deployed. Unfortunately, this effort is complicated by multiple governmental organizations and regulatory bodies releasing their own guidelines and policies with little to no agreement on the definition of terms.
Rather than trying to understand and regulate all types of AI, we recommend a different (and practical) approach in this talk based on AI Transparency —
to transparently outline the capabilities of the AI system based on its training methodology and set realistic expectations with respect to what it can (and cannot) do.
We outline LLMOps architecture patterns and show how the proposed approach can be integrated at different stages of the LLMOps pipeline capturing the model's capabilities. In addition, the AI system provider also specifies scenarios where (they believe that) the system can make mistakes, and recommends a ‘safe’ approach with guardrails for those scenarios.
Azure OpenAI Service provides REST API access to OpenAI's powerful language models, including the GPT-3, GPT-4, DALL-E, Codex, and Embeddings model series. These models can be easily adapted to any specific task, including but not limited to content generation, summarization, semantic search, translation, transformation, and code generation. Microsoft offers the accessibility of the service through REST APIs, Python or C# SDK, or the Azure OpenAI Studio.
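As a concrete illustration of the REST access mentioned above, the sketch below assembles the URL, headers, and JSON body for an Azure OpenAI embeddings call (the documented pattern is `https://{resource}.openai.azure.com/openai/deployments/{deployment}/...` with an `api-key` header). No network call is made, and the resource and deployment names are hypothetical; treat this as a shape of the request, not a production client.

```python
import json

def build_embeddings_request(resource, deployment, api_key, text,
                             api_version="2023-05-15"):
    # Assemble the URL, headers, and body for an Azure OpenAI
    # embeddings request; nothing is sent over the network here.
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/embeddings?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = json.dumps({"input": text})
    return url, headers, body

# Hypothetical resource and deployment names, for illustration only.
url, headers, body = build_embeddings_request(
    "my-resource", "my-embedding-deployment", "SECRET-KEY", "hello world")
```

In a real application you would hand these three values to any HTTP client (or skip this entirely and use the Python/C# SDKs the service also offers).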
by Keith Steward, Solutions Architect, AWS
Amazon Lex is a service for building conversational interfaces into any application using voice and text, and Amazon Polly is a service that turns text into lifelike speech. This session combines both AWS services: the presenter will demonstrate how to build DevOps and Help Desk chatbots that feature spoken-voice interfaces, and explore the potential of bringing characters to life through interactive chatbots that improve customer engagement. Attendees will gain the foundational skills needed to enrich their applications with natural, conversational interfaces. Level 300
AI Agent and Chatbot Trends For Enterprises, by Teewee Ang
Renowned entrepreneurs and technologists including Mark Zuckerberg, Elon Musk and Reid Hoffman have recently declared their renewed interest in Artificial Intelligence (AI) projects. AI assistants and chatbots are fast becoming key AI applications. Read about the AI engines behind chatbots and the key AI assistant trends in enterprises and organisations.
For this plenary talk at the Charlotte AI Institute for Smarter Learning, Dr. Cori Faklaris introduces her fellow college educators to the exciting world of generative AI tools. She gives a high-level overview of the generative AI landscape and how these tools use machine learning algorithms to generate creative content such as music, art, and text. She then shares some examples of generative AI tools and demonstrates how she has used some of these tools to enhance teaching and learning in the classroom and to boost her productivity in other areas of academic life.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
An overview of some key concepts of chatbots, with some do's and don'ts.
We will happily present the high-resolution version of this presentation, extended with additional detailed slides, and a clear explanation at your offices. Contact us for that.
Build an LLM-powered application using LangChain, by AnastasiaSteele10
LangChain is an advanced framework that allows developers to create language model-powered applications. It provides a set of tools, components, and interfaces that make building LLM-based applications easier. With LangChain, managing interactions with language models, chaining together various components, and integrating resources like APIs and databases is a breeze. The platform includes a set of APIs that can be integrated into applications, allowing developers to add language processing capabilities without having to start from scratch.
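The "chaining together various components" idea above can be illustrated framework-agnostically. The sketch below is not LangChain's actual API; it is a plain-Python stand-in showing how a prompt template, a model call, and an output parser compose into one callable chain (the fake LLM here just upper-cases its input so the example stays self-contained).

```python
def make_chain(*steps):
    # Compose components so each step's output feeds the next step.
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Hypothetical components standing in for a prompt template,
# a language model call, and an output parser.
format_prompt = lambda q: f"Answer briefly: {q}"
fake_llm = lambda prompt: prompt.upper()   # placeholder for a real LLM call
parse_output = lambda text: text.strip()

chain = make_chain(format_prompt, fake_llm, parse_output)
result = chain("What is LangChain?")
```

Swapping the placeholder lambdas for real model clients and parsers is essentially what a framework like LangChain manages for you, along with memory and external resource integrations.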
Cognitive Services: Building Smart Apps with Speech, NLP & Vision, by Nick Landry
Your computer can recognize your voice and detect words in a speech dictation, but can it truly understand the meaning of what you are saying? Can it analyze your intent and respond accordingly? You don’t need a PhD in artificial intelligence to integrate speech and natural language understanding in your projects. Microsoft Cognitive Services (aka “Project Oxford”) is a portfolio of cloud-based REST APIs and SDKs powered by Machine Learning which enable developers to write applications which understand the content within the rapidly growing set of multimedia data. Cognitive Services API services will help you understand and interact with audio, text, image, and video. In this session, we’ll start with an overview of available services for speech recognition and speech synthesis. Then we’ll explore through live demos how to leverage the Language Understanding Intelligent Service which lets you determine intent, detect entities in user speech and improve language understanding models to more efficiently work with user data. Lastly, we’ll leverage Computer Vision APIs to detect human faces, analyze the content of images, and perform Optical Character Recognition (OCR) to detect and analyze words within a photo. Come learn how your apps can tap into the same active learning services behind the brain of Cortana, and get started writing smart applications that can understand what your users are saying.
Rina Ahmed, Tech Evangelist at Microsoft, is giving an introduction to the Microsoft Cognitive Services APIs. You can find live examples, the range of services the APIs offer and information on how to integrate them.
Bots are the New Apps: Building with the Bot Framework & Language Understanding, by Nick Landry
Bots (or conversation agents) are rapidly becoming an integral part of your users’ digital experience – they are as vital a way for users to interact with a service or application as is a web site or a mobile experience. Developers writing bots all face the same problems: bots require basic I/O; they must have language and dialog skills; and they must connect to users – preferably in any conversation experience and language the user chooses. In this session, you will learn how to build and connect intelligent bots to interact with your users naturally wherever they are, from text/sms to Skype, Slack, Facebook, Office 365 mail and other popular services.
We will explore the Microsoft Bot Framework, which provides just what you need to build and connect intelligent bots that interact naturally wherever your users are talking. Through live demos, we’ll cover the Bot Connector in the cloud, the Bot Build SDK with C# (Node.js is also supported) and we’ll also explore how to handle natural language input from the user with the Language Understanding Intelligent Service (LUIS) from Microsoft Cognitive Services. Every business needs bots to provide a more personal experience to its users and customers. Come learn how you can build your own bots in just a few hours.
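To make the "handle natural language input" step concrete: once a service like LUIS returns a top intent plus extracted entities, the bot's job is mostly dispatch. The sketch below is a minimal, hypothetical illustration in Python (the real Bot Builder SDK uses C# or Node.js dialogs); the intent names, handlers, and response shape are all assumptions for the example.

```python
def handle_greeting(entities):
    return "Hello! How can I help?"

def handle_order_status(entities):
    # Entities arrive already extracted by the language-understanding
    # service; fall back gracefully when one is missing.
    order_id = entities.get("order_id", "unknown")
    return f"Looking up order {order_id}..."

HANDLERS = {
    "Greeting": handle_greeting,
    "OrderStatus": handle_order_status,
}

def dispatch(luis_result):
    # `luis_result` mimics the shape of an intent-recognition response:
    # a top-scoring intent plus a dict of extracted entities.
    handler = HANDLERS.get(luis_result["topIntent"])
    if handler is None:
        return "Sorry, I didn't understand that."
    return handler(luis_result.get("entities", {}))

reply = dispatch({"topIntent": "OrderStatus",
                  "entities": {"order_id": "A-42"}})
```

The point of the pattern: the hard NLP work lives in the cloud service, so the bot itself stays a small table of intents mapped to handlers.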
First we had the web, we were able to connect people with information using mice and keyboards. Then we had apps, written natively to take advantage of device sensors and operating systems. Next in the information revolution is Bots, making it simpler and quicker to connect users with the information and services that they need and using more natural methods of interaction.
In this talk I will open up the Microsoft Bot Framework, show how to develop a Bot using it and show you that you have the skills to do this too. Expect code, demos and to leave knowing what someone is actually talking about when they use the phrase ‘Conversations as a Platform'.
Java Puzzlers NG S02: Down the Rabbit Hole, as presented at DevNexus 2017, by Baruch Sadogursky
Moar puzzlers! The more we work with Java 8, the more we go into the rabbit hole. Did they add all those streams, lambdas, monads, Optionals and CompletableFutures only to confuse us? It surely looks so! And Java 9 that heads our way brings even more of what we like the most, more puzzlers, of course! In this season we as usual have a great batch of the best Java WTF, great jokes to present them and great prizes for the winners!
Building a private CI/CD pipeline with Java and Docker in the Cloud, as presen..., by Baruch Sadogursky
A private Java (Maven or Gradle) repository as a service can be setup in the cloud. A private Docker registry as a service can be easily setup in the cloud. But what if you want to build a holistic CI/CD pipeline, and on the cloud of YOUR choice?
In this talk Baruch will take you through the steps of setting up a universal artifact repository, which can serve both Java and Docker. You’ll learn how to build a CI/CD pipeline with traceable metadata from the Java source files all the way to Docker images. Amazon, Azure, and Google Cloud will be used as examples, although the recipes shown would be applicable to other clouds as well.
Apache Cordova is one of the most popular frameworks for cross-platform mobile development. Web developers can build apps for iOS, Android, and Windows based on the same frameworks they use for the Web by using a shared codebase. However, the methods for improving mobile app performance can be different from those web developers use for web apps. Doris Chen outlines what impacts “native performance” and explains how the startup time, as well as the overhead of resume, memory, communication, and the Web, can all contribute to the performance of Cordova apps. To build Cordova apps that perform well, it’s important to understand how to avoid common pitfalls and how to use the technologies in the most efficient ways. Doris introduces tools for performance tests and demonstrates how to measure mobile app performance by using diagnostic tools for different platforms. Doris also shares practical tips for building faster Cordova apps by exploring Document Object Model (DOM) complexity, animation techniques, and memory management.
You can write Android applications in Ceylon, using the standard Android tools. This has many advantages, since you can use all the language features available for the other platforms that Ceylon targets, such as:
Union and intersection types
Top-level and higher-order functions
Tuples
Comprehensions
Typesafe metamodel (Ceylon’s version of Java reflection, with additional type safety)
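The Ceylon features listed above have rough analogues in other languages. The Python sketch below is not Ceylon syntax; it only illustrates the corresponding ideas (union type hints, higher-order functions, tuples, comprehensions) in a runnable form, with names chosen purely for the example.

```python
from typing import Union

# Union types: a value that may be an int or a str, handled by case.
def describe(x: Union[int, str]) -> str:
    return f"int {x}" if isinstance(x, int) else f"str {x}"

# Top-level and higher-order functions: a function taking a function.
def twice(f, x):
    return f(f(x))

# Tuples and comprehensions combined: pairs of n and its square.
pairs = [(n, n * n) for n in range(4)]
```

Ceylon checks the union-type cases statically (flow-sensitive typing), whereas Python's `isinstance` check here happens at runtime; the typesafe metamodel has no direct Python analogue.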
DevOps @Scale (Greek Tragedy in 3 Acts), as it was presented at DevNexus 2017, by Baruch Sadogursky
As in a good Greek tragedy, scaling DevOps to big teams has 3 stages and usually ends badly. In this play (it’s more than a talk!) we’ll present Pentagon Inc and their way of scaling DevOps from a team of 3 engineers to a team of 100 (spoiler: it’s painful!)
Agile Metrics: Velocity is NOT the Goal - NDC Oslo 2014, by Doc Norton
Velocity is one of the most common metrics used, and one of the most commonly misused, on agile projects. Velocity is simply a measurement of speed in a given direction: the rate at which a team is delivering toward a product release. As with a vehicle en route to a particular destination, increasing the speed may appear to ensure a timely arrival. However, that assumption is dangerous because it ignores the risks that come with higher speeds. And while it’s easy to increase a vehicle’s speed, where exactly is the accelerator on a software team?
Michael “Doc" Norton walks us through the Hawthorne Effect and Goodhart’s Law to explain why setting goals for velocity can actually hurt a project's chances. Take a look at what can negatively impact velocity, ways to stabilize fluctuating velocity, and methods to improve velocity without the risks. Leave with a toolkit of additional metrics that, coupled with velocity, give a better view of the project's overall health.
Your Cognitive Future in the Retail IndustryLuke Farrell
Taking from Tero Angeria IBM
Cognitive + retail = the future
Welcome to the age of cognitive computing, where intelligent machines simulate human brain capabilities to help solve society’s most vexing problems. For retail,cognitive computing has already arrived, and its potential to transform the industry is enormous. Cognitive systems are driving more personalized
shopping experiences and helping unearth customer trends. Our research reveals that retail leaders globally are poised to embrace this groundbreaking technology more holistically and, by doing so, will redefine the future in retail.
Post-Modern CSS: Start learning CSS Grid, Flexbox and other new propertiesBryan Robinson
A look at the history and future of CSS with an eye toward new features like CSS Grid and Flexbox.
Examples for this presentation live at http://bryanlrobinson.com/post-modern-examples/
Intelligent Mobile App with Azure Cognitive ServicesPuja Pramudya
I present how to add intelligence to mobile app. Using Azure Cognitive Services, we can leverage several human intelligence by using easy-to-use REST API
Microsoft AI: Cognitive Service - Global Azure bootcamp 2018JoTechies
Infuse your apps, websites and bots with intelligent algorithms to see, hear, speak, understand and interpret your user needs through natural methods of communication. Transform your business with AI today
The Vision of Computer Vision: The bold promise of teaching computers to unde...ITCamp
How is Computer vision, machine learning, and mixed reality designed into innovative software built on the latest and greatest emerging experiences? Join Tim Huckaby in a keynote focused on the power of Computer Vision: Designing, developing, and putting interactive vision based software systems into production. This keynote will explain the components of computer vision like machine learning and demonstrate the use cases where computer vision solutions are happening. In addition, those coming in the immediate future. Moreover, speculate on the computer vision solutions of the future. This demo-heavy keynote will show you a number of real interactive solutions in the realm of 2D & 3D cameras in addition to mixed reality devices that produce interactive holographic experiences.
Unleash some AI into the Wild - IT.A.K.EHenk Boelman
We’ll dive into a real-world case on how A.I. can assist medical workers in the field. From eliminating time-consuming documentation to gaining valuable insights into their work. By letting AI take care of common tasks, medical workers can focus on delivering essential medical care.
In this session, I’ll take you along on a AI-First technical journey, from patient identification handled by Cognitive Services, efficiency improvement with natural language processing and handling data in the cloud. Besides that we take a look at the collected data and see how we can make a predictive model.
We took the app to some remote places in Uganda and beta tested it. We will zoom into some challenges we faced running it in the field, discuss the tools and components used and give you a peek into future steps.
Why Don’t You Understand Me? Build Intelligence into Your AppsEran Stiller
Artificial Intelligence is not something exotic anymore. With Siri, Cortana & Google Assistant on virtually any device, our users have come to expect our applications to understand them in more meaningful ways than ever before. Luckily for us, various platforms have emerged which enable us to integrate this kind of functionality into our applications and services without having to do the heavy lifting by ourselves.
In this session we will take a full-fledged cross platform Xamarin application built around a public image API, and enhance it with intelligence using Azure Cognitive Services. We'll see how we can add vision, language and speech into our applications, making them smarter than ever before. Stand out in the crowd - bring intelligence into your apps!
An Introduction to the AI services at AWS - AWS Summit Tel Aviv 2017Amazon Web Services
Artificial Intelligence (AI) services on the AWS cloud bring deep learning (DL) technologies like natural language understanding (NLU), automatic speech recognition (ASR), image recognition and computer vision (CV), text-to-speech (TTS), and machine learning (ML) within reach of every developer. In this session, you will be introduced to several new AI services: Amazon Lex, to build sophisticated text and voice chatbots; Amazon Rekognition, for deep learning-based image recognition; and Amazon Polly, for turning text into lifelike speech. The opportunities to apply one or more of these DL services are nearly boundless and this session will provide a number of examples and use cases to help you get started.
Amazon AI services bring natural language understanding (NLU), automatic speech recognition (ASR), visual search and image recognition, text-to-speech (TTS), and ML technologies within the reach of every developer. In this session, we will dive deep into 2 specific AWS services: Amazon Lex and Amazon Polly. Amazon Lex uses the same technology as Amazon Alexa to provide advanced deep learning functionalities of automatic speech recognition (ASR) and natural language understanding (NLU) to enable you to build applications with conversational interfaces, commonly called chatbots. Amazon Polly is a service that turns text into lifelike speech. Polly lets you create applications that speak in over two dozen languages with a wide variety of natural sounding male and female voices to enable you to build entirely new categories of speech-enabled products.
Microsoft Cognitive Services let you build apps with powerful algorithms using just a few lines of code. They work across devices and platforms such as iOS, Android, and Windows, keep improving, and are easy to set up.
An Overview of AI on the AWS Platform - June 2017 AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Learn about the breadth of AI services available on the AWS Cloud
- Gain insight into Amazon Lex, Amazon Polly, and Amazon Rekognition
- Learn more about why Apache MXNet is the deep learning framework of choice for AWS
Similar to Intro to Microsoft Cognitive Services (20)
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
7. Bringing it all together
The Seeing AI App
Computer Vision, Image, Speech Recognition, NLP, and ML from Microsoft Cognitive Services
Watch Video Here | Read Blog Here
10. Vision
Computer Vision API: Distill actionable information from images
Video API: Analyze, edit, and process videos within your app
Face API: Detect, identify, analyze, organize, and tag faces in photos
Emotion API: Personalize experiences with emotion recognition
12. Updated Computer Vision API
Content of Image:
Categories v0: [{ "name": "animal", "score": 0.9765625 }]
v1: [{ "name": "grass", "confidence": 0.9999992847442627 },
  { "name": "outdoor", "confidence": 0.9999072551727295 },
  { "name": "cow", "confidence": 0.99954754114151 },
  { "name": "field", "confidence": 0.9976195693016052 },
  { "name": "brown", "confidence": 0.988935649394989 },
  { "name": "animal", "confidence": 0.97904372215271 },
  { "name": "standing", "confidence": 0.9632768630981445 },
  { "name": "mammal", "confidence": 0.9366017580032349, "hint": "animal" },
  { "name": "wire", "confidence": 0.8946959376335144 },
  { "name": "green", "confidence": 0.8844101428985596 },
  { "name": "pasture", "confidence": 0.8332059383392334 },
  { "name": "bovine", "confidence": 0.5618471503257751, "hint": "animal" },
  { "name": "grassy", "confidence": 0.48627158999443054 },
  { "name": "lush", "confidence": 0.1874018907546997 },
  { "name": "staring", "confidence": 0.165890634059906 }]
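Because the v1 response is plain JSON, filtering tags client-side takes only a few lines. A minimal sketch in Python, using a subset of the sample response above (the 0.9 threshold is an arbitrary choice, not an API parameter):

```python
import json

# A subset of the v1 tag response shown above.
response = json.loads("""
[{"name": "grass", "confidence": 0.9999992847442627},
 {"name": "outdoor", "confidence": 0.9999072551727295},
 {"name": "cow", "confidence": 0.99954754114151},
 {"name": "lush", "confidence": 0.1874018907546997}]
""")

def confident_tags(tags, threshold=0.9):
    """Keep only the tags the service is reasonably sure about."""
    return [t["name"] for t in tags if t["confidence"] >= threshold]

print(confident_tags(response))  # → ['grass', 'outdoor', 'cow']
```

Lowering the threshold trades precision for recall: at 0.1 the low-confidence "lush" tag is included as well.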
Describe
0.975 "a brown cow standing on top of a lush green field"
0.974 "a cow standing on top of a lush green field"
0.965 "a large brown cow standing on top of a lush green field"
19. Language
Bing Spell Check API: Detect and correct spelling mistakes within your app
Language Understanding Intelligent Service: Teach your apps to understand commands from your users
Web Language Model API: Leverage the power of language models trained on web-scale data
Linguistic Analysis API: Easily parse complex text with language analysis
Text Analytics API: Detect sentiment, key phrases, topics, and language from your text
20. Language Understanding Models
Reduce labeling effort with interactive featuring
Seamless integration with the Speech API
Deploy using just a few examples with active learning
Supports 5 languages (English, Chinese, Italian, French, Spanish)
What are Cognitive Services? Microsoft Cognitive Services are a new collection of intelligence and knowledge APIs that enable developers to ultimately build smarter apps.
NOTES: key concepts we are trying to convey in the statement above:
That we are bringing together Intelligence (Oxford) and Knowledge from the corpus of the web (Bing)
That cognitive = human perception and understanding, enabling your apps to see the world around them, to hear and talk back with the users—to have a human side.
What are Microsoft Cognitive Services?
Microsoft Cognitive Services is a new collection of intelligent APIs that allow systems to see, hear, speak, understand and interpret our needs using natural methods of communication. Developers can use these APIs to make their applications more intelligent, engaging and discoverable. To try Cognitive Services for free, visit www.microsoft.com/cognitive.
With Cognitive Services, developers can easily add intelligent features – such as emotion and sentiment detection, vision and speech recognition, knowledge, search and language understanding – into their applications. The collection will continuously improve, adding new APIs and updating existing ones.
Cognitive Services includes:
Vision: From faces to feelings, allow apps to understand images and video
Speech: Hear and speak to users by filtering noise, identifying speakers, and understanding intent
Language: Process text and learn how to recognize what users want
Knowledge: Tap into rich knowledge amassed from the web, academia, or your own data
Search: Access billions of web pages, images, videos, and news with the power of Bing APIs
At Microsoft, we’ve been offering APIs for a very long time across the company. In delivering Microsoft Cognitive Services API, we started with 4 last year at /build (2015); added 7 more last December, and today (May 2016) we have 22 APIs in our collection.
Cognitive Services are available individually or as a part of the Cortana Intelligence Suite, formerly known as Cortana Analytics, which provides a comprehensive collection of services powered by cutting-edge research into machine learning, perception, analytics and social bots.
These APIs are powered by Microsoft Azure.
Developers and businesses can use this suite of services and tools to create apps that learn about our world and interact with people and customers in personalized, intelligent ways.
Here’s a preview of the Seeing AI app, which started as a research project that helps people who are visually impaired or blind to understand who and what is around them.
The app will use computer vision, image & speech recognition, natural language processing and machine learning from Microsoft Cognitive Services. The app is under development and is not available today but it shows what’s possible with the APIs.
Vision
Computer Vision API: available as a free trial on the website microsoft.com/cognitive. There are also SDKs and samples available on GitHub or through NuGet, Maven, and CocoaPods for select platforms to make development easier. It's important to note that these are not client-side implementations, but light wrappers around the REST calls to make integration easy.
A photo app would use this as a way to tag user photos and make it easier for users to search through their collections. An assistive app would use this as a way to describe the surroundings to visually-impaired users. It works really well on both indoor and outdoor images; it can recognize common household objects, and it can describe outdoor scenes. However, we did not train on aerial images (say, from drones) or on many close-ups, so pictures that zoom in extremely on the subject won't do well. We also do really well recognizing celebrities, as long as most of the face is visible and they are facing the camera.
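Since the service is just a REST endpoint, calling it amounts to one authenticated POST. A minimal sketch that builds (but does not send) the analyze request; the region in the endpoint URL and the placeholder key are assumptions you would substitute with your own values:

```python
import json
import urllib.request

# Hypothetical values: substitute your own region endpoint and subscription key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"
API_KEY = "<your-subscription-key>"

def build_analyze_request(image_url, features=("Tags", "Description")):
    """Build (without sending) a Computer Vision analyze request
    for an image hosted at a public URL."""
    url = ENDPOINT + "?visualFeatures=" + ",".join(features)
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": API_KEY,  # Cognitive Services auth header
        },
        method="POST",
    )

req = build_analyze_request("https://example.com/cow.jpg")
print(req.full_url)
```

Sending it is then one call to `urllib.request.urlopen(req)`; the response body is the JSON shown on the slides above.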
Face API: Some potential uses for this technology include facial login, photo tagging, home monitoring, or attribute detection to determine age, gender, facial hair, etc.
Emotion API: available in the Azure Marketplace and as a free trial on the website microsoft.com/cognitive. See the Computer Vision description.
Build an app that responds to moods. Using facial expressions, this cloud-based API can detect happiness, neutrality, sadness, contempt, anger, disgust, fear, and surprise. The AI understands these emotions based on universal facial expressions, and it functions cross-culturally, so your app will work around the world. Some use cases: an advertising company testing user response to an ad, or a TV studio tracking responses to a pilot.
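The API returns a score per emotion for each detected face, so picking the dominant mood is a one-liner over the scores. A small sketch; the score values below are made-up illustration numbers, not real API output:

```python
# Hypothetical per-face scores in the shape the Emotion API returns.
scores = {
    "happiness": 0.92, "neutral": 0.05, "sadness": 0.01,
    "contempt": 0.004, "anger": 0.006, "disgust": 0.004,
    "fear": 0.003, "surprise": 0.003,
}

def dominant_emotion(scores):
    """Return the emotion label with the highest score."""
    return max(scores, key=scores.get)

print(dominant_emotion(scores))  # → happiness
```

An ad-testing app, for example, would log the dominant emotion per frame and aggregate over the audience.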
Video API: available as a free trial on the website microsoft.com/cognitive. See the Computer Vision description.
It brings Microsoft's state-of-the-art video processing algorithms to developers. With the Video API, developers can analyze and automatically edit videos: stabilize video, create motion thumbnails, track faces, and detect motion. Use cases: For Stabilization: if you have multiple action videos, you can use the stabilization algorithm to make them less shaky and easier to watch; you can also use stabilization as a first step before running the other video APIs. For Face Tracking: you can track faces in a video to do A/B testing in a retail setting, or combine Video API face tracking with the capabilities of the Face API to search surveillance, crime, or media footage for a particular person; it works best for frontal faces and currently cannot detect small, side, or partial faces. For Motion Detection: instead of having to watch long clips of surveillance footage, the API tells you when motion occurred and for how long; it detects motion against a stationary background (e.g. a fixed camera), and current limitations include night-vision videos, semi-transparent objects, and small objects. For Video Thumbnail: take a long video, such as a keynote presentation, and automatically create a short preview clip of the talk.
Bing Spell Check API: Microsoft's state-of-the-art cloud-based spelling algorithms detect a wide variety of spelling errors and provide accurate suggestions. Using Bing Spell Check, your mobile and PC apps will be powered with state-of-the-art spelling capabilities. Our service is trained on a massive corpus of data gleaned from billions of web pages, so there is no need to train your own models. The speller model is updated regularly and incorporates new terms and brands almost as quickly as they appear on the web. This API is available through Microsoft Cognitive Services for customers with low-volume and high-latency jobs; for high-volume and low-latency jobs we have an internal API which may be more suitable. Contact donaldbr directly for more information.
Use cases: 1) improve the quality of a website's product search; 2) provide spell correction for a keyboard app; 3) provide spell correction for text fields in an app or web page; 4) detect errors in UI text and user data. See https://blogs.msdn.microsoft.com/msr_er/2010/12/15/building-a-better-speller-bing-and-microsoft-research-offer-prizes-for-best-search-engine-spelling-alteration-services/ The speller is exceptional at common spelling errors with low edit distance (such as febuary -> February), but a lot of other spellers are good at that as well. We do a very good job with word breaking, proper names in context (try "director stephen spielberg"), and fictional character names, to give a few examples. Areas that remain a challenge are capitalization (we don't know what to do with "March", for example, even with context) and consistency (there are times when we will flag a word only intermittently depending on the context).
Web Language Model API: indexes the web and Bing queries to let users calculate the probabilities of natural language expressions and estimate the most likely words to follow an existing sequence of words. Use this API to insert spaces into a string of words without spaces, like a hashtag or URL, or to rerank machine translation / speech recognition candidate sentences based on the probability of each sentence.
Use this API for academic research. http://research.microsoft.com/apps/pubs/default.aspx?id=130762
We also have SDKs available for WebLM
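The word-breaking scenario (inserting spaces into a hashtag) can be illustrated offline with a toy vocabulary and a simple dynamic program. This is only a sketch of the idea: the real API scores candidate segmentations with web-scale language-model probabilities rather than a fixed word list:

```python
def word_break(s, vocab):
    """Split s into vocabulary words, if possible: a toy stand-in
    for the Web Language Model API's break-into-words operation."""
    # best[i] holds a list of words covering s[:i], or None if s[:i]
    # cannot be segmented with this vocabulary.
    best = [None] * (len(s) + 1)
    best[0] = []
    for i in range(1, len(s) + 1):
        for j in range(i):
            if best[j] is not None and s[j:i] in vocab:
                best[i] = best[j] + [s[j:i]]
                break
    return " ".join(best[-1]) if best[-1] is not None else None

vocab = {"cognitive", "services", "micro", "soft", "microsoft"}
print(word_break("microsoftcognitiveservices", vocab))
# → microsoft cognitive services
```

Where the toy version takes the first segmentation it finds, the service ranks all candidates by language-model probability, which is what lets it resolve ambiguous splits.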
Linguistic Analysis API: The Linguistic Analysis API helps you gain insights from text. Given a natural language parse, it’s easy to identify the concepts and entities (noun phrases), actions (verbs and verb phrases), descriptive words, and more. The processed text can provide useful features for classification tasks such as sentiment analysis.
We also have SDKs available for Linguistic Analysis.
LUIS: Language Understanding Intelligent Service (LUIS) allows developers to build a model that understands natural language and commands tailored to their application. Example: you can say "turn down the thermostat in the living room," send it to a LUIS model, and instead of just returning the text of what was said, LUIS will return: the action is "turn down," the location is "living room," and the target is "thermostat." LUIS allows developers to iteratively build on these models and take speech or text input and return a structured representation of what the person said. Not only that, but by Build, LUIS will help developers create and train a smart conversational bot (Intercom or Slack) with a single button. LUIS will also offer action fulfillment capabilities through simple integration with webhooks. LUIS works pretty well when it comes to intents; for entities, the learning curve is slower, especially as the number of entities increases. LUIS currently supports only 20 intents and 10 entities, though by Build each entity can have up to 10 children.
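The thermostat example comes back from LUIS as structured JSON, roughly in the shape sketched below. Treat the field and entity-type names here as an illustrative approximation of the wire format, not a spec:

```python
import json

# Approximation of a LUIS response for
# "turn down the thermostat in the living room".
raw = """{
  "query": "turn down the thermostat in the living room",
  "topScoringIntent": {"intent": "TurnDown", "score": 0.95},
  "entities": [
    {"type": "Location", "entity": "living room"},
    {"type": "Device", "entity": "thermostat"}
  ]
}"""

def interpret(response_json):
    """Pull the top intent and the entities out of a LUIS-style response."""
    data = json.loads(response_json)
    entities = {e["type"]: e["entity"] for e in data["entities"]}
    return data["topScoringIntent"]["intent"], entities

intent, entities = interpret(raw)
print(intent, entities)
```

Your app then dispatches on the intent ("TurnDown") and fills in its parameters from the entities, rather than pattern-matching on the raw text.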
Text Analytics API: At the time of publication, this data was not available. Please email Rebecca Duffy, reduffy@microsoft.com, if you would like more information.
First, you have to build your bot. Your bot lives in the cloud and you host it yourself. You write it just like any other web service component, using Node.js or C#, like an ASP.NET Web API component. The Microsoft Bot Builder SDK is open source, so you'll see more languages and web stacks get supported over time. Your bot will have its own logic. [CLICK] But you'll also need conversation logic, using dialogs to model a conversation. The Bot Builder SDK gives you facilities for this, and there are many types of dialogs included, from simple yes/no questions to full Natural Language Understanding with LUIS, one of the APIs provided in Microsoft Cognitive Services.
Access to strong documentation, sample code and community resources is critical for developers to be able to understand and become users of Cognitive Services. Customize these links based on your own resources or use the ones listed here.