Working with Cortana is fun. This presentation is for seasoned developers who want to learn more about integrating Cortana with Windows Phone apps.
Integrating Our Windows Phone Application with Cortana, by Javier Suárez Ruiz
The document contains code snippets for integrating voice commands into a Windows Phone application using Cortana. It includes XML for defining voice commands with examples, feedback and navigation actions. It also includes code for installing the voice command definitions and handling activation from voice commands to navigate to the appropriate page in the app.
The document discusses integrating voice commands into Windows Phone applications. It provides XML code examples for defining voice command sets with commands to search or find content on MSDN based on dictated terms. The commands provide feedback and navigate to a main page. Voice commands were improved between Windows Phone 8.0 and 8.1, allowing more powerful and accurate natural language recognition.
The document describes the three main steps to integrate Cortana voice commands in a Windows Phone app:
1. Create an XML voice command definition file
2. Register the XML file on app startup
3. Handle voice command activation in the app's OnActivated override to navigate to the correct page based on the activated command
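As a sketch of step 1, a minimal voice command definition file might look like the following, based on the document's MSDN search example and the 1.1 schema (the command set name, prefix, and phrases are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
  <CommandSet xml:lang="en-us" Name="MsdnCommandSet">
    <!-- The prefix users speak before any command, e.g. "MSDN, find speech" -->
    <CommandPrefix>MSDN</CommandPrefix>
    <Example>find speech recognition</Example>

    <Command Name="FindText">
      <Example>find speech recognition</Example>
      <!-- {dictatedSearchTerms} is filled from the PhraseTopic below -->
      <ListenFor>find {dictatedSearchTerms}</ListenFor>
      <Feedback>Searching MSDN...</Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>

    <!-- Open-ended dictation slot for the search terms -->
    <PhraseTopic Label="dictatedSearchTerms" Scenario="Search" />
  </CommandSet>
</VoiceCommands>
```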
The document provides information on user-centered design and website usability. It explains that it is important to understand users' needs, goals, and behaviors in order to create a simple, efficient, and satisfying interface. It also describes key design steps, such as organizing content, structuring navigation, optimizing visual presentation, and ensuring accessibility.
Lucas G., Santiago G., Juan G., Ramiro Z., Isaac R. (beronyk)
Keawe, a brave and educated man, finds the house of his dreams. A mysterious man offers him a glass bottle containing a demon. Later, Keawe contracts leprosy and decides to travel to recover the bottle and cure himself, but he suffers several setbacks that prevent him from recovering the bottle and his health.
Professional Portfolio, Oihane Santiuste, May 2010 (Oihane Car)
The document presents the author's professional experience at several architecture studios between 2006 and 2009. It includes design, development, and management projects for offices, sports centers, airports, and official subsidized housing in Spain, the United Kingdom, and Portugal.
The Battle of Trafalgar resulted in a British victory. The Franco-Spanish fleet was poorly led and its crews were inexperienced, while the British fleet, under the command of Admiral Nelson, was experienced and better equipped. The British cut through the center of the enemy's arc-shaped formation, enveloping it. Despite Spanish bravery, the combined fleet's lack of leadership and experience led to its defeat.
This document describes the evolution of information and communication technologies (ICT) throughout history. It begins with the earliest manifestations of human thought in the Stone Age and continues to the present day, where ICT constitutes the set of resources needed to manipulate information. It also compares the printing press with the Internet, noting that both revolutionized access to knowledge. Finally, it analyzes the impact of ICT on education and how this has given way to a change in the traditional educational model.
Arma virumque cano (notes on how and why to read the Aeneid), Deletrea
The Aeneid was commissioned from Virgil by the emperor Augustus as a Roman national poem, intended to exalt the values of the Empire and give the imperial family a divine origin. Virgil devoted the last 11 years of his life to writing the Aeneid, divided into 12 books that follow the structure of Homer's Odyssey and Iliad. It narrates the adventures of Aeneas after the fall of Troy and his arrival in Italy, where he founds a new Troy and gives origin to the Roman people.
Bile is a watery secretion produced by the liver that helps digest fats and absorb vitamins. It contains water, bile salts, bilirubin, cholesterol, and fatty acids. Its functions include emulsifying fats in the intestine, eliminating waste, and regulating cholesterol. It is stored in the gallbladder and released into the duodenum when food is ingested.
The document presents 25 photographs of festive attire from past eras, when the colors white and red, now symbols of celebration in many cultures, were not obligatory.
The document presents seven reasons to remember the municipality of Malambo, Colombia. Each reason is represented by a flower of a different color and includes: the municipality's location beside the Río Grande de la Magdalena (blue flower); the tradition of the indigenous people who arrived before 1120 BC (golden flower); the contribution of people from Malambo to the independence of Cartagena (red flower); the cultural diversity of the municipality's people (yellow flower); and its participation in the Carnival.
Police announced the arrest of a 59-year-old man on new sexual offense allegations linked to the Jimmy Savile sexual abuse scandal.
This document discusses agile development practices and how communities can grow through sharing stories. It provides an overview of agile practices like iterations, standups, retrospectives, and continuous integration. The document suggests forming communities through chartering and using tools like personas and story maps. It asks the reader to reflect on their journey and practice beginner's mind to avoid expertise traps. The overall message is that agile methods are tools to deliver value iteratively while connecting as a community in discovery and growth.
Here is a presentation in which I talk about the world of the waffle: its history, its varieties, recommended places, and a recipe for learning to prepare them very quickly and easily.
The document analyzes the opportunities the crisis offers companies through greater empathy with customers. It suggests that companies focus on caring for customers, anticipating their needs, and surprising them. It also proposes several ideas, such as offering free versions of products, making price comparison easier, and providing affordable options so consumers can spend less or have fun at home.
Penn State Cooperative Extension conducted a survey of 940 school districts to elicit feedback on how Marcellus shale gas drilling is affecting their students and their schools.
The document summarizes the celebrations of the 250th anniversary of the founding of the Royal College of Artillery in Segovia. King Juan Carlos I presided over the central ceremony at the Alcázar of Segovia, where a gold medal was presented to the director of the Artillery Academy. The academy is considered the oldest military teaching institution in the world and has played an important role in the modernization of the Spanish army and in scientific and technological development.
Call Las Vegas Realtor Jeff Mix at 702-510-9625.
Here is a CMA I prepared for 2124 Lipari Ct, Las Vegas, NV 89123, which is currently listed at $168,800, or $88 per sq ft.
Sales of similarly sized homes over the last 90 days have ranged from $105,000 to $188,000, or $55 to $87 per sq ft.
Rents for similarly sized homes over the last 6 months range from $1,350 to $1,650 per month.
This document summarizes lessons learned from holiday email marketing strategies in 2012. It discusses performance of Black Friday and Cyber Monday emails, the success of a sweepstakes campaign in acquiring new subscribers, and ideas for additional holiday email types. Key takeaways include the importance of re-engaging past customers, testing multiple subject lines, and capitalizing on post-holiday sales. Data from 2012 showed benefits of re-mailing on key days and planning extensions when deals are expiring.
This document describes various abbreviated and alternative ways of writing, including chat language, emoticons, and leet speak. It explains that these forms make communication easier by saving time and money. It also identifies some advantages, such as speed and creativity, but points out possible disadvantages, such as the loss of correct spelling and of formal language.
The document introduces Alvaro Hervas, an 11-year-old karting driver. In the 2009 season he will compete with the Geruco Motorsport team in several national and international championships. The document also describes the sponsorship opportunities available to companies wishing to support Alvaro's career.
The document presents commercial opportunities in the Canadian market. Canada's population is concentrated near the US border, with English and French as official languages. Canadian GDP per capita is 7-8 times higher than Peru's. Opportunities stand out in the agricultural, fishing, and textile sectors, with products such as wine, coffee, fruit, and apparel. There are also possibilities in educational software and franchise services.
The document discusses key findings from a 2011 global study on employer branding and social media. Some of the main findings include: 84% of companies believe a clearly defined employer branding strategy is key; 71% of employees say obtaining an adequate budget is the number one challenge in managing an employer brand; and 59% of companies leverage their career website to communicate their employer brand. The document also discusses the importance of defining an employer value proposition, using a hybrid team approach to managing employer branding, and ensuring consistency between internal and external marketing communications.
An introduction to the API for OnTime for IBM (ontimesuite)
Presentation from the OnTime for IBM API workshop in Shinjuku, Tokyo, Japan on Thursday 19 November 2015. Please contact OnTime support either in Denmark or Japan for more information.
The document describes a five-day bootcamp on quantitative, algorithmic trading and high frequency trading to be held in Milan from May 11-15, 2015. The bootcamp will cover various trading strategies including dispersion trading, trend following, statistical arbitrage, artificial intelligence based strategies, portfolio optimization, market making strategies, and high frequency trading. Topics will include modeling, backtesting, risk management, and coding. Speakers include professors and professionals from hedge funds and trading firms. Attendees will include quantitative analysts, traders, researchers and others interested in breaking into quantitative trading.
An introduction to social media for B2B companies:
- 4 myths to debunk
- 5 good reasons for B2B companies
- the reasons for saying no: when approaching social media is not worthwhile
Beyond Cortana & Siri: Using Speech Recognition & Speech Synthesis for the Ne... (Nick Landry)
Our society has a problem. Individuals are hooked on apps, phones, tablets and social networking. We created these devices and these apps that have become a core part of our lives but we stopped short. We failed to recognize some of the problematic situations where our apps are used. People are texting, emailing and chatting while driving. Pedestrians walk into busy intersections and into sidewalk hazards because they refuse to put their phone down. We cannot entirely blame them. We created a mobile revolution, and now we can’t simply ask them to put it on hold when it’s not convenient. It’s almost an addiction and too often it has led to fatal results.
Furthermore, mobile applications are not always easy to work with due to the small screen and on-screen keyboard. Other people struggle to use traditional computing devices due to handicaps. Using our voice is a natural form of communication amongst humans. Ever since 2001: A Space Odyssey, we’ve been dreaming of computers who can converse with us like HAL9000 or the Star Trek computers. Or maybe you’re part of the new generation of geeks dreaming of Halo’s Cortana? Thanks to the new advances and SDKs for speech recognition and synthesis (aka text-to-speech), we are now several steps closer to this reality. Siri is not the end game, she’s the beginning.
This session explores the design models and development techniques you can use to add voice recognition to your mobile applications, including in-app commands, standard & custom grammars, and voice commands usable outside your app. We’ll also see how your apps can respond to the user via speech synthesis, opening up a new world of hands-free scenarios. This reality is here: you’ll see actual live cross-platform demos with speech, and you can now learn how to do it. Speech support is not just cool or a convenience; it should be a necessity in many apps.
This document provides code examples for adding and handling voice commands in Windows Phone apps. It shows how to install voice command sets from an XML file, navigate to different pages based on the recognized command, and access semantics from the recognition result to handle commands with parameters.
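The install and semantics code described here can be sketched as follows, using the Windows Phone 8.1 WinRT voice command APIs; the file name `VoiceCommandDefinition.xml` and the `dictatedSearchTerms` label are assumptions matching a typical MSDN-style VCD file:

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;
using Windows.Storage;

public static class VoiceCommandSetup
{
    // Call this once on app startup so Cortana learns the commands.
    public static async Task InstallCommandsAsync()
    {
        // The XML file name is an assumption; ship it as app content.
        StorageFile vcdFile = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///VoiceCommandDefinition.xml"));
        await VoiceCommandManager.InstallCommandSetsFromStorageFileAsync(vcdFile);
    }

    // Pull the dictated search terms out of a recognition result.
    public static string GetSearchTerms(SpeechRecognitionResult result)
    {
        // SemanticInterpretation maps PhraseTopic/PhraseList labels to values.
        return result.SemanticInterpretation.Properties.ContainsKey("dictatedSearchTerms")
            ? result.SemanticInterpretation.Properties["dictatedSearchTerms"][0]
            : string.Empty;
    }
}
```

This requires the phone runtime, so it is a sketch rather than a drop-in sample.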
The document provides instructions for integrating Cortana voice commands into an app in three steps:
1. Create a voice command definition (VCD) XML file that defines the voice commands and their corresponding actions.
2. Register the VCD XML file on app startup so that Cortana recognizes the new commands.
3. Handle the voice command activation by getting the triggered command details from Cortana and executing the associated code.
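Step 3 above can be sketched as follows for a Windows Phone 8.1 WinRT app; the command and page names are illustrative assumptions:

```csharp
using Windows.ApplicationModel.Activation;
using Windows.Media.SpeechRecognition;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class App : Application
{
    protected override void OnActivated(IActivatedEventArgs args)
    {
        if (args.Kind == ActivationKind.VoiceCommand)
        {
            var voiceArgs = (VoiceCommandActivatedEventArgs)args;
            SpeechRecognitionResult result = voiceArgs.Result;

            // RulePath[0] holds the Name of the triggered <Command>.
            string commandName = result.RulePath[0];

            var rootFrame = Window.Current.Content as Frame;
            switch (commandName)
            {
                case "FindText": // illustrative command name
                    rootFrame.Navigate(typeof(MainPage), result.Text);
                    break;
                default:
                    rootFrame.Navigate(typeof(MainPage));
                    break;
            }
            Window.Current.Activate();
        }
    }
}
```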
The document discusses speech capabilities on Windows Phone 8, including speech synthesis and voice commands. Speech synthesis allows applications to speak text using the default voice or a selected language voice. Voice commands allow applications to be launched or controlled through spoken phrases defined in a voice command definition file. The file defines commands, example phrases, and which application pages to open in response to voice input.
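The synthesis behavior described above can be sketched with the Windows Phone 8 speech API; the spoken text and the Spanish voice selection are illustrative:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Windows.Phone.Speech.Synthesis;

public class Speaker
{
    public async Task SayAsync()
    {
        var synth = new SpeechSynthesizer();

        // Speak with the phone's default voice.
        await synth.SpeakTextAsync("Hello from Windows Phone");

        // Or pick an installed voice for a specific language.
        var spanish = InstalledVoices.All
            .FirstOrDefault(v => v.Language.StartsWith("es"));
        if (spanish != null)
        {
            synth.SetVoice(spanish);
            await synth.SpeakTextAsync("Hola desde Windows Phone");
        }
    }
}
```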
Voice features on Windows Phone: integrate your application with Cortana! (Microsoft)
Cortana has taken French lessons! She is now available in an alpha version on Windows Phone. Take advantage of this launch to update your applications and integrate them easily with Cortana. On the program: Cortana, voice features (commands, recognition, synthesis), and contextual awareness. In short, give your applications a voice!
Developing with Speech and Voice Recognition in Mobile Apps (Nick Landry)
Can you hear me now? Move over Siri, here comes an army of speech-enabled mobile applications on Windows Phone. Mobile applications are not always easy to work with due to the small screen and small on-screen keyboard. Using our voice is a natural form of communication amongst humans, and ever since 2001: A Space Odyssey, we’ve been dreaming of computers who can converse with us like HAL9000. Or maybe you’re part of the new generation of geeks dreaming of Cortana? Thanks to the new Microsoft SDKs for voice recognition and speech synthesis (aka text-to-speech), we are now several steps closer to this reality. This session explores the development techniques you can use to add voice recognition to your Windows Phone applications, including in-app commands, standard & custom grammars, and voice commands usable outside your app. We’ll also see how your apps can respond to the user via speech synthesis, opening up a new world of hands-free scenarios. This reality is here: you’ll see actual live demos with speech, and you can now learn how to do it.
This document discusses speech capabilities for Windows Phone 8 applications, including:
1) Voice commands that allow users to trigger actions like navigation through spoken commands.
2) Speech recognition that enables natural interaction through grammar-based recognition of user speech.
3) Text-to-speech (TTS) that outputs synthesized speech to provide spoken instructions to users.
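Point 2 above, grammar-based recognition, can be sketched with the Windows Phone 8 SpeechRecognizerUI class; the list-grammar key and phrases are illustrative:

```csharp
using System.Threading.Tasks;
using Windows.Phone.Speech.Recognition;

public class ColorListener
{
    public async Task<string> ListenAsync()
    {
        var recoUI = new SpeechRecognizerUI();

        // Constrain recognition to a small list grammar.
        recoUI.Recognizer.Grammars.AddGrammarFromList(
            "colors", new[] { "red", "green", "blue" });

        SpeechRecognitionUIResult result = await recoUI.RecognizeWithUIAsync();

        // Return the recognized phrase, or null if the user cancelled.
        return result.ResultStatus == SpeechRecognitionUIStatus.Succeeded
            ? result.RecognitionResult.Text
            : null;
    }
}
```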
This document is a slide deck presentation about administering SharePoint 2010 with Windows PowerShell. The presentation introduces the SharePoint Management Shell and demonstrates common administrative tasks like managing permissions, sites, servers, and web applications using PowerShell cmdlets. It encourages attendees to think of tasks they want to automate and provides an overview of supported filters and limits for retrieving objects. The presentation concludes with a Q&A section and information on obtaining additional resources.
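The kinds of administrative tasks the deck demonstrates might look like this sketch, using standard SharePoint 2010 cmdlets (the URLs and account names are placeholder assumptions):

```powershell
# Load the SharePoint snap-in (needed outside the Management Shell)
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List all site collections in a web application (URL is a placeholder)
Get-SPSite -WebApplication "http://intranet.contoso.com" -Limit All |
    Select-Object Url, Owner

# Retrieve a single web and add a user to it (names are placeholders)
$web = Get-SPWeb "http://intranet.contoso.com/sites/team"
New-SPUser -UserAlias "CONTOSO\jdoe" -Web $web
$web.Dispose()
```

Note the `-Limit All` filter, which matches the deck's point about limits on object retrieval.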
KE User Group 2011 showcase: WordPress integration (Paul Trafford)
The document discusses integrating a collection from the EMu database system into a WordPress website. Key points:
- The Museum of the History of Science installed WordPress and customized a theme to access their EMu collection through the EMu PHP API.
- This allows searching the collection and displaying search results on WordPress pages through shortcodes. It provides a consistent interface and closer integration between collections and other site content.
- While the integration was successful, it required significant coding and has some limitations. The museum plans to further develop functionality and also investigate alternatives like the IMu database system.
This document provides an overview of Selenium, an open source web application testing framework that can run tests across different browsers and platforms. Selenium allows simulating user interactions like navigating web pages and making assertions to validate pages. Tests can be recorded using the Selenium IDE Firefox plugin and played back with assertions. A test suite is an HTML file containing links to individual test case files. The Selenium IDE test runner allows executing the test suite and controlling the test execution speed and log output.
SELENIUM COURSE CONTENT:
Course Description
Within fast-moving agile software development teams, it becomes important to test user interfaces as they are being coded. Automated testing techniques using Selenium 2 allow important features to be replayed as development progresses. Selenium IDE and Selenium WebDriver are important tools for any tester or developer to use in ensuring software quality and making changes with confidence. This interactive, hands-on workshop teaches both the fundamentals and advanced techniques of Selenium 2, with hands-on practice. The practice exercises are tailored to various skill levels and types of application being tested, from simple forms to complex web applications.
Objectives:
The class will teach participants to:
Understand trade-offs of automated vs. manual testing.
Record, edit and play back Selenium IDE tests against multiple types of web applications.
Minimize test failure due to normal changes to code.
Understand basic Selenium commands for working through common issues with web applications.
Use of Eclipse to run tests individually and as a group to generate test failure reports.
Learn how to help developers understand the importance of making applications more testable to improve usability.
Topics:
Overview of automated testing
Selenium Suite Overview
Selenium 2 Limitations
Selenium IDE
HTML Locator strategy and false test failure
Firefox Firebug and reading HTML
Selenium Web Driver setup
Eclipse and JUnit (don't panic, it's just code)
Convert Selenium IDE tests into Selenium 2 Java WebDriver tests
Working with unsupported commands from Selenium IDE
Dealing with security and certificates
Selenium Web Driver practice workshop
Learn how to test in multiple browsers and generate metrics and reports
Discussion of setting up Selenium Web Driver for continuous integration
Bonus Features:
Sample Selenium Web Driver code
Scripts to run JUnit test suites on multiple browsers and generate reports and metrics
List of web resources and blogs for reference
Laminated CSS selector cheat sheet
Laminated Selenium 2 command reference
Best Practices for Embedded UA - WritersUA 2012, Scott DeLoach, ClickStart
1) The document provides best practices for writing embedded user assistance (UA) including using an informal friendly writing style, integrating content from other sources, allowing user feedback, customization, and learning.
2) It demonstrates HTML5 techniques for UA like adding subtitles to videos, editing content, and saving user-provided content using local storage.
3) Forms guidelines are discussed like requiring input, validating formats, and spellchecking. Examples of applications and websites using these techniques are provided.
The document discusses the Android search framework and how applications can integrate with it. Key points:
- The search framework provides a customizable search dialog that applications do not have to build themselves. It handles passing search queries to applications.
- Applications need to create a searchable configuration XML, declare a searchable Activity, and implement searching and displaying results when a query is received.
- The search dialog can be invoked via the device search key or a button. Context data can be passed to refine searches.
- Voice search and recent query suggestions can be added. Suggestions require implementing a content provider to store and provide suggestion data.
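The searchable configuration step described above can be sketched as a minimal `res/xml/searchable.xml`. This is a rough sketch under stated assumptions: the string resource names are illustrative, and the voice-search flags shown are just one common combination.

```xml
<!-- res/xml/searchable.xml: minimal searchable configuration (illustrative) -->
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="@string/search_hint"
    android:voiceSearchMode="showVoiceSearchButton|launchRecognizer" />
```

The searchable Activity is then declared in the manifest with an intent filter for `android.intent.action.SEARCH` and a `<meta-data android:name="android.app.searchable">` entry pointing at this file; the framework delivers the query to that Activity in the `SearchManager.QUERY` intent extra.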
The document discusses tips and challenges for optimizing search experiences in SharePoint. It recommends using the Search Center template to create a dedicated search site, indexing external content from databases and applications, and customizing search results and properties. Troubleshooting tips include checking crawl logs, validating IFilters, mapping crawled to managed properties, and using scopes and synonyms to refine searches.
The document discusses various topics related to Spring Boot including Spring Data REST, CSRF protection, and Cloud Foundry integration. It provides code examples for exposing repositories as REST APIs with Spring Data REST, handling errors and exceptions, and securing applications with CSRF tokens. It also briefly mentions Spring Boot features like configuration properties and the Actuator.
Azure Service Fabric for building microservice-based applications: a comparison of a monolithic application with a cloud-based microservice application, and hosting in cloud containers like Docker.
C# is being continually updated with new features. C# 6.0 introduced many new features such as auto-property initializers, getter-only auto-properties, and null propagation. C# 7.0 may include additional features like binary literals, nested methods, and pattern matching. The .NET Compiler Platform makes it easier for Microsoft to improve and extend the C# language over time.
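The C# 6.0 features named above fit in a few lines; the following is a minimal sketch (the `Person` type is a made-up example, not taken from the document):

```csharp
public class Person
{
    // C# 6.0 auto-property initializer: the property starts with a default value.
    public string Name { get; set; } = "unknown";

    // C# 6.0 getter-only auto-property: assignable only from the constructor.
    public int Age { get; }

    public Person(int age) { Age = age; }
}

public static class Demo
{
    public static void Main()
    {
        Person p = null;
        // C# 6.0 null propagation: evaluates to null instead of throwing
        // a NullReferenceException when p is null.
        int? age = p?.Age;
        System.Console.WriteLine(age.HasValue); // False
    }
}
```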
In this presentation, I showed how to build an AngularJS application with a single-page application (SPA) architecture in mind, using the available concepts to create versatile, creative web applications with less boilerplate JavaScript code.
This document discusses streaming data and event processing in the cloud. It notes that event data is already distributed globally and stored in the cloud, so processing should be done by bringing the computation to the data rather than moving the data. It also outlines some key considerations for building a cloud-based event processing solution such as infrastructure provisioning, coding for data ingestion and egress, resilience planning, solution design, and monitoring.
Designing Azure compute and storage infrastructure - Abhishek Sur
How to design compute and storage, description of premium tier machines and demonstration using Iometer to compare two different tier machines comparing cost and performance.
The document outlines the debugging tools available in the Microsoft Edge browser. It describes the DOM Explorer (CTRL + 1), Console (CTRL + 2), Debugger (CTRL + 3), Network (CTRL + 4), Performance (CTRL + 5), Memory (CTRL + 6), Emulator (CTRL + 7), and Experiment (CTRL + 8) tools. Each tool is used for inspecting a different aspect of web pages, like the DOM structure, running JavaScript, network activity, and performance issues. The document details the features available in each debugging tool to help developers test and troubleshoot web pages.
The document provides an overview of Microsoft Azure Mobile Services, including features like structured storage, authentication, backend logic, push notifications, scheduling, and more. It discusses the REST API, JSON to SQL type mappings, auto-generated columns, server-side table scripts, custom APIs, file storage, notification hubs, offline synchronization, the command line interface, and scaling options. Live demos are presented on topics like adding data validation logic, push notifications, authentication, and using the CLI.
The document discusses Windows Azure Pack, which brings key capabilities of Microsoft Azure to an organization's on-premises infrastructure. It allows organizations to build and manage a private cloud using familiar Windows Server and System Center technologies. Windows Azure Pack supports multi-tenant cloud services, virtual networking, automation, and integration with third-party applications and developer tools. It provides a way for enterprises and service providers to offer Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) capabilities on private infrastructure.
Microsoft Azure Hyper-V Recovery Manager overview - Abhishek Sur
This document discusses Windows Azure Hyper-V Recovery Manager, a service that protects virtual machines running in a private cloud by replicating them to a secondary site. It monitors the health of System Center Virtual Machine Manager clouds and orchestrates quick recovery of VMs if an outage occurs at the primary site. The service automates replication using in-box technologies and cloud-based recovery plans. It works by configuring protection through a recovery plan, replicating VMs from Site A to Site B using Hyper-V Replica, and enabling the recovery of services through the orchestrated failover of VMs if Site A fails. Customers can use it to reduce the impact of planned downtime when a secondary site is available.
The document discusses different methods for integrating third party systems with SAP Business One (B1), including DI API, B1 Web Services (B1WS), and DI Server. It provides details on each method, including advantages and limitations. The presenter is Abhishek Sur, Product Head for InSync Solutions, who has expertise in Microsoft technologies and SAP B1 integration.
This document provides tips for improving the performance of ASP.NET applications. It discusses ways to optimize ASP.NET pages by reducing page size, minimizing viewstate, and adding caching. It also recommends optimizing database queries, using asynchronous calls judiciously, and profiling SQL to identify inefficient queries. Configuration tips include enabling compression, removing unnecessary HTTP modules, and setting the application pool start mode to AlwaysRunning.
This document provides an overview of key features of the Windows Presentation Foundation (WPF) including resolution independence, XAML usage, data binding, control templates, graphics and animation support, the MVVM pattern, triggers, data templates, and value converters. WPF allows building visually stunning Windows applications with vector graphics, templates, bindings, and animations while remaining resolution independent. It follows an MVVM pattern to separate user interface from application logic and data access.
This document discusses new features in SQL Server 2012 including Always On, contained databases, columnstore indexes, Visual Studio integration, and TSQL enhancements. It provides details on columnstore indexes, query pagination using new features, windowing functions using the OVER clause, sequences, metadata discovery using new DMVs and stored procedures, enhanced functions, and general TSQL improvements including THROW and extended events.
Dev Days: Visual Studio 2012 Enhancements - Abhishek Sur
This document discusses various enhancements to the user interface in Visual Studio 2012. It describes improvements to Solution Explorer including search capabilities, navigation, filtering, and properties previews. It also outlines enhancements to debugging tools such as setting breakpoints, evaluating hover values, and using tracepoints. The document notes expanded JavaScript and HTML5 IntelliSense as well as improved code analysis, metrics, and cloning tools.
This document discusses key aspects of the .NET infrastructure including user interfaces, services, data access layers, and core components. It covers topics like loops and iterators, delegates and events, generics and extension methods, anonymous types and LINQ, dynamic types, and asynchronous patterns in .NET.
ASP.NET 4.5 introduces strongly typed data controls that are bound to models. This provides compile time checking and navigation support. Data controls now support model binding, which allows selecting, filtering, editing and validating data without needing to write additional code. Validation is supported through data annotations. Custom value providers and validation attributes can also be created to extend model binding functionality.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve... - Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
OpenMetadata Community Meeting - 5th June 2024 - OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
SMS API Integration in Saudi Arabia | Best SMS API Service - Yara Milbes
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
Graspan: A Big Data System for Big Code Analysis - Aftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling Extensions - Peter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️ - Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code - Aftab Hussain
Understanding variable roles in code has been found to be helpful to students learning programming -- could variable roles also help deep neural models perform coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne, Australia.
UI5con 2024 - Keynote: Latest News about UI5 and its Ecosystem - Peter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
Hand-Rolled Applicative User Validation Code Kata - Philip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
GraphSummit Paris - The art of the possible with Graph Technology - Neo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
2. Abhishek Sur
Microsoft MVP in ASP.NET/IIS
Twitter : @abhi2434
Facebook : abhi2434
Email : contact@abhisheksur.com
Presented by
3.
4.
5.
6.
7.
8.
9. <?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
<CommandSet xml:lang="en-us" Name="englishCommands">
<CommandPrefix>MSDN</CommandPrefix>
<Example>How do I add Voice Commands to my application</Example>
<Command Name="FindText">
<Example>Find Install Voice Command Sets</Example>
<ListenFor>Search</ListenFor>
<ListenFor>Search for {dictatedSearchTerms}</ListenFor>
<ListenFor>Find</ListenFor>
<ListenFor>Find {dictatedSearchTerms}</ListenFor>
<Feedback>Search on MSDN</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<Command Name="nlpCommand">
<Example>How do I add Voice Commands to my application</Example>
<ListenFor>{dictatedVoiceCommandText}</ListenFor>
<Feedback>Starting MSDN...</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<PhraseTopic Label="dictatedVoiceCommandText" Scenario="Dictation">
<Subject>MSDN</Subject>
</PhraseTopic>
<PhraseTopic Label="dictatedSearchTerms" Scenario="Search">
<Subject>MSDN</Subject>
</PhraseTopic>
</CommandSet>
</VoiceCommands>
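Later slides cover registering this file and handling activation; as a reference, a Windows Phone 8.1 (WinRT) sketch could look like the following. This is not the deck's own code: the VCD file name, the `rootFrame` field, and the navigation parameter are assumptions for illustration.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.ApplicationModel.Activation;
using Windows.Media.SpeechRecognition;
using Windows.Storage;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class App : Application
{
    Frame rootFrame; // assumed to be set up during launch, as in the default template

    // Install the command sets once at startup, e.g. from OnLaunched.
    async Task InstallVoiceCommandsAsync()
    {
        StorageFile vcd = await Windows.ApplicationModel.Package.Current
            .InstalledLocation.GetFileAsync("VoiceCommandDefinition.xml"); // assumed file name
        await VoiceCommandManager.InstallCommandSetsFromStorageFileAsync(vcd);
    }

    // Route voice activations to the right page.
    protected override void OnActivated(IActivatedEventArgs args)
    {
        if (args.Kind == ActivationKind.VoiceCommand)
        {
            var voiceArgs = (VoiceCommandActivatedEventArgs)args;
            SpeechRecognitionResult result = voiceArgs.Result;

            string commandName = result.RulePath[0]; // "FindText" or "nlpCommand"

            // PhraseTopic captures arrive through the semantic interpretation.
            IReadOnlyList<string> terms;
            result.SemanticInterpretation.Properties
                  .TryGetValue("dictatedSearchTerms", out terms);

            // Matches <Navigate Target="MainPage.xaml" /> in the VCD.
            rootFrame.Navigate(typeof(MainPage), terms != null ? terms[0] : null);
        }
    }
}
```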
10. <?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
<CommandSet xml:lang="en-us" Name="englishCommands">
<CommandPrefix>MSDN</CommandPrefix>
<Example>How do I add Voice Commands to my application</Example>
<Command Name="FindText">
<Example>Find Install Voice Command Sets</Example>
<ListenFor>Search</ListenFor>
<ListenFor>Search for {dictatedSearchTerms}</ListenFor>
<ListenFor>Find</ListenFor>
<ListenFor>Find {dictatedSearchTerms}</ListenFor>
<Feedback>Search on MSDN</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<Command Name="nlpCommand">
<Example>How do I add Voice Commands to my application</Example>
<ListenFor>{dictatedVoiceCommandText}</ListenFor>
<Feedback>Starting MSDN...</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<PhraseTopic Label="dictatedVoiceCommandText" Scenario="Dictation">
<Subject>MSDN</Subject>
</PhraseTopic>
<PhraseTopic Label="dictatedSearchTerms" Scenario="Search">
<Subject>MSDN</Subject>
</PhraseTopic>
</CommandSet>
</VoiceCommands>
<CommandPrefix>MSDN</CommandPrefix>
11. <?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
<CommandSet xml:lang="en-us" Name="englishCommands">
<CommandPrefix>MSDN</CommandPrefix>
<Example>How do I add Voice Commands to my application</Example>
<Command Name="FindText">
<Example>Find Install Voice Command Sets</Example>
<ListenFor>Search</ListenFor>
<ListenFor>Search for {dictatedSearchTerms}</ListenFor>
<ListenFor>Find</ListenFor>
<ListenFor>Find {dictatedSearchTerms}</ListenFor>
<Feedback>Search on MSDN</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<Command Name="nlpCommand">
<Example>How do I add Voice Commands to my application</Example>
<ListenFor>{dictatedVoiceCommandText}</ListenFor>
<Feedback>Starting MSDN...</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<PhraseTopic Label="dictatedVoiceCommandText" Scenario="Dictation">
<Subject>MSDN</Subject>
</PhraseTopic>
<PhraseTopic Label="dictatedSearchTerms" Scenario="Search">
<Subject>MSDN</Subject>
</PhraseTopic>
</CommandSet>
</VoiceCommands>
<Example>How do I add Voice Commands to my application</Example>
12. <?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
<CommandSet xml:lang="en-us" Name="englishCommands">
<CommandPrefix>MSDN</CommandPrefix>
<Example>How do I add Voice Commands to my application</Example>
<Command Name="FindText">
<Example>Find Install Voice Command Sets</Example>
<ListenFor>Search</ListenFor>
<ListenFor>Search for {dictatedSearchTerms}</ListenFor>
<ListenFor>Find</ListenFor>
<ListenFor>Find {dictatedSearchTerms}</ListenFor>
<Feedback>Search on MSDN</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<Command Name="nlpCommand">
<Example>How do I add Voice Commands to my application</Example>
<ListenFor>{dictatedVoiceCommandText}</ListenFor>
<Feedback>Starting MSDN...</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<PhraseTopic Label="dictatedVoiceCommandText" Scenario="Dictation">
<Subject>MSDN</Subject>
</PhraseTopic>
<PhraseTopic Label="dictatedSearchTerms" Scenario="Search">
<Subject>MSDN</Subject>
</PhraseTopic>
</CommandSet>
</VoiceCommands>
<Command Name="FindText">
<Example>Find Install Voice Command Sets</Example>
<ListenFor>Search</ListenFor>
<ListenFor>Search for {dictatedSearchTerms}</ListenFor>
<ListenFor>Find</ListenFor>
<ListenFor>Find {dictatedSearchTerms}</ListenFor>
<Feedback>Search on MSDN</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<Command Name="nlpCommand">
<Example>How do I add Voice Commands to my application</Example>
<ListenFor>{dictatedVoiceCommandText}</ListenFor>
<Feedback>Starting MSDN...</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
13. <?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
<CommandSet xml:lang="en-us" Name="englishCommands">
<CommandPrefix>MSDN</CommandPrefix>
<Example>How do I add Voice Commands to my application</Example>
<Command Name="FindText">
<Example>Find Install Voice Command Sets</Example>
<ListenFor>Search</ListenFor>
<ListenFor>Search for {dictatedSearchTerms}</ListenFor>
<ListenFor>Find</ListenFor>
<ListenFor>Find {dictatedSearchTerms}</ListenFor>
<Feedback>Search on MSDN</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<Command Name="nlpCommand">
<Example>How do I add Voice Commands to my application</Example>
<ListenFor>{dictatedVoiceCommandText}</ListenFor>
<Feedback>Starting MSDN...</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<PhraseTopic Label="dictatedVoiceCommandText" Scenario="Dictation">
<Subject>MSDN</Subject>
</PhraseTopic>
<PhraseTopic Label="dictatedSearchTerms" Scenario="Search">
<Subject>MSDN</Subject>
</PhraseTopic>
</CommandSet>
</VoiceCommands>
<Example>Find Install Voice Command Sets</Example>
14. <?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
<CommandSet xml:lang="en-us" Name="englishCommands">
<CommandPrefix>MSDN</CommandPrefix>
<Example>How do I add Voice Commands to my application</Example>
<Command Name="FindText">
<Example>Find Install Voice Command Sets</Example>
<ListenFor>Search</ListenFor>
<ListenFor>Search for {dictatedSearchTerms}</ListenFor>
<ListenFor>Find</ListenFor>
<ListenFor>Find {dictatedSearchTerms}</ListenFor>
<Feedback>Search on MSDN</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<Command Name="nlpCommand">
<Example>How do I add Voice Commands to my application</Example>
<ListenFor>{dictatedVoiceCommandText}</ListenFor>
<Feedback>Starting MSDN...</Feedback>
<Navigate Target="MainPage.xaml" />
</Command>
<PhraseTopic Label="dictatedVoiceCommandText" Scenario="Dictation">
<Subject>MSDN</Subject>
</PhraseTopic>
<PhraseTopic Label="dictatedSearchTerms" Scenario="Search">
<Subject>MSDN</Subject>
</PhraseTopic>
</CommandSet>
</VoiceCommands>
19. Windows Phone Silverlight Apps on Windows Phone 8.1
private async void InstallVoiceCommands()
{
    // SHOULD BE PERFORMED UNDER TRY/CATCH
    Uri uriVoiceCommands = new Uri("ms-appx:///vcd.xml", UriKind.Absolute);
    await VoiceCommandService.InstallCommandSetsFromFileAsync(uriVoiceCommands);
}
Windows Runtime Apps on Windows Phone 8.1
private async void InstallVoiceCommands()
{
    // SHOULD BE PERFORMED UNDER TRY/CATCH
    Uri uriVoiceCommands = new Uri("ms-appx:///vcd.xml", UriKind.Absolute);
    StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(uriVoiceCommands);
    await VoiceCommandManager.InstallCommandSetsFromStorageFileAsync(file);
}
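The comments above warn that installation should run under try/catch: it fails if the XML file is missing or malformed. A minimal sketch for the Windows Runtime variant (the method name and Debug output are illustrative):

```csharp
private async void InstallVoiceCommands()
{
    try
    {
        Uri uriVoiceCommands = new Uri("ms-appx:///vcd.xml", UriKind.Absolute);
        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(uriVoiceCommands);
        await VoiceCommandManager.InstallCommandSetsFromStorageFileAsync(file);
    }
    catch (Exception ex)
    {
        // Don't crash the app over a bad or missing VCD file
        System.Diagnostics.Debug.WriteLine("VCD install failed: " + ex.Message);
    }
}
```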
21. // Windows Phone Silverlight App, in MainPage.xaml.cs
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    if (e.NavigationMode == System.Windows.Navigation.NavigationMode.New)
    {
        string recoText = null; // What did the user say? e.g. MSDN, "Find Windows Phone Voice Commands"
        NavigationContext.QueryString.TryGetValue("reco", out recoText);
        string voiceCommandName = null; // Which command was recognized in the VCD.XML file? e.g. "FindText"
        NavigationContext.QueryString.TryGetValue("voiceCommandName", out voiceCommandName);
        string searchTerms = null; // What did the user say, for named phrase topic or list "slots"? e.g. "Windows Phone Voice Commands"
        NavigationContext.QueryString.TryGetValue("dictatedSearchTerms", out searchTerms);
        switch (voiceCommandName) // What command launched the app?
        {
            case "FindText":
                HandleFindText(searchTerms);
                break;
            case "nlpCommand":
                HandleNlpCommand(recoText);
                break;
        }
    }
}
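HandleFindText and HandleNlpCommand are app-specific helpers whose bodies the deck does not show. A hypothetical sketch of the first one (SearchTextBox and ExecuteSearch are placeholders):

```csharp
// Hypothetical helper: the deck does not show its body
private void HandleFindText(string searchTerms)
{
    if (string.IsNullOrEmpty(searchTerms))
    {
        // The command matched a ListenFor without the {dictatedSearchTerms} slot,
        // e.g. the user just said "MSDN search"; let the user type the terms
        SearchTextBox.Focus();
        return;
    }
    // Run the app's normal search path with the dictated terms
    ExecuteSearch(searchTerms);
}
```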
26. // Windows Runtime App on Windows Phone 8.1, inside OnActivated override in App class
if (args.Kind == ActivationKind.VoiceCommand)
{
    VoiceCommandActivatedEventArgs vcArgs = (VoiceCommandActivatedEventArgs)args;
    string voiceCommandName = vcArgs.Result.RulePath.First(); // What command launched the app?
    switch (voiceCommandName) // Navigate to right page for the voice command
    {
        case "FindText": // User said "find" or "search"
            rootFrame.Navigate(typeof(MSDN.FindText), vcArgs.Result);
            break;
        case "nlpCommand": // User said something else
            rootFrame.Navigate(typeof(MSDN.NlpCommand), vcArgs.Result);
            break;
    }
}
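The snippet assumes rootFrame already exists, but on voice activation the app may be cold-started. A sketch of the surrounding override, following the standard app-template frame setup (the structure is illustrative):

```csharp
protected override void OnActivated(IActivatedEventArgs args)
{
    // Voice activation can cold-start the app, so ensure a root frame exists
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame == null)
    {
        rootFrame = new Frame();
        Window.Current.Content = rootFrame;
    }
    if (args.Kind == ActivationKind.VoiceCommand)
    {
        VoiceCommandActivatedEventArgs vcArgs = (VoiceCommandActivatedEventArgs)args;
        string voiceCommandName = vcArgs.Result.RulePath.First();
        // ... switch on voiceCommandName and navigate, as on slide 26 ...
    }
    Window.Current.Activate();
}
```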
30. // Windows Runtime App on Windows Phone 8.1, inside OnNavigatedTo in FindText.xaml.cs
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    // Get recognition result from parameter passed in frame.Navigate call
    SpeechRecognitionResult vcResult = e.Parameter as SpeechRecognitionResult;
    if (vcResult != null)
    {
        // What did the user say? e.g. MSDN, "Find Windows Phone Voice Commands"
        string recoText = vcResult.Text;
        // Store the semantics dictionary for later use
        IReadOnlyDictionary<string, IReadOnlyList<string>> semantics = vcResult.SemanticInterpretation.Properties;
        string voiceCommandName = vcResult.RulePath.First();
        if (voiceCommandName == "FindText")
        {
            // What did the user say, for named phrase topic or list "slots"? e.g. "Windows Phone Voice Commands"
            if (semantics.ContainsKey("dictatedSearchTerms"))
            {
                HandleFindTextWithSearchTerms(semantics["dictatedSearchTerms"][0]);
            }
            else
            {
                HandleNoSearchTerms();
            }
        }
        // Else handle other voice commands
    }
    navigationHelper.OnNavigatedTo(e);
}
33. // Windows Runtime App on Windows Phone 8.1, inside OnNavigatedTo in NlpCommand.xaml.cs
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    // Get recognition result from parameter passed in frame.Navigate call
    SpeechRecognitionResult vcResult = e.Parameter as SpeechRecognitionResult;
    // Check for null!
    string commandMode = vcResult.SemanticInterpretation.Properties["commandMode"][0];
    if (commandMode == "voice") // Did the user speak or type the command?
    {
        SpeakText(audioPlayer, String.Format("MSDN app heard you say {0}", vcResult.Text));
        HandleNlpCommand(vcResult);
    }
    else if (commandMode == "text")
    {
        messageTextBox.Text = string.Format("Working on your request \"{0}\"", vcResult.Text);
        HandleNlpCommand(vcResult);
    }
}
37. private void HandleNlpCommand(string recoText, bool actSilently)
{
    string action = null;
    string navigateTo = null;
    string searchFor = null;
    recoText = recoText.ToLower();
    if (recoText.Contains("go to ") || recoText.Contains("goto ") ||
        recoText.Contains("find ") || recoText.Contains("search ") ||
        recoText.Contains("show me "))
    {
        action = "navigate";
        if (recoText.Contains("windows phone dev center"))
        {
            navigateTo = "http://dev.windowsphone.com";
        }
    }
    else if (recoText.Contains("learn how to "))
    {
        action = "find";
        searchFor = recoText.Substring(recoText.IndexOf("learn how to ") + 13);
    }
    else
    {
        action = "find";
        searchFor = recoText;
    }
    switch (action)
    {
        case "find":
            // ...
42. // Windows Phone Silverlight App
// Synthesis
private async void SpeakText(string textToSpeak)
{
    SpeechSynthesizer synthesizer = new SpeechSynthesizer();
    await synthesizer.SpeakTextAsync(textToSpeak);
}
// Recognition
private async Task<SpeechRecognitionUIResult> RecognizeSpeech()
{
    SpeechRecognizerUI recognizer = new SpeechRecognizerUI();
    // One of three Grammar types available
    recognizer.Recognizer.Grammars.AddGrammarFromPredefinedType(
        "key1", SpeechPredefinedGrammar.WebSearch);
    await recognizer.Recognizer.PreloadGrammarsAsync(); // Optional but recommended
    // Put up UI and recognize user's utterance
    SpeechRecognitionUIResult result = await recognizer.RecognizeWithUIAsync();
    return result;
}
// Calling code uses result.RecognitionResult.Text or result.RecognitionResult.Semantics
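Calling code should check that recognition actually succeeded before reading the result, since the user may cancel the UI or may not yet have accepted the speech privacy policy. A brief sketch (ShowSearchResults is a placeholder):

```csharp
SpeechRecognitionUIResult result = await RecognizeSpeech();
if (result.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
{
    ShowSearchResults(result.RecognitionResult.Text);
}
```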
46. // Windows Phone Store App
// Synthesis
<!--MediaElement in xaml file-->
<MediaElement Name="audioPlayer" AutoPlay="True" .../>
// C# code behind
// Function to speak a text string
private async void SpeakText(MediaElement audioPlayer, string textToSpeak)
{
    SpeechSynthesizer synthesizer = new SpeechSynthesizer();
    SpeechSynthesisStream ttsStream = await synthesizer.SynthesizeTextToStreamAsync(textToSpeak);
    audioPlayer.SetSource(ttsStream, ttsStream.ContentType); // This starts the player because AutoPlay="True"
}
49. // Windows Phone Store App
// Recognition
private async Task<SpeechRecognitionResult> RecognizeSpeech()
{
    SpeechRecognizer recognizer = new SpeechRecognizer();
    // One of three Constraint types available
    SpeechRecognitionTopicConstraint topicConstraint
        = new SpeechRecognitionTopicConstraint(SpeechRecognitionScenario.WebSearch, "MSDN");
    recognizer.Constraints.Add(topicConstraint);
    await recognizer.CompileConstraintsAsync(); // Required
    // Put up UI and recognize user's utterance
    SpeechRecognitionResult result = await recognizer.RecognizeWithUIAsync();
    return result;
}
// Calling code uses result.Text or result.SemanticInterpretation
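As in the Silverlight case, calling code should verify the recognition status before using the result; note that here RecognizeWithUIAsync returns a SpeechRecognitionResult directly. A short sketch (ShowSearchResults is a placeholder):

```csharp
SpeechRecognitionResult result = await RecognizeSpeech();
if (result.Status == SpeechRecognitionResultStatus.Success)
{
    ShowSearchResults(result.Text);
}
```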