This document describes the development of a voice-based virtual personal assistant using Google Dialogflow and machine learning. The authors developed an assistant called ERAA using Dialogflow's natural language understanding capabilities. Dialogflow agents contain intents that match user queries to trigger responses. The authors designed a user interface for ERAA using the Flutter platform and integrated it with Dialogflow to handle conversations. They compared Dialogflow to IBM Watson and determined Dialogflow was better for this project due to its ease of maintenance, ability to handle structured data, integration, pricing, and language support. The authors aim to implement ERAA as a smartphone app initially and potentially as a desktop application in the future.
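As a toy illustration of how Dialogflow-style intents map user queries to responses, the sketch below matches a query against per-intent training phrases by simple word overlap. The intent names, phrases, and responses are invented for illustration; a real Dialogflow agent uses trained NLU models, not word overlap.

```python
# Toy Dialogflow-style intent matching: each intent has training phrases;
# the user query is matched to the intent whose phrases share the most
# words with it, and that intent's response is returned.
# (Illustrative only -- real Dialogflow uses trained NLU models.)

INTENTS = {
    "greeting": {
        "phrases": ["hello there", "hi assistant", "good morning"],
        "response": "Hello! How can I help you?",
    },
    "weather": {
        "phrases": ["what is the weather", "will it rain today"],
        "response": "Fetching the weather forecast...",
    },
}

def match_intent(query: str) -> str:
    """Return the response of the best-matching intent, or a fallback."""
    words = set(query.lower().split())
    best, best_score = None, 0
    for name, intent in INTENTS.items():
        score = max(len(words & set(p.split())) for p in intent["phrases"])
        if score > best_score:
            best, best_score = name, score
    if best is None:
        return "Sorry, I didn't understand that."
    return INTENTS[best]["response"]
```

In a real agent the fallback branch corresponds to Dialogflow's built-in fallback intent, which fires when no intent matches the query.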
Online interviews are not new, but in the COVID-19 situation they seem to be the only option. However, assessing a candidate over a video call may not be very effective. An AI-based interview assessment system could prove useful: it would take speech as input and produce a detailed analysis of that speech as output. While most current research focuses only on extracting sentiment or personality from speech, our system aims to extract multiple kinds of information and provide a detailed analysis. The analysis would be a report covering the speaker's confidence level, emotional state, speaking rate, frequently repeated words, and the personality reflected by the speech. An interview panel consists of members focusing on different aspects of a candidate's answer: some assess technical correctness, while others check communication skills. An AI system that reports on the soft-skills side would reduce the interviewers' workload, letting them focus entirely on the technical correctness of the answers, and could eventually save time and resources in an organization's hiring process. The intention behind this system is to assist the interview process and produce an analysis report from the speech input, rather than to give a verdict on the candidate's selection. Thus, the system could be used not only by interviewers but also by candidates: the detailed report serves as feedback for students preparing for interviews, helping them work on their weak points and perform better in subsequent interviews.
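A minimal sketch of the speech-analysis report described above, assuming the speech has already been transcribed by an STT engine and its duration is known. The filler-word list, thresholds, and metric names are illustrative choices, not taken from the system:

```python
from collections import Counter

def analyze_transcript(transcript: str, duration_seconds: float) -> dict:
    """Derive simple interview metrics from an STT transcript:
    speaking rate, filler-word count, and frequently repeated words."""
    words = transcript.lower().split()
    wpm = len(words) / (duration_seconds / 60)          # words per minute
    fillers = {"um", "uh", "like", "basically", "actually"}
    repeated = [w for w, c in Counter(words).most_common()
                if c >= 3 and w not in fillers and len(w) > 3]
    return {
        "words_per_minute": round(wpm, 1),
        "filler_count": sum(1 for w in words if w in fillers),
        "frequently_repeated": repeated[:5],
    }
```

Confidence, emotion, and personality estimation would require trained models on the audio itself; the sketch covers only the transcript-level metrics.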
Assistive Examination System for the Visually Impaired (Editor IJCATR)
This paper presents the design of a voice-enabled examination system that can be used by visually challenged students. The system uses text-to-speech (TTS) and speech-to-text (STT) technology. This web-based academic testing software would provide an interface for blind students, enhancing their educational experience by giving them a tool to take exams. The system would help differently-abled students appear for online tests and come to par with other students. It can also be used by students with learning disabilities, or by anyone who wishes to take an examination in a combined auditory and visual way.
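The exam flow described above can be sketched as a small state machine in which questions are spoken through a TTS hook and recognized answers arrive as text. The `speak` callback stands in for a real TTS engine (for example pyttsx3), and the command vocabulary is an assumption for illustration:

```python
class VoiceExam:
    """Minimal sketch of a voice-driven exam: questions are read aloud
    via a TTS callback and answers arrive as recognized STT text."""

    def __init__(self, questions, speak=print):
        self.questions = questions
        self.answers = {}
        self.current = 0
        self.speak = speak          # TTS hook (placeholder for pyttsx3 etc.)

    def handle_command(self, command: str) -> None:
        """Dispatch a recognized voice command."""
        if command == "repeat":
            self.speak(self.questions[self.current])
        elif command == "next" and self.current + 1 < len(self.questions):
            self.current += 1
            self.speak(self.questions[self.current])
        elif command.startswith("answer "):
            self.answers[self.current] = command[len("answer "):]
            self.speak("Answer recorded.")

exam = VoiceExam(["Q1: What is two plus two?"])
exam.handle_command("answer four")   # records the answer, speaks confirmation
```

A production system would wire `handle_command` to a continuous STT loop and add commands for reviewing and submitting answers.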
Abstract
In this paper we propose a new product in which speech is used to interact with computers. Speech is our most powerful form of communication. The user can give various voice commands to the system, which recognizes them and executes tasks based on the input command. The system thus provides another form of input, apart from the mouse and keyboard, for everyday users, and will be of great assistance to physically challenged users, who can perform by voice all the operations they would normally perform with a mouse and keyboard. The system does, however, require some training from the user so that it understands the user better; training is needed because every person's voice is different, and voices vary considerably between speakers, for example between women and men. More training results in faster and more accurate responses. Extensive experiments were conducted to check the accuracy and efficiency of the system, and the results show that it is reliable and achieves better accuracy and efficiency than previous systems.
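A hedged sketch of the command-dispatch idea described above: recognized speech is matched against a table of key phrases, each mapped to an action. The phrases and actions here are invented placeholders; a real system would invoke OS APIs rather than return strings:

```python
# Hypothetical command table mapping spoken key phrases to desktop actions.
# The actions return strings for illustration; real handlers would call
# OS APIs (launch processes, adjust volume, etc.).
COMMANDS = {
    "open browser": lambda: "launching browser",
    "volume up": lambda: "increasing volume",
    "shut down": lambda: "shutting down",
}

def execute(spoken_text: str) -> str:
    """Run the first command whose key phrase appears in the utterance."""
    text = spoken_text.lower()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action()
    return "command not recognized"
```

The per-user training the abstract mentions would, in practice, live inside the speech recognizer that produces `spoken_text`, not in this dispatch layer.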
Keywords: Speech Technology, Voice Response System, Voice User Interface, Voice Recognition.
Advanced Virtual Assistant Based on Speech Processing Oriented Technology on Edge Concept (S.P.O.T) (ijtsrd)
With the advancement of technology, the need for virtual assistants is increasing tremendously, and their development is booming on all platforms; Cortana and Siri are among the best-known examples. We focus on improving the efficiency of a virtual assistant by reducing the response time for a particular action. The primary development approach for any virtual assistant is a simple UI on each platform with the core functionality in the backend, so that it can work in a multi-platform or cross-platform manner by sharing the backend code across platforms. In this paper we try a different research approach: we give computation and processing power to the edge devices themselves so that actions can be performed in a short time. Consider the normal working of a typical virtual assistant: take a command from the user, transfer it to the backend server, analyze it on the server, transfer the action or result back, and finally return a response to the end user. If all of this could be done on a single machine, the response time would be reduced considerably. In this paper we develop a new algorithm that keeps a local database for speech recognition and creates various helper functions to perform actions on the end device. Akhilesh L, "Advanced Virtual Assistant Based on Speech Processing Oriented Technology on Edge Concept (S.P.O.T)", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 4, Issue 6, October 2020. URL: https://www.ijtsrd.com/papers/ijtsrd33289.pdf Paper URL: https://www.ijtsrd.com/computer-science/realtime-computing/33289/advanced-virtual-assistant-based-on-speech-processing-oriented-technology-on-edge-concept-spot/akhilesh-l
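One way to picture the paper's "local database for speech recognition" idea is a small on-device phrase table matched with fuzzy string comparison, so commands resolve without any server round trip. The phrases, action names, and similarity cutoff below are illustrative assumptions, not the paper's actual algorithm:

```python
import difflib

# Hypothetical local phrase database kept on the edge device, so commands
# can be resolved without a round trip to a backend server.
LOCAL_DB = {
    "what time is it": "tell_time",
    "play some music": "play_music",
    "set an alarm": "set_alarm",
}

def resolve_locally(recognized_text: str):
    """Fuzzily match recognized speech against the on-device database;
    return the action name, or None when nothing is close enough."""
    match = difflib.get_close_matches(recognized_text.lower(),
                                      list(LOCAL_DB), n=1, cutoff=0.6)
    return LOCAL_DB[match[0]] if match else None
```

Because no network hop is involved, the latency of this lookup is essentially the string-matching cost, which is the response-time saving the paper argues for.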
This is a voice-based assistant presentation intended to help users with project formation and module descriptions. A voice assistant is a technology based on artificial intelligence: the software uses a device's microphone to receive voice requests, while voice output is played through the speaker, and the most interesting work happens between these two actions.
It is a combination of several different technologies: voice recognition, voice analysis, and language processing.
It is developed entirely in Python, one of the most powerful languages.
Applications
Voice assistant applications have been making lives easier by providing custom services based on voice commands. We help businesses utilize this technology to expand their functionality and streamline their operations efficiently.
We use voice assistant applications to deliver intuitive, automated experiences and build customer engagement.
DASHBOARD REPORTS
We enable you to track your business performance and better understand your data through a dashboard assistant that surfaces valuable insights in real time.
USER-CENTERED SUPPORT
Allow your users to navigate and ask questions with ease. Our in-app voice assistant supports users by responding to their inquiries in real-time.
Scope
The voice assistant application market is projected to grow at 27.3% CAGR during the forecast period of 2021-2026.
A voice assistant is primarily a digital assistant built using AI, machine learning, and voice recognition technologies.
1. Customer satisfaction: when it comes to determining the effectiveness of voice assistants in customer service, client happiness is essential.
2. Completion rate: voice chat helps reduce the number of customer service tickets.
3. Return on investment
A virtual voice assistant is a software agent that can interpret human speech and respond via synthesized voice. It is a tool for searching, setting reminders, and writing notes simply by speaking. The voice assistant for Windows is used to create voice apps for an intelligent assistant: when the user needs to open another application or search for something, they can use the "open" command; the assistant detects the speech and saves it in a database. The system is designed to take input either from typed commands or from the microphone. Automatic speech recognition (ASR) is the main principle behind an AI-based voice assistant: an ASR system first records the speech, the device creates a wave file from it, and the recognizer then produces the desired output. Users can do many tasks with such assistants: they can ask questions or request particular actions, such as sending an email via Gmail or playing songs. The system is designed so that all the services provided by mobile devices are accessible to the end user through voice commands. These voice assistants are embedded in smartphones or take the form of smart speakers at home, and they communicate with the user in natural language.
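The "record the speech, then create a wave file" step of an ASR front end can be sketched with Python's standard `wave` module; the synthetic tone below merely stands in for microphone samples, and the file name is an illustrative choice:

```python
import wave, struct, math

def record_to_wav(path, samples, rate=16000):
    """Save raw 16-bit mono samples as a WAV file, as an ASR front end
    would do before handing audio to the recognizer."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(2)           # 16-bit samples
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# A dummy 440 Hz tone standing in for 0.1 s of microphone input.
tone = [int(8000 * math.sin(2 * math.pi * 440 * t / 16000))
        for t in range(1600)]
record_to_wav("utterance.wav", tone)
```

The resulting file is what would be passed to a recognizer such as the SpeechRecognition library's `AudioFile` interface in a real pipeline.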
Top 5 Machine Learning Tools for Software Development in 2024 (Polyxer Systems)
Machine learning has been widely used by various industries in 2023. The software development industry can take great advantage of machine learning in 2024 as well.
It has great potential to revolutionize various aspects of software development, including task automation, improved user experience, and easier development and deployment.
Movie Recommender Chatbot Based on Dialogflow (IJECE, IAES)
Currently, the online movie streaming business is growing rapidly, with services such as Netflix, Disney+, Amazon Prime Video, HBO, and Apple TV. A recommender system helps customers find movies that match their wishes. Meanwhile, advances in messaging platform technology have made instant communication easy for many people. Building a movie recommender system on a messaging platform offers particular benefits because people access messaging platforms all the time. In the Indonesian language there are many slang terms that the system must recognize. In this study, we build a chatbot on a messaging platform through which users can interact with the system in natural language (Indonesian) and get recommendations. We use rule-based methods and maximum likelihood for natural language processing (NLP), and content-based filtering for the recommendation process. The recommender interaction is built through a conversation mechanism, forming a conversational recommender system. The chatbot is built using Dialogflow and deployed on Telegram. We evaluate system performance using recommendation accuracy and user satisfaction. The results of the user study indicate that the NLP approach provides a positive experience for users, and the system achieves an accuracy of 83%.
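A toy version of the content-based filtering step described above, using genre sets and Jaccard similarity. The movie titles and genres are invented sample data; the study's actual feature set is not specified here:

```python
# Toy content-based filtering: movies are described by genre sets, and we
# recommend the titles most similar (by Jaccard index) to a liked movie.
MOVIES = {
    "Inception":    {"sci-fi", "thriller"},
    "Interstellar": {"sci-fi", "drama"},
    "Coco":         {"animation", "family"},
}

def recommend(liked: str, k: int = 2):
    """Rank the other movies by genre overlap with the liked title."""
    liked_genres = MOVIES[liked]
    def jaccard(genres):
        return len(genres & liked_genres) / len(genres | liked_genres)
    ranked = sorted((t for t in MOVIES if t != liked),
                    key=lambda t: jaccard(MOVIES[t]), reverse=True)
    return ranked[:k]
```

In the chatbot, Dialogflow would extract the liked title (and any slang variants) from the user's Indonesian utterance before a function like this ranks candidates.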
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
International Journal of Scientific Research in Science and Technology (www.ijsrst.com) | Volume 8 | Issue 3
Dr. Jaydeep Patil et al Int J Sci Res Sci & Technol. May-June-2021, 8 (3) : 06-17
I. INTRODUCTION
In this new age of technology and innovation, the use of artificial intelligence and machine learning has made our lives much easier. These technologies have proved beneficial to society in fields such as education, industry and e-commerce, and one of the most prominent is communication. Ever since the 1960s, when IBM introduced one of the first digital speech recognition tools, the IBM Shoebox, the idea of having a conversation with a computer has seemed futuristic. An Intelligent Virtual Assistant is a digital life assistant designed to offer maximum convenience to the user. It is sophisticated software built around a powerful speech recognition system that takes in an audio signal, converts it to text and performs the required task.
Most virtual assistants rely primarily on voice for communication. In general, speech processing consists of the following components: a Speech-to-Text module that converts speech signals to text; a parser that extracts the semantic context; a dialog manager that determines the system response through machine learning algorithms; an answer generator that produces the system response as text; and a speech synthesizer that converts that text back into a speech signal [11].
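The five-module pipeline above can be sketched end to end. The sketch below is a toy illustration in Python: every module is a mock stand-in (simple string handling rather than real speech processing), and all function names are ours, not those of any actual library.

```python
def speech_to_text(audio: str) -> str:
    """Mock STT: pretend the 'audio' payload is already a transcript."""
    return audio.lower().strip()

def parse(text: str) -> dict:
    """Mock parser: extract a crude semantic frame from the transcript."""
    tokens = text.split()
    question = text.endswith("?") or tokens[0] in ("what", "when", "how")
    return {"tokens": tokens, "is_question": question}

def dialog_manager(frame: dict) -> str:
    """Mock dialog manager: choose a response policy from the frame."""
    return "answer" if frame["is_question"] else "acknowledge"

def generate_answer(policy: str) -> str:
    """Mock answer generator: turn the chosen policy into response text."""
    return {"answer": "Here is what I found.", "acknowledge": "Done."}[policy]

def synthesize(text: str) -> str:
    """Mock TTS: tag the response text as synthesized audio."""
    return f"<audio>{text}</audio>"

def assistant(audio: str) -> str:
    """Chain the five modules end to end, as in the pipeline above."""
    return synthesize(generate_answer(dialog_manager(parse(speech_to_text(audio)))))

print(assistant("What is the weather today?"))  # <audio>Here is what I found.</audio>
print(assistant("Open the calendar"))           # <audio>Done.</audio>
```

A real assistant replaces each mock with a trained model, but the data flow between the five stages is exactly this chain.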
A newcomer building such a system may run into many issues, such as poor recognition accuracy or a lack of robustness in performing operations, and may not always be able to diagnose them. Therefore, in this paper we give an overview that will help the reader understand the methodologies and steps involved in building a Virtual Personal Assistant, taking into consideration the different methodologies, results and limitations published by various researchers.
II. LITERATURE REVIEW
The speech recognition model is one of the most important parts of a virtual assistant. Given the various neural networks that can be used to build a speech recognition system, it was necessary to survey models that provide insight by reporting the accuracy and other characteristics of each model. It was observed that a Convolutional Neural Network (CNN) model achieved higher accuracy, though lower validation accuracy, than a basic neural network, suggesting that CNNs are the better choice for speech recognition systems [1]. A limitation of that model is that other parameters, such as word error rate and system throughput, were not taken into consideration.
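Word Error Rate, one of the parameters mentioned above, is worth making concrete: it is the word-level edit distance (substitutions, deletions and insertions) between a reference transcript and the recognizer's hypothesis, divided by the number of reference words. A minimal sketch using only the Python standard library:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # match or substitute
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") plus one deletion ("app") over 4 words.
print(wer("open the calendar app", "open a calendar"))  # 0.5
```

Note that a WER above 1.0 is possible when the hypothesis inserts many extra words, since the numerator is not bounded by the reference length.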
Various machine learning algorithms are used for speech recognition. Applying Auto-WEKA to a range of algorithms identified Random Forest as the best algorithm for learning the dataset from the training set. However, that survey did not test speech samples containing noise, leaving the scalability and robustness of the models undetermined [3]. A survey of scaling speech recognition using CNNs considered the following metrics for the overall framework: (i) throughput, (ii) real-time factor (RTF) and latency, and (iii) word error rate (WER), which helped in achieving an efficient model; however, the growing number of layers made the implementation difficult [4]. Among other algorithms, Long Short-Term Memory (LSTM) networks are very powerful for speech recognition, and a hybrid of Hidden Markov Models (HMM) and Gaussian Mixture Models (GMM) can give excellent results [5]. In various projects developing a virtual assistant, it was observed that the platform failed to support the languages of several countries, including
China, Japan and India [6]. Another survey provided a detailed study of Recurrent Neural Networks (RNNs) for speech recognition, noting that more research on them is still needed; however, it focused mainly on supervised learning models and gave little attention to unsupervised learning models.
One survey included a detailed comparison of the personal voice-based assistants available on the market, namely Google Assistant, Cortana, Alexa and Siri. It concluded that Google Assistant gave good results in VR and HFI, achieving 60% accuracy; Siri achieved 44% accuracy in VR and HFI, while Cortana's accuracy dropped to close to 30%. Other findings were that Alexa did not cope well with simple questions, whereas Cortana was poor at basic voice recognition [7]. The use of AI-enabled content analysis has also been discussed in a survey paper: such a system can examine the text of leadership speeches and content related to a specific organization. However, only one type of content was analysed, with limited samples and a pre-defined coding scheme [9].
Since this paper also compares IBM Watson and Google Dialogflow, various projects built on these platforms were studied before concluding which platform better suits our project. One project successfully implemented IBM Watson in a health-care application [12], providing a base for building AI applications with Watson. Various other projects used IBM Watson to build systems that processed queries using its built-in Natural Language Processing (NLP) and Natural Language Understanding (NLU) algorithms [8].
Projects successfully implementing Google Dialogflow for an organization were also studied. They provided insight into technologies such as the Google Cloud Platform, the Google Cloud Vision API for integrating detection features into the system, and the Firebase Realtime Database for developing the application. That system secures database access through OAuth authentication. However, most of the actions carried out through Google services required internet connectivity, so the system could not serve queries offline [11]. Another project aimed to design a system for educational purposes using Google Dialogflow; the proposed methodology consists of two main phases, Knowledge Abstraction and Response Generation, and studied the decision-tree model used in implementing Dialogflow [10].
III. IBM WATSON VERSUS GOOGLE
DIALOGFLOW
1. IBM Watson: Benefits and Limitations
In terms of software rankings, IBM Watson leads the industry among Artificial Intelligence (AI) platforms. Watson is a strong choice for businesses looking for a trusted system: by focusing on customer behaviour and drawing on a repository of data and analysis, it gives access to information that informs future customer interactions and engagements. IBM Cloud also provides Watson Discovery services, which include machine learning algorithms, Speech-to-Text and Text-to-Speech modules, AI services, Cloud Functions for integration with the interface, webhooks for connecting to the web, and so on. These services, together with Watson Assistant, enable the user to develop an interactive personal
assistant. Chatbots built with IBM Watson make customers feel as though they are interacting with a real customer representative on the other end of the line, and the platform takes customer and employee behaviour into account for the benefit of the business. Despite these advanced services for developing an application system, IBM Watson does not offer languages other than English, which limits its use to a few regions of the world. The software also proves difficult to maintain and is not capable of processing structured data. The platform takes time to integrate with a business's services, and its pricing is higher than that of other platforms on the market, limiting adoption to organizations that can afford its plans.
2. Google Dialogflow: Benefits and Limitations
The main benefit of Dialogflow is its connection with Google. The machine learning algorithms built into the platform understand natural language, that is, the user's expression, with the help of agents. Each agent contains intents that are matched against the user's expression, and an action is performed in response to the query. The platform not only provides answers to customers but also enables agents to carry out small talk with users. It also provides pre-built templates that developers can use as foundations for their projects.
3. Feature Comparison between IBM Watson and
Google Dialogflow
i. Machine Learning:
Both IBM Watson and Google Dialogflow provide this feature for analysing data. It includes the NLP and NLU algorithms required to understand the natural language of the user's expression given as input to the system. Text input is sent to the NLP module, where it is converted into structured data; speech input is first handled by the Speech-to-Text algorithms the platforms provide, then processed further in the same way.
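As a schematic illustration of what "converting an expression into structured data" means (this is our own toy sketch, not either platform's real API), a raw utterance goes in and a frame of recognizable entities comes out:

```python
import re

def to_structured(expression: str) -> dict:
    """Turn a raw utterance into a structured frame of simple entities.

    The regex patterns here are deliberately crude stand-ins for the
    far richer entity recognition the real platforms perform.
    """
    return {
        "text": expression,
        "emails": re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", expression),
        "dates": re.findall(r"\b\d{4}-\d{2}-\d{2}\b", expression),
    }

frame = to_structured("Email john@example.com about the 2021-06-15 meeting")
print(frame["emails"], frame["dates"])  # ['john@example.com'] ['2021-06-15']
```

Downstream components then operate on the structured frame rather than on the raw string.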
ii. Chatbots:
Both platforms provide chatbots for user interaction with the system, and all of the user's question-and-answer queries are handled there. These chatbots are designed to interact with the user much as a human would, giving users a natural conversational experience.
iii. Maintenance:
When it comes to maintenance of the software platform, Google Dialogflow wins the game. Various organizations and industries found IBM Watson difficult to maintain, whereas Dialogflow proved far easier to maintain.
iv. Handling of Structured Data:
Google Dialogflow is capable of handling structured data, since it converts the user's expression into structured data through its Natural Language Processing module. IBM Watson, however, fails to handle structured data directly, which limits the software's use in many organizations.
v. Services:
IBM provides various services through Watson Discovery, Watson Studio, Speech-to-Text and Text-to-Speech services, machine learning and artificial intelligence modules, deep learning, a language classifier and many more, catering to the need to process and analyse data. Google Dialogflow offers its services in two editions, ES and CX, each capable of supporting the building of an application system. As in IBM, various NLP, ML and AI models analyse the data for further processing and carry out the required action.
vi. Integration and Overall Performance:
In terms of integration, IBM Watson takes time and effort to integrate its services with an organization, which can delay a project. As the amount of data grows, IBM's services remain limited in catering to those needs. Its pricing also turns out to be higher than that of other software platforms on the market, so only organizations that can afford Watson can adopt it.
Google Dialogflow provides the user with its two editions, Dialogflow ES and Dialogflow CX. ES, the standard version, lets the user integrate an application with Dialogflow either through the Fulfillment feature or through the API service, whichever is more convenient; integration, therefore, has never been a problem with Dialogflow. Pricing is free for the ES edition, while the CX edition is paid, with charges varying according to the user's quota and requests. Google Dialogflow promises to handle large amounts of data in both the ES and CX editions. The platform also supports more than 14 languages, whereas IBM Watson offers only English, limiting that platform's use to a few regions across the world.
Thus, both IBM Watson and Google Dialogflow have their fair share of strengths and weaknesses. The choice between them must consider the factors and requirements of the particular project, along with the size of the business and its capacity to purchase the plans, in order to select the software platform best suited to the application.
IV. METHODOLOGY
A. User-Interface
Flutter, an open-source UI software development platform, is used to develop an attractive UI for the application. The various packages and graphics libraries the platform provides allow the application to operate quickly, and it promises a polished application interface regardless of the operating platform. Because Flutter enables the developer to create cross-platform applications with ease, the need to develop separate applications for Android and iOS is eliminated.
B. Dialog Manager
The back end includes the development of the application's dialog manager. To give the application a Natural Language Understanding platform, we used Google Dialogflow's ES (standard) edition, which handles all conversations and actions given by the user. It provides the following features:
Dialogflow Agent:
The agent is responsible for handling user conversations: it converts a voice command given by the user into text, and a text command into structured data the application can understand. Each agent consists of intents and entities.
Intents:
Intent matching pairs the user expression obtained in the previous step with the best intent in the agent; this is also known as Intent Classification. Figure 1 shows an example of a Weather agent that contains a forecast intent.
Figure 1: Intent Classification
A basic Intent contains the following:
1. Training Phrases:
These are sample phrases for what end-users might
say.
2. Action:
It triggers certain actions for each intent when
activated.
3. Parameters:
When an intent is matched at runtime, Dialogflow
provides the extracted values from the end-user
expression as Parameters. Each parameter has a type,
called the Entity Type, which dictates exactly how
the data is extracted.
4. Response:
It provides the user with the responses for their
queries.
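The pieces above can be combined into a toy intent classifier. The sketch below is only a stand-in for illustration: Dialogflow's real matcher is a trained ML model, whereas here each intent simply keeps its training phrases and canned response, and the expression is matched to the intent with the greatest word overlap (the Weather/forecast example mirrors Figure 1).

```python
import re

# Each intent keeps its training phrases and a canned response
# (toy data of our own, mirroring the structure described above).
INTENTS = {
    "forecast": {
        "training_phrases": ["what is the weather", "weather forecast for tomorrow"],
        "response": "Fetching the forecast...",
    },
    "greeting": {
        "training_phrases": ["hello", "hi there", "good morning"],
        "response": "Hello! How can I help?",
    },
}

def classify(expression: str) -> str:
    """Match the expression to the intent with the largest word overlap."""
    words = set(re.findall(r"[a-z]+", expression.lower()))
    def score(name: str) -> int:
        phrases = INTENTS[name]["training_phrases"]
        return max(len(words & set(p.split())) for p in phrases)
    return max(INTENTS, key=score)

matched = classify("What will the weather be tomorrow?")
print(matched, "->", INTENTS[matched]["response"])  # forecast -> Fetching the forecast...
```

Adding a new intent is just adding a new entry with its training phrases, which is essentially what the Dialogflow Console does behind its UI.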
Figure 2 shows the basic flow for intent matching
and responding to the end-user.
Figure 2 : Intent Matching
Entities:
Dialogflow provides pre-defined system entities for matching dates, times, email addresses and so on. Entities can also be user-defined, depending on the type of data the system application handles.
User Interactions with the API
To interact with the Dialogflow API service directly, code must be written for the interaction. Figure 3 shows the processing flow when interacting with the API service.
Figure 3: Interaction with the API
1. The end-user types or speaks an expression.
2. Your service then sends this end-user expression to
Dialogflow in a detect intent request message.
3. Dialogflow sends a detect intent response message
to your service. This message contains information
about the matched intent, the action, the parameters,
and the response defined for the intent.
4. Your service performs actions as needed, like
database queries or external API calls.
5. Your service sends a response to the end-user.
6. The end-user sees or hears the response.
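The six-step exchange can be mocked with plain dictionaries so the message shapes are visible. The field names below loosely mirror Dialogflow's detect intent request and response, but this is a schematic Python mock of our own, not the real google-cloud-dialogflow client.

```python
def make_detect_intent_request(session: str, text: str) -> dict:
    """Step 2: the service wraps the end-user expression in a request."""
    return {"session": session,
            "query_input": {"text": {"text": text, "language_code": "en"}}}

def mock_dialogflow(request: dict) -> dict:
    """Step 3: Dialogflow replies with the matched intent, action,
    parameters, and the response defined for that intent (mocked here)."""
    return {"query_result": {
        "intent": "weather.forecast",
        "action": "get_forecast",
        "parameters": {"date": "tomorrow"},
        "fulfillment_text": "Looking up tomorrow's forecast.",
    }}

def handle(session: str, user_text: str) -> str:
    """Steps 1-6: round-trip from end-user expression to response text."""
    request = make_detect_intent_request(session, user_text)  # step 2
    response = mock_dialogflow(request)                       # step 3
    # Step 4 would run database queries or external API calls here.
    return response["query_result"]["fulfillment_text"]       # steps 5-6

print(handle("sessions/demo", "What's the weather tomorrow?"))
```

In a real integration, `mock_dialogflow` is replaced by a call to the Dialogflow API, but the request/response shape of the exchange stays the same.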
The system's Text-to-Speech feature is built with 'flutter_tts', a Text-to-Speech package provided for Flutter. It delivers answers to the user's queries in audio format, enhancing the usability of the application.
Dialogflow Console
The Dialogflow Console is a web user interface that enables us to create, build and test agents. For advanced scenarios, agents can also be built through the Dialogflow APIs.
V. IMPLEMENTATION
After implementing the user interface, our main task was to develop a dialog manager that could handle all user commands, whether voice or text, and perform the required task. Google Dialogflow, being an NLU platform, was used for this purpose. The Dialogflow console is shown in Figure 4 below.
Figure 4: Google Dialogflow Console
With the help of the intents and entities in each dialog agent, user expressions were matched and responses were produced. Figures 5 and 6 below show the various intents and entities included in our application.
Figure 5: Intents and Entities
Figure 6: Intents and Entities
Then, to check whether the conversations were handled correctly, we tested the entire dialog manager through the testing console available in Google Dialogflow, as shown in Figure 7.
Figure 7: Testing
Speech Recognition
The input to the application can be given either as text or as voice, whichever is more convenient for the user. Text commands were automatically trained and tested by the natural language platform built into Google Dialogflow; thus the application's Natural Language Processing (NLP) was taken care of by Dialogflow. For voice commands, Dialogflow also possesses a built-in Speech-to-Text API that uses various machine learning and neural network algorithms to extract text from speech even in noisy environments. Speech recognition was therefore an added feature of ERAA, making it more convenient for users during heavy workloads.
Other Features
The feature of opening device applications requires access permissions, which were handled by the 'permission_handler' plugin offered for Flutter. This plugin provides a cross-platform API to request and check permissions for the other applications present on the device, enabling ERAA to handle such requests from the user.
Object Detection Feature
Object detection is based entirely on a Convolutional Neural Network (CNN) model, which can detect objects with high confidence. The model was trained on a variety of images, and the network was made highly dense to achieve accurate results. Figure 8 below shows how the CNN detects objects. It comprises several layers, namely convolutional layers and max pooling layers. Each layer extracts the most detectable features from the image and converts them into a vector; a fully connected layer is then built over these vectors, and the image is detected based on the training set. This neural
network, consisting of deep layers within itself, is then able to detect images when the test set is applied.
Figure 8: Object Detection with the help of CNN
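The two layer types named above can be made concrete with a minimal, standard-library-only Python sketch: a convolution slides a small filter over the image to produce a feature map, and 2x2 max pooling downsamples that map. The 4x4 image and the hand-picked edge-like kernel are our own toy values; a real detector (including the CNN used in ERAA) stacks many such layers with learned filters.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2: keep the strongest activation per patch."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# Toy 4x4 "image" with bright/dark blocks, and a 2x2 filter that
# responds to vertical bright-to-dark edges (hand-picked, not learned).
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]

fmap = conv2d(image, kernel)                 # 3x3 feature map
pooled = maxpool2x2(fmap)                    # downsampled map
vector = [v for row in pooled for v in row]  # flattened for a dense layer
print(fmap, pooled, vector)  # [[0, 2, 0], [0, 0, 0], [0, -2, 0]] [[2]] [2]
```

The flattened `vector` is what a fully connected layer would score against each object class; here the single surviving value marks the strongest edge response in the image.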
VI. RESULTS
A. Sign-Up and Login-Page
The Login page allows the user to access the application by providing credentials, i.e. the user's email ID and password. The Sign-Up page is intended for users whose information is not yet stored by the application; the required information is taken from the user and stored in Google's database with the help of Google Dialogflow.
The following figures show the user interface of the ERAA application, including the Login page and the Sign-Up module.
Figure 9: Application Logo
Figure 10: Sign-up/Login Page
Figure 11: Login Page
Figure 12: Sign-Up Page
B. ERAA’s Launch Page
After the Login page, the application activates the launch page, shown in Figure 13 below.
Figure 13: ERAA’s Launch Page
C. Opening of Various Applications through ERAA
ERAA is successfully able to open the applications installed on the device, as the application is granted full access to them through the Flutter software platform. The working of this feature is shown in the following images.
Figure 14: Accessing of various applications through
ERAA
D. Object Detection Feature
Object detection is an added feature of the application, beyond basic tasks such as accessing other applications. It enables the user to run object detection on images captured with the device camera or stored in the device's gallery. The feature reports a confidence level for each label it matches, helping the user identify the exact identity of the object.
Figure 15(a): Object Detection Feature
Figure 15(b): Object Detection Feature
Figure 15(c): Object Detection Feature
Thus, ERAA is able to perform all the tasks required of a Virtual Personal Assistant.
VII. CONCLUSION
The research presented here will help readers understand the basic methodologies used in developing their own Virtual Personal Assistant. The survey given in this paper provides a clear understanding of the differences between the two Natural Language Understanding platforms, IBM Watson and Google Dialogflow, which will help in choosing the more appropriate one for future projects. The survey also covered various projects carried out with Google Dialogflow and IBM Watson, thereby determining their features.
In our project, the application ERAA, developed with the help of Google Dialogflow, was able to perform various tasks, such as opening other applications installed on the device, including WhatsApp, Instagram and Gmail. Its user-friendly interface, developed with Flutter, made the application easy to access, and Flutter's graphics packages allowed us to provide an attractive user interface. ERAA performed the basic features required of an ideal personal assistant. The speech recognition feature allowed users to perform tasks by giving voice commands, and the application was also capable of handling small talk with the user. Through developing the application, we gained substantial knowledge of Natural Language Understanding platforms and machine learning models, which are the foundations for developing future artificial intelligence applications.
VIII. FUTURE SCOPE
In the future, the proposed system can be used to build a software application serving various sections of society, be it healthcare services, educational institutions and more. Many services that are currently available to users only externally could be incorporated into a single application with the help of Google Dialogflow and the various other NLU platforms available on the market, making the application multi-functional software. We encourage readers to gain deeper insight into natural language platforms and to use them to develop applications that cater to the different needs of society.
IX. ACKNOWLEDGEMENT
We express our sincere gratitude to our project guide, Dr. Jaydeep Patil, for his extensive support and expertise in the project's research, development and execution.
X. REFERENCES
[1]. Mohit Bansal, Dr. T. K. Thivakaran, “Analysis of
Speech Recognition using Convolutional Neural
Network”, Journal of Engineering Sciences, Vol
11, Issue 1, 2020, Page 285-291.
[2]. J. Huang, J. Li and Y. Gong, "An analysis of
convolutional neural networks for speech
recognition," 2015 IEEE International
Conference on Acoustics, Speech and Signal
Processing (ICASSP), South Brisbane, QLD,
Australia, 2015, pp 4989-4993, doi:
10.1109/ICASSP.2015.7178920
[3]. T. B. Mokgonyane, T. J. Sefara, T. I. Modipa, M.
M. Mogale, M. J. Manamela and P. J. Manamela,
"Automatic Speaker Recognition System based on
Machine Learning Algorithms," 2019 Southern
African Universities Power Engineering
Conference/Robotics and Mechatronics/Pattern
Recognition Association of South Africa
(SAUPEC/RobMech/PRASA), Bloemfontein,
South Africa, 2019, pp. 141-146, doi:
10.1109/RoboMech.2019.8704837.
[4]. Vineel Pratap, Qiantong Xu, Jacob Kahn, Gilad
Avidov, Tatiana Likhomaneko, Awni Hannun,
Vitaliy Liptchinsky, Gabriel Synnaeve, Ronan
Collobert, “ Scaling up Online Speech
Recognition Systems using ConvNets”, 27th
January 2020.
[5]. A. B. Nassif, I. Shahin, I. Attili, M. Azzeh and K.
Shaalan, "Speech Recognition Using Deep Neural
Networks: A Systematic Review," in IEEE Access,
vol. 7, pp. 19143-19165, 2019, doi:
10.1109/ACCESS.2019.2896880.
[6]. M. A. Khan, A. Tripathi, A. Dixit and M. Dixit,
"Correlative Analysis and Impact of Intelligent
Virtual Assistants on Machine Learning," 2019
11th International Conference on Computational
Intelligence and Communication Networks
(CICN), Honolulu, HI, USA, 2019, pp. 133-139,
doi: 10.1109/CICN.2019.8902424.
[7]. Tulshan A.S., Dhage S.N. (2019) Survey on
Virtual Assistant: Google Assistant, Siri, Cortana,
Alexa. In: Thampi S., Marques O., Krishnan S., Li
KC., Ciuonzo D., Kolekar M. (eds) Advances in
Signal Processing and Intelligent Recognition
Systems. SIRS 2018. Communications in
Computer and Information Science, vol 968.
Springer, Singapore.
[8]. N. A. Godse, S. Deodhar, S. Raut and P. Jagdale,
"Implementation of Chatbot for ITSM
Application Using IBM Watson," 2018 Fourth
International Conference on Computing
Communication Control and Automation
(ICCUBEA), Pune, India, 2018, pp. 1-5, doi:
10.1109/ICCUBEA.2018.8697411.
[9]. Linda W. Lee, Amir Dabirian, Ian Paul McCarthy, Jan Kietzmann (2020), "Making sense of text: artificial intelligence-enabled content analysis", European Journal of Marketing, Vol. 54 No. 3, pp. 615-644.
[10].Roberto Reyes, David Garza, Leonardo Garrido,
Victor De la Cueva and Jorge Ramirez,
“Methodology for the Implementation of Virtual
Assistants for Education Using Google
Dialogflow.”, Advances in Soft Computing
(pp.440-451).
[11]. Chinnapa Reddy Kanakanti and Sabitha R., "AI and ML Based Google Assistant for an Organization using Google Cloud Platform and Dialogflow", International Journal of Recent Technology and Engineering (IJRTE), Volume-8 Issue-5, January 2020, pp. 2722-2727.
[12].Mayank Aggarwal and Mani Madhukar, “IBM’s
Watson Analytics for Health Care: A Miracle
Made True.”, Cloud Computing Systems and
Applications in Healthcare. DOI: 10.4018/978-1-
5225-1002-4.ch007.
[13].G. E. Dahl, D. Yu, L. Deng, and A. Acero,
“Contextdependent pre-trained deep neural
networks for largevocabulary speech
recognition,” IEEE Trans. on Audio, Speech and
Language Processing, vol. 20, no. 1, pp. 30– 42,
2012.
[14].Sánchez-Díaz X., Ayala-Bastidas G., Fonseca-
Ortiz P., Garrido L. (2018) A Knowledge-Based
Methodology for Building a Conversational
Chatbot as an Intelligent Tutor. In: Batyrshin I.,
Martínez-Villaseñor M., Ponce Espinosa H. (eds)
Advances in Computational Intelligence. MICAI
2018. Lecture Notes in Computer Science, vol
11289. Springer, Cham.
https://doi.org/10.1007/978-3-030-04497-8_14.
[15].Winkler, Rainer & Söllner, Matthias. (2018),
“Unleashing the Potential of Chatbots in
Education: A State-Of-The-Art Analysis”,
Academy of Management Proceedings. 2018.
DOI: 10.5465/AMBPP.2018.15903abstract
[16].A. P. Singh, R. Nath and S. Kumar, "A Survey:
Speech Recognition Approaches and
Techniques," 2018 5th IEEE Uttar Pradesh
Section International Conference on Electrical,
Electronics and Computer Engineering
(UPCON), Gorakhpur, India, 2018, pp. 1-4, doi:
10.1109/UPCON.2018.8596954.
[17]. Ossama Abdel-Hamid, Abdel-rahman Mohamed, Hui Jiang, Li Deng, Gerald Penn, and Dong Yu, "Convolutional Neural Networks for Speech Recognition", IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1533-1545, 2014.
[18].Ying Zhang, Mohammad Pezeshki, Philemon
Brakel, Saizheng Zhang, Cesar Laurent Yoshua
Bengio, Aaron Courville, “Towards End-to-End
Speech Recognition with Deep Convolutional
Neural Networks”, arXiv:1701.02720v1,2017.
Cite this article as :
Dr. Jaydeep Patil, Atharva Shewale, Ekta Bhushan,
Alister Fernandes, Rucha Khartadkar, "A Voice Based
Assistant Using Google Dialogflow and Machine
Learning", International Journal of Scientific Research
in Science and Technology (IJSRST), Online ISSN :
2395-602X, Print ISSN : 2395-6011, Volume 8 Issue 3,
pp. 06-17, May-June 2021. Available at
doi : https://doi.org/10.32628/IJSRST218311
Journal URL : https://ijsrst.com/IJSRST218311