Probabilistic comprehension and modeling is one of the newest areas in information extraction and text linguistics. Although much of the research in linguistics and information extraction is probabilistic, such approaches lost prominence in the 1980s, largely because input language is noisy, ambiguous, and segmented. Probability theory is certainly normative for solving problems involving uncertainty; perhaps human language processing is simply a non-optimal, non-rational process. A subjective probabilistic approach addresses this problem through scenario, evidence, and hypothesis.
I’m a young Pakistani blogger, academic writer, freelancer, Quaidian, MPhil scholar, quote lover, and co-founder of Essar Student Fund & Blueprism Academia, from Mehdiabad, Skardu, Gilgit-Baltistan, Pakistan.
I am an academic writer and freelancer! I can work on research papers, thesis writing, academic research, research projects, proposals, assignments, business plans, and case-study research.
Expertise:
Management Sciences, Business Management, Marketing, HRM, Banking, Business Marketing, Corporate Finance, International Business Management
For Order Online:
Whatsapp: +923452502478
Portfolio Link: https://blueprismacademia.wordpress.com/
Email: arguni.hasnain@gmail.com
Follow Me:
Linkedin: arguni_hasnain
Instagram: arguni.hasnain
Facebook: arguni.hasnain
Fusion Confusion? Comments on Nancy Reid: "BFF Four - Are we Converging?" - jemille6
D. Mayo's comments on Nancy Reid's "BFF Four-Are we Converging?" given May 2, 2017 at The Fourth Bayesian, Fiducial and Frequentists Workshop held at Harvard University.
Grounded Theory: A specific methodology developed by Glaser and Strauss (1967) for the purpose of building theory from data. In their book, the term grounded theory is used in a more general sense to denote theoretical constructs derived from qualitative analysis of data.
Slides from an informal presentation on a special issue of Perspectives on Psychological Science, focusing on issues of theory construction in psychology.
The Logical Implication Table in Binary Propositional Calculus: Justification... - ijcsta
Logic is the discipline concerned with providing valid general rules on which scientific reasoning and the resulting
propositions are based. To evaluate the validity of sentences in propositional calculus, we typically perform a
complete case analysis of all the possible truth-values assigned to the sentence’s propositional variables. Truth
tables provide a systematic method for performing such analysis in order to determine whether the sentence is valid,
satisfiable, contradictory, consistent, etc. However, in order to validate logical statements, we have to use valid truth
tables, i.e., truth tables that are provably consistent and justifiable by some natural criteria. The justification of the
truth table of some logical connectives is straightforward, due to the support of the table in everyday applications.
Nevertheless, the justification of one of the logical connectives, namely the implication operator, has always been
difficult to build and understand. Yet logical implication is arguably the most important operator because
of its applications as an inference engine for reasoning in science in general and control engineering in particular.
In this paper, the author presents this problem and introduces a non-exhaustive proof, which justifies the logical
implication's truth table in one phase. The author then proposes another, optimal proof, discussing the points of
optimization and the effects of the resulting linguistic and philosophical interpretation on the scientific reasoning
processes. Finally, the paper envisions possible extension of the proposed methodology to solve similar problems
in various types of logic.
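The complete case analysis the abstract describes can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's proof: here the implication p → q is taken in its standard material sense, ¬p ∨ q, and the truth table is enumerated exhaustively over every assignment.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Exhaustive case analysis over all truth-value assignments
table = {(p, q): implies(p, q) for p, q in product([True, False], repeat=2)}
for (p, q), value in table.items():
    print(f"{p!s:5} -> {q!s:5} : {value}")

# A sentence is valid (a tautology) if it is true under every assignment;
# (p and (p -> q)) -> q, i.e. modus ponens, is one such sentence.
modus_ponens_valid = all(
    implies(p and implies(p, q), q) for p, q in product([True, False], repeat=2)
)
```

The same exhaustive loop can decide satisfiability (true under at least one assignment) or contradiction (true under none), which is exactly the distinction the abstract draws.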
A small, descriptive presentation on the topic of hypotheses. It contains all the information you need about the topic, covering the types, characteristics, and contributions of hypotheses.
Mobile Media Revolution - The Change in the Media Landscape - Monty C. M. Metzger
I spoke at the Internet Congress in Berlin (www.iico.de) in June 2008. This is my presentation, titled "Mobile Media Revolution - The Changing Media Landscape".
How Is Each Related To Deductive Inquiry?
Deduction Vs Deductive Reasoning
Deductive Approach Paper
Inductive Approach
Deductive Reasoning Strengths And Weaknesses
Deductive Approach In Research Approach
Deductive Reasoning Case
Examples Of Unsound Valid Deductive Argument
Deductive Critical Thinking
Social Work And Violence Essay
Deductive Reasoning
Advantages And Disadvantages Of Deductive Method
What Makes A Deductive Argument
Deductive and Inductive Reasoning
Example Of An Unsound Deductive Argument
Deductive Bible Studies
Deductive and Inductive Grammar Teaching
Inductive & Deductive Research
Research Approach And Inductive Approach
Unit 7. Theoretical & Conceptual Framework.pptx - shakirRahman10
THEORETICAL AND CONCEPTUAL FRAMEWORKS:
Objectives:
1. Discuss the different types of models and frameworks used in research.
2. Discuss the use of theoretical/conceptual frameworks and models in research.
3. Differentiate theoretical/conceptual frameworks and models.
4. Recognize the theory or theoretical model/framework best suited to a particular research study.
5. Develop conceptual models/frameworks best suited to a particular research study.
What is a theoretical framework?
A theoretical framework is a summary of the researcher’s theory regarding a particular problem that is developed through a review of previously tested knowledge of variables involved. It identifies a plan for investigation and interpretation of the findings.
It relates to the philosophical basis on which the research takes place and forms the link between the theoretical aspects and practical components of the investigation undertaken. It therefore "has implications for every decision made in the research process".
A theoretical framework can be considered a conceptual model that establishes a sense of structure to guide the research process. It includes the variables a researcher intends to measure and the relationships he/she seeks to understand. Essentially, this is where a researcher develops a "theory" and builds his/her case for investigating that theory.
The theoretical framework is the researcher’s presentation of a theory that explains a particular problem and it is not based on his/her suspicions alone.
The theoretical framework is presented in an early section of a dissertation, mainly in chapter two of the report, and provides the rationale for conducting your research to investigate a particular research problem.
It involves a well-supported rationale and is organized in a manner that helps the reader understand and assess the perspective of the researcher.
When developing a theoretical framework:
The researcher starts by describing what is known about the variables involved, what is known about their relationships, and what can be explained thus far.
One needs to investigate other researchers' theories behind these relationships and identify a theory (or a combination of theories) that explains the major research problem.
The researcher needs to consider alternative theories that might challenge his/her perspective.
One also considers the limitations associated with the theory, and quite possibly the problem could be better understood through other theoretical frameworks.
Significance of a theoretical framework:
It helps the researcher to consider other possible frameworks and to reduce biases that may sway the researcher’s interpretation.
It clarifies researcher’s implicit theory in a manner that is more clearly defined.
It demonstrates that the relationships proposed by the researcher are not based on his/her personal instincts or guesses, but rather formed from facts obtained from authors of previous research.
The Use of Java Swing’s Components to Develop a Widget - Waqas Tariq
A widget is a kind of application that provides a single service, such as a map, news feed, simple clock, battery-life indicator, etc. This kind of interactive software object has been developed to facilitate user interface (UI) design; the same UI function may be implemented using different widgets. In this article, we present the widget as a platform that is generally used in various applications, such as on the desktop, in the web browser, and on the mobile phone. We also describe a visual menu of Java Swing's components that will be used to build a widget. We assume the reader has successfully compiled and run a program that uses Swing components.
3D Human Hand Posture Reconstruction Using a Single 2D Image - Waqas Tariq
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade, but these research efforts have been hampered by the computational complexity of inverse kinematics and 3D reconstruction. In this paper, our objective is 3D hand posture estimation from a single 2D image, with robotic applications in mind. We introduce a human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without significant degradation of performance. A novel algorithm to estimate the 3D hand posture from eight 2D projected feature points is proposed. Experimental results using real images confirm that our algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; model-based approach; gesture recognition; human-computer interface; machine vision.
Camera as Mouse and Keyboard for Handicap Person with Troubleshooting Ability... - Waqas Tariq
Camera mice have been widely used by handicapped persons to interact with computers. Crucially, a camera mouse must be able to replace all the roles of the typical mouse and keyboard: it must provide all mouse click events and keyboard functions (including all shortcut keys) when used by a handicapped person. The camera mouse must also allow users to troubleshoot by themselves, and it must eliminate the neck-fatigue effect during long periods of use. In this paper, we propose a camera mouse system with a timer as the left-click event and blinking as the right-click event. We also modify the original on-screen keyboard layout by adding two buttons (a "drag/drop" button for mouse drag-and-drop events, and a button that calls the task manager for troubleshooting) and by changing the behavior of the CTRL, ALT, SHIFT, and CAPS LOCK keys to provide keyboard shortcuts. We also develop a recovery method that allows users to leave the camera's view and come back again, eliminating the neck-fatigue effect. Experiments involving several users were carried out in our laboratory. The results show that our camera mouse allows users to type, perform left- and right-click events, drag and drop, and troubleshoot without using their hands. With this system, a handicapped person can use a computer more comfortably and with less eye dryness.
A Proposed Web Accessibility Framework for the Arab Disabled - Waqas Tariq
The Web is providing unprecedented access to information and interaction for people with disabilities. This paper presents a Web accessibility framework which eases Web access for disabled Arab users and facilitates their lifelong learning. The proposed framework provides the disabled Arab user with an easy means of access in their mother language, so they do not have to overcome the barrier of learning the target spoken language. The framework is based on analyzing the web page meta-language, extracting its content, and reformulating it in a format suitable for disabled users. Its basic objective is to support the equal rights of Arab disabled people to access education and training alongside non-disabled people. Keywords: Arabic Moon code, Arabic Sign Language, Deaf, Deaf-blind, E-learning Interactivity, Moon code, Web accessibility, Web framework, Web System, WWW.
Real Time Blinking Detection Based on Gabor Filter - Waqas Tariq
A new method of blink detection is proposed. It is crucial that a blink detection method be robust against different users, noise, and changes of eye shape. In this paper, we propose a blink detection method that measures the distance between the two arcs of the eye (the upper and lower parts). We detect the eye arcs by applying a Gabor filter to the eye image; the Gabor filter is advantageous in image-processing applications because it extracts spatially localized spectral features, so lines, arches, and other shapes are easily detected. After the two eye arcs are detected, we measure the distance between them using a connected-component labeling method. The eye is marked open when the distance between the two arcs exceeds a threshold, and closed otherwise. Experimental results show that our proposed method is robust against different users, noise, and eye-shape changes, with perfect accuracy.
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ... - Waqas Tariq
A method for computer input using the human eyes only, based on two Purkinje images and working in real time without calibration, is proposed. Experimental results show that cornea curvature can be estimated using Purkinje images derived from two light sources, so no calibration is needed to reduce person-to-person differences in cornea curvature. The proposed system is found to tolerate user movements of 30 degrees in the roll direction and 15 degrees in the pitch direction by utilizing the detected face attitude, derived from the face plane formed by three feature points on the face: the two eyes and the nose or mouth. The proposed system is also found to work in real time.
Toward a More Robust Usability Concept with Perceived Enjoyment in the Contex... - Waqas Tariq
Mobile multimedia services are relatively new but have quickly come to dominate people's lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of the study is to examine the role of Perceived Enjoyment (PE) and the determinants that contribute to PE in the context of using mobile multimedia services. The results indicate that PE influences Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), and directly influences Behavioral Intention (BI). Aesthetics and flow are key determinants explaining Perceived Enjoyment (PE) in mobile multimedia usage.
Collaborative Learning of Organisational Knowledge - Waqas Tariq
This paper presents recent research into methods used in Australian Indigenous Knowledge sharing and looks at how these can support the creation of suitable collaborative environments for timely organisational learning. The protocols and practices as used today and in the past by Indigenous communities are presented and discussed in relation to their relevance to a personalised system of knowledge sharing in modern organisational cultures. This research focuses on user models, knowledge acquisition and integration of data for constructivist learning in a networked repository of organisational knowledge. The data collected in the repository is searched to provide collections of up-to-date and relevant material for training in a work environment. The aim is to improve knowledge collection and sharing in a team environment. This knowledge can then be collated into a story or workflow that represents the present knowledge in the organisation.
Our research aims to propose a global approach for the specification, design, and verification of context-aware Human-Computer Interfaces (HCI). This is a Model-Based Design (MBD) approach. The methodology describes the ubiquitous environment with ontologies; OWL is the standard used for this purpose. The specification and modeling of human-computer interaction are based on Petri nets (PN), which raises the question of representing Petri nets in XML. We use the PNML modeling standard for this purpose. In this paper, we propose an extension of this standard for the specification, generation, and verification of HCI. This extension is a methodological approach for constructing PNML from Petri nets. The design principle uses the composition of elementary Petri net structures as modular PNML. The objective is to obtain a valid interface through verification of the properties of the elementary Petri nets represented in PNML.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board - Waqas Tariq
The main aim of this paper is to build a system capable of detecting and recognizing hand gestures in an image captured by a camera. The system is built on Altera's FPGA DE2 board, which contains a Nios II soft-core processor. Image-processing techniques and a simple but effective algorithm are implemented for this purpose: the image is smoothed to ease the subsequent steps in translating the hand sign signal. The algorithm translates numerical hand sign signals, and the result is displayed on the seven-segment display. Altera's Quartus II, SOPC Builder, and Nios II EDS software are used to construct the system. With SOPC Builder, the related components on the DE2 board can be interconnected easily and in an orderly way, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, the C programming language is used to code the hand sign translation algorithm. Being able to recognize hand sign signals from images can help humans control a robot and other applications that require only a simple set of instructions, provided a CMOS sensor is included in the system.
An Overview on Advanced Research Works on Brain-Computer Interface - Waqas Tariq
A brain-computer interface (BCI) is a prominent result of research in human-computer synergy, in which direct communication between the brain and an external device augments, assists, and repairs human cognitive function. Advanced work, such as developing BCI switch technologies for intermittent (or asynchronous) control in natural environments, building BCIs with fuzzy logic systems, or applying wavelet theory to improve their efficacy, is still ongoing, and some useful results have been found. The need for such brain-machine interfaces is also growing day by day, e.g. for neuropsychological rehabilitation, emotion control, etc. An overview of the control theory and some advanced work in the field of brain-machine interfaces is given in this paper.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays... - Waqas Tariq
There is a growing ageing phenomenon, with the ageing population rising throughout the world. According to the World Health Organization (2002), the population aged 60 and over is expected to reach 694 million, a growth of 223%, between 1970 and 2025. The growth is especially significant in advanced economies such as North America, Japan, Italy, Germany, and the United Kingdom. This growing older adult population significantly impacts the social culture, lifestyle, healthcare system, economy, infrastructure, and government policy of a nation. However, there are limited research studies on the perception and usage of mobile phones and their services by senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with 5 senior citizens about how they perceive their mobile phones. This paper reveals 4 interesting themes from this preliminary study, in addition to findings on the desirable mobile requirements for local senior citizens with respect to health, safety, and communication. The findings bring interesting insight to local telecommunication industries as a whole and will also serve as groundwork for more in-depth study in the future.
Principles of Good Screen Design in Websites - Waqas Tariq
Visual techniques for the proper arrangement of elements on the user's screen have helped designers make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of screen elements based on particular criteria for the best appearance of the screen. This paper investigates a few significant visual techniques in various web user interfaces and showcases the results for a better understanding of these techniques and their presence.
Virtual teams are used more and more by companies and other organizations to reap their benefits. They are a great way to enable teamwork in situations where people are not sitting in the same physical place at the same time. As companies seek to increase the use of virtual teams, a need exists to explore the context of these teams, the virtuality of a team, and software that may help these teams work virtually. Virtual teams have the same basic principles as traditional teams, but with one big difference: the way the team members communicate. Instead of using the dynamics of in-office face-to-face exchange, they rely on special communication channels enabled by modern technologies, such as e-mail, fax, phone calls, teleconferences, virtual meetings, etc. This is why this paper focuses on the issues regarding virtual teams and how these teams are created and progressing in Albania.
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu... - Waqas Tariq
It is a well-established fact that web applications require frequent maintenance because of cutting-edge business competition. The authors have worked on quality evaluation of web sites in the Indian e-commerce domain and, as a result of that work, produced a quality-wise ranking of these sites. According to their work, and surveys done by various other groups, the Futurebazaar web site is considered one of the best Indian e-shopping sites. In this research paper the authors assess the maintenance of the same site by incorporating the problems incurred during this evaluation. This exercise presents a real-world web-site maintainability problem and gives a clear picture of all the quality metrics directly or indirectly related to the maintainability of the web site.
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua... - Waqas Tariq
A paradox has been observed whereby web site usability is proven to be an essential element of a web site, yet at the same time there exists an abundance of web pages with poor usability. This discrepancy is the result of limitations that currently prevent web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, enabling a non-expert in the field of usability to conduct the evaluation and thereby reducing the costs associated with such evaluation. Additionally, the framework allows the flexibility of adding, modifying, or deleting guidelines without altering the code that references them, since the guidelines and the code are two separate components. A comparison of evaluation results obtained with the framework against published evaluations carried out by web site usability professionals reveals that the framework automatically identifies the majority of usability violations. Due to the consistency with which it evaluates, it identified additional guideline-related violations that were not identified by the human evaluators.
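The separation of guidelines from evaluation code that this abstract emphasizes can be illustrated with a minimal sketch. The guideline names and patterns below are invented for illustration; they are not taken from the USEFul framework itself.

```python
import re

# Guidelines are plain data, separate from the evaluator, so they can be added,
# modified, or deleted without touching the evaluation code. These two example
# rules are hypothetical, not the framework's actual guideline set.
GUIDELINES = [
    {"id": "IMG_ALT", "desc": "Images should carry an alt attribute",
     "violation": re.compile(r"<img\b(?![^>]*\balt=)[^>]*>", re.I)},
    {"id": "GENERIC_LINK", "desc": "Link text should not be 'click here'",
     "violation": re.compile(r"<a\b[^>]*>\s*click here\s*</a>", re.I)},
]

def evaluate(html: str) -> list[str]:
    """Return the ids of all guidelines the page violates."""
    return [g["id"] for g in GUIDELINES if g["violation"].search(html)]

page = '<img src="logo.png"><a href="/more">click here</a>'
print(evaluate(page))  # -> ['IMG_ALT', 'GENERIC_LINK']
```

Because the evaluator only iterates over the guideline list, adding a new check is a data change rather than a code change, which is the design point the abstract makes.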
Robot Arm Utilized Having Meal Support System Based on Computer Input by Huma... - Waqas Tariq
A having-meal support system that utilizes a robot arm, based on computer input by the human eyes only, is proposed. The proposed system is developed for handicapped/disabled persons as well as elderly persons, and is tested with able-bodied persons having several shapes and sizes of eyes under a variety of illumination conditions. The test results show that the proposed system works well for selecting the desired foods and retrieving them according to users' requirements. The proposed system is found to be 21% faster than the manually controlled robot.
Dynamic Construction of Telugu Speech Corpus for Voice Enabled Text Editor - Waqas Tariq
In recent decades, speech-interactive systems have gained increasing importance. The performance of an ASR system mainly depends on the availability of a large corpus of speech. The conventional method of building a large-vocabulary speech recognizer for any language uses a top-down approach to speech. This approach requires a large speech corpus with sentence- or phoneme-level transcription of the speech utterances. The transcriptions must also cover different speech orders so that the recognizer can build models for all the sounds present. But for the Telugu language, because of its complex nature, a very large, well-annotated speech database is very difficult to build. It is very difficult, if not impossible, to cover all the words of any Indian language, where each word may have thousands or even millions of word forms. A significant part of the grammar that is handled by syntax in English (and other similar languages) is handled within morphology in Telugu. Phrases comprising several words (that is, tokens) in English would map onto a single word in Telugu. The Telugu language is phonetic in nature in addition to being morphologically rich, which is why speech technology developed for English cannot be applied directly to Telugu. This paper highlights the work carried out in an attempt to build a voice-enabled text editor with automatic term suggestion. The main claim of the paper is the recognition-enhancement process we developed for highly inflecting, morphologically rich languages. This method increases speech recognition accuracy with a greatly reduced corpus size. It also adds Telugu words to the database dynamically, resulting in growth of the corpus.
An Improved Approach for Word Ambiguity Removal - Waqas Tariq
Word ambiguity removal is the task of removing ambiguity from a word, i.e. identifying the correct sense of a word in ambiguous sentences. This paper describes a model that uses a part-of-speech tagger and three categories for word sense disambiguation (WSD). WSD is essential for improving interactions between users and computers in human-computer interaction; for this, supervised and unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding the best-suited domain for a word. Keywords: Human Computer Interaction, Supervised Training, Unsupervised Learning, Word Ambiguity, Word sense disambiguation
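A domain-driven sense choice of the kind this abstract describes can be sketched as a toy keyword-overlap score. The sense inventory and domain keywords below are invented examples for illustration; they are not the paper's actual model or data.

```python
# Hypothetical sense inventory: each sense of the ambiguous word "bank"
# is tagged with a domain and a few keywords typical of that domain.
SENSES = {
    "bank/finance":   {"money", "loan", "deposit", "account", "interest"},
    "bank/geography": {"river", "water", "shore", "fishing", "flood"},
}

def disambiguate(context: str) -> str:
    """Pick the sense whose domain keywords overlap the context words most."""
    words = set(context.lower().split())
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("she opened a savings account at the bank to deposit money"))
# -> bank/finance
print(disambiguate("we sat on the bank of the river fishing"))
# -> bank/geography
```

A real system would add part-of-speech filtering and trained sense models, as the abstract describes, but the core idea of scoring each candidate domain against the surrounding context is the same.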
Parameters Optimization for Improving ASR Performance in Adverse Real World N... - Waqas Tariq
Existing research shows that many techniques and methodologies are available for performing every step of an Automatic Speech Recognition (ASR) system, but performance (minimizing the Word Error Rate, WER, and maximizing the Word Accuracy Rate, WAR) does not depend only on the technique applied. The research indicates that performance mainly depends on the category of noise, the level of noise, and the variable sizes of the window, frame, frame overlap, etc. considered in existing methods. The main aim of the work presented in this paper is to vary parameters such as window size, frame size, and frame overlap percentage; to observe the performance of the algorithms for various categories and levels of noise; and to train the system across all parameter sizes and categories of real-world noisy environments, in order to improve the performance of the speech recognition system. This paper presents the results of Signal-to-Noise Ratio (SNR) and accuracy tests under variable parameter sizes. It is observed that it is very hard to evaluate the test results and decide on a parameter size for ASR performance improvement. Hence, this study further suggests feasible and optimal parameter sizes using a Fuzzy Inference System (FIS) for enhancing accuracy in adverse real-world noisy environmental conditions. This work will be helpful for discriminative training of ubiquitous ASR systems for better Human-Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
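The interaction of frame size and frame overlap that this abstract varies can be made concrete with a short sketch. The sample rate and frame sizes below are common illustrative defaults, not the paper's actual settings.

```python
def frame_count(num_samples: int, frame_size: int, overlap_pct: float) -> int:
    """Number of full frames obtained from a signal of num_samples samples,
    given a frame size in samples and an overlap percentage between frames."""
    hop = int(frame_size * (1 - overlap_pct / 100))  # step between frame starts
    if hop <= 0 or num_samples < frame_size:
        return 0
    return 1 + (num_samples - frame_size) // hop

# One second of 16 kHz audio with 25 ms frames (400 samples):
# more overlap means more frames to extract features from and score.
for overlap in (0, 25, 50, 75):
    print(f"{overlap}% overlap -> {frame_count(16000, 400, overlap)} frames")
```

This is why varying the overlap percentage changes both the computational cost and the amount of training data the recognizer sees from the same recording.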
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
The Indian economy is classified into different sectors to simplify the analysis and understanding of economic activities. For Class 10, it's essential to grasp the sectors of the Indian economy, understand their characteristics, and recognize their importance. This guide will provide detailed notes on the Sectors of the Indian Economy Class 10, using specific long-tail keywords to enhance comprehension.
For more information, visit-www.vavaclasses.com
We all have good and bad thoughts from time to time and situation to situation. We are bombarded daily with spiraling thoughts(both negative and positive) creating all-consuming feel , making us difficult to manage with associated suffering. Good thoughts are like our Mob Signal (Positive thought) amidst noise(negative thought) in the atmosphere. Negative thoughts like noise outweigh positive thoughts. These thoughts often create unwanted confusion, trouble, stress and frustration in our mind as well as chaos in our physical world. Negative thoughts are also known as “distorted thinking”.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
Andreas Schleicher presents at the OECD webinar ‘Digital devices in schools: detrimental distraction or secret to success?’ on 27 May 2024. The presentation was based on findings from PISA 2022 results and the webinar helped launch the PISA in Focus ‘Managing screen time: How to protect and equip students against distraction’ https://www.oecd-ilibrary.org/education/managing-screen-time_7c225af4-en and the OECD Education Policy Perspective ‘Students, digital devices and success’ can be found here - https://oe.cd/il/5yV
1.4 modern child centered education - mahatma gandhi-2.pptx
Subjective Probabilistic Knowledge Grading and Comprehension
A.Suresh Babu, Dr P.Premchand & Dr A.Govardhan
International Journal of Data Engineering (IJDE) Volume (1): Issue (5)
A.Suresh Babu asureshjntu@gmail.com
Asst. Professor
Department of Computer
Science Engineering JNTUA,
Anantapur
Dr P.Premchand p.premchand@uceou.edu
Professor, Department of Computer
Science Engineering Osmania
University Hyderabad
Dr A.Govardhan govardhan_cse@yahoo.co.in
Professor, Department of Computer
Science Engineering
JNTUHCEJ, Jagityala
Abstract
Probabilistic comprehension and modeling is one of the newest areas in
information extraction and text linguistics. Although much of the early research
in linguistics and information extraction was probabilistic, its importance
faded in the 1980s, largely because input language is noisy, ambiguous and
segmented. Probability theory is certainly a normative framework for solving
problems that involve uncertainty, yet human language processing may simply be
a non-optimal, non-rational process. A subjective probabilistic approach
addresses this problem through scenario, evidence and hypothesis.
Keywords: Probability & Statistics, Probabilistic Comprehension, Subjective Grading, Subjective
Probability.
1. INTRODUCTION
Grading, an objective standardized measure for evaluating texts, cannot be dispensed
with. Subjective and objective measures both declare the values in a text. Subjective grading is
more persuasive than objective grading, since previous research shows that grading knowledge
becomes mechanistic when only objective measures are used.
One of the central problems in the field of knowledge extraction or discovery is the availability
and development of methods that determine good measures of the interestingness of discovered
patterns. Such measures of interestingness divide into objective measures - those that depend
only on the structure of a pattern and the underlying data used in the discovery process - and
subjective measures - those that also depend on the class of users who examine the pattern.
Objectiveness describes absolute grading of knowledge, whereas subjectiveness describes
relative grading. Probabilistic belief is the central concern that brings clarity to the
selection of measures.
1.1 Important Definitions
Objective – is a statement that is completely unbiased. It is not touched by the speaker’s previous
experiences or tastes. It is verifiable by looking up facts or performing mathematical calculations.
Subjective – is a statement that has been colored by the character of the speaker or writer. It
often has a basis in reality, but reflects the perspective through which the speaker views reality. It
cannot be verified using concrete facts and figures.
1.2 When to Be Objective and Subjective
Objective – it is important to be objective when you are making any kind of a rational decision. It
might involve purchasing something or deciding which job offer to take. You should also be
objective when you are reading, especially news sources. Being objective when you are meeting
and having discussions with new people helps you to keep your concentration focused on your
goal, rather than on any emotions your meeting might trigger. In essence, being objective means
accomplishing the goal without distractions.
Subjective – can be used when nothing tangible is at stake. When you are watching a movie or
reading a book for pleasure, being subjective and getting caught up in the world of the characters
makes your experience more enjoyable. If you are discussing any type of art, you have to keep in
mind that everyone’s opinions on a particular piece are subjective. In essence, being subjective
means accomplishing the goal with complete knowledge; the supplements are not distractions
but support that enriches the knowledge needed to fulfill the goal.
Natural language degree expressions denote degrees or relations between degrees and norms of
expectation that lie on a scale [3]. What has not yet been clarified is what kinds of relations are
expressed by degree expressions. Degree expressions are not only a means of denoting graded
properties, but they also allow for the adaptation to varying precision requirements as well as for
very efficient communication by referring to entities that are only available implicitly or derivable
from the context of the phrase.
1.3 Basis
The Dempster-Shafer theory, also known as the theory of belief functions, is a generalization of
the Bayesian theory of subjective probability. Whereas the Bayesian theory requires probabilities
for each question of interest, belief functions allow us to base degrees of belief for one question
on probabilities for a related question. These degrees of belief may or may not have the
mathematical properties of probabilities; how much they differ from probabilities will depend on
how closely the two questions are related [4].
The Dempster-Shafer theory is based on two ideas: the idea of obtaining degrees of belief for one
question from subjective probabilities for a related question, and Dempster's rule for combining
such degrees of belief when they are based on independent items of evidence.
Implementing the Dempster-Shafer theory in a specific problem generally involves solving two
related problems. First, we must sort the uncertainties in the problem into a priori independent
items of evidence. Second, we must carry out Dempster's rule computationally. These two
problems and their solutions are closely related. Sorting the uncertainties into independent items
leads to a structure involving items of evidence that bear on different but related questions, and
this structure can be used to make computations feasible.
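To make the second step concrete, Dempster's rule of combination can be carried out computationally as in the following sketch (an illustration of the standard rule, not the authors' implementation; the two-witness frame and the 0.8 reliability values are invented for the example):

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions over the same frame.

    m1, m2: dict mapping frozenset (focal element) -> mass in [0, 1].
    Mass falling on the empty set is treated as conflict and
    renormalized away.
    """
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # product mass lost to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict  # normalization constant
    return {s: w / k for s, w in combined.items()}

# Two independent witnesses each support "burglar" with strength 0.8,
# leaving 0.2 uncommitted (assigned to the whole frame).
frame = frozenset({"burglar", "dog"})
m_betty = {frozenset({"burglar"}): 0.8, frame: 0.2}
m_sally = {frozenset({"burglar"}): 0.8, frame: 0.2}
m = combine(m_betty, m_sally)
print(m[frozenset({"burglar"})])  # ≈ 0.96
```

Combining the two 0.8 beliefs yields a combined belief of about 0.96 in the burglar hypothesis, illustrating how independent items of evidence reinforce each other under Dempster's rule.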
2. RELATED WORK
2.1 Dempster-Shafer theory
The method of reasoning with uncertain information known as Dempster-Shafer theory arose
from the reinterpretation and development of the work of Arthur Dempster by Glenn Shafer in his
book “A Mathematical Theory of Evidence” and further publications. Dempster-Shafer theory is a
belief system that deals with the evidence available for a hypothesis under uncertainty. The
uncertainty is represented by Bel(U), the belief, which is the evidence for the hypothesis, and
Plaus(U), the plausibility, which is the evidence that does not contradict the hypothesis.
Suppose, for example, that Betty and Sally testify independently that they heard a burglar enter
my house. They might both have mistaken the noise of a dog for that of a burglar, and because of
this common uncertainty, it is not possible to combine degrees of belief based on their evidence
directly by the Dempster’s rule. But if to consider explicitly the possibility of a dog’s presence,
then three independent items of evidence can be identified: evidence for or against the presence
of a dog, evidence for Betty’s reliability and the evidence for Sally’s reliability. These items of
evidence can be combined by Dempster’s rule and the computations are facilitated by the
structure that relates the different questions to arise.
Belief and Plausibility can be viewed as providing a lower and upper bounds respectively on the
likelihood of U. Over all the possible states and worlds the Dempster and Shafer belief and
plausibility functions are defined.
Bel: 2^W → [0, 1]
Plaus: 2^W → [0, 1]
where W = {w_1, w_2, ..., w_n}.
Since belief and plausibility encode evidence, they cannot be defined solely on individual states:
Σ_i Bel({w_i}) ≤ 1
Σ_i Plaus({w_i}) ≥ 1
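These bounds can be checked numerically. The sketch below, using an illustrative three-state frame and invented masses, computes Bel and Plaus from a basic probability assignment and verifies the two inequalities:

```python
def belief(m, a):
    """Bel(A) = total mass of focal elements contained in A."""
    return sum(w for b, w in m.items() if b <= a)

def plausibility(m, a):
    """Plaus(A) = total mass of focal elements intersecting A."""
    return sum(w for b, w in m.items() if b & a)

W = frozenset({"w1", "w2", "w3"})
m = {
    frozenset({"w1"}): 0.5,
    frozenset({"w1", "w2"}): 0.25,
    W: 0.25,
}

# Summed over singleton states, belief stays at or below 1 and
# plausibility stays at or above 1, as the inequalities require.
bel_sum = sum(belief(m, frozenset({w})) for w in W)          # 0.5
plaus_sum = sum(plausibility(m, frozenset({w})) for w in W)  # 1.75
print(bel_sum, plaus_sum)
```

The evidence placed on non-singleton sets ({w1, w2} and W itself) is what pushes the singleton beliefs below 1 while keeping the singleton plausibilities above 1.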
2.2 Grading Knowledge
Information Extraction, akin to “Knowledge Extraction” from text, maps natural language onto a
formal representation of the facts contained in the texts. Common text knowledge extraction
methods show a severe lack of techniques for understanding natural language “degree
expressions”, which describe gradable properties such as price and quality [3]. However,
without an adequate understanding of such degree expressions it is often impossible to grasp the
central meaning of a text. Degree expressions describe gradable attributes of objects, events or
other ontological entities. Complex Lexical semantics is to be instrumented when describing what
remains constant when a word is put into different contexts. Fuzzy Logic is even implemented for
defining various grades and scales for measuring quantitative text. The special calculi for the text
are available to derive valid conclusions from a representation that sticks near to the surface of
the utterance and can also handle non-gradable text. Ontological research degree expressions
are denoted as their own entities, the upper ontology consisting of the categories such as,
Physical-Object and Action augmented by the primitive concept Degree. Some researchers
believe that ontological entities are too unparsimonious; the lexical approach only seems to be
the one that overcomes the problems. Adjectives of the text propose much easier method of
grading. The gradability of (most) adjectives is more obvious than the gradability of members of
the other word classes. Besides the high frequency of gradable adjectives, another reason to focus
on them is that any advancement of more general mechanisms for degree expressions can easily
be contrasted with other authors’ research on adjectives, whereas degree expressions in general
have not been considered in any approach towards deep understanding of language.
A well-accepted and approved classification of adjectives is important for distinguishing minor
and major groups, and metonymous collections of adjectives play a fair role in describing the
adjective classes. As for multiple word senses: even disregarding metonymous and figurative
interpretations, many relative adjectives still cannot be attributed a single meaning. A common
example of this observation is the dichotomy between “physical - mental” and “physical -
logical”. Depending on the context “physical” is put into, the appropriate conceptual scale must be
chosen. Word sense disambiguation, like hypallage, deserves far more attention and
technological development.
3. PROPOSED WORK
3.1 Probabilistic Grading
Degree expressions, gradability in comparisons, sub-categorization, ontological significance,
metonymy, word senses and sense disambiguation are some of the concrete, qualified
methods for grading text. These methods are employed with a priori information about the
characteristics of the text corpus in experiments that determine the knowledge. Grading text or
knowledge to some degree by means of comparisons and probabilistic distinctions is the
quintessential research area that IE practitioners have been working around and eagerly
awaiting. Current models for this problem have been studied mostly from a linguistic
perspective and less so from one of real-world text understanding. Semantic significance is
brought into the research picture only to a small extent, which promises the development of rich
cohesive frameworks as future work. The large number of implausible readings in the literature
that deal with realistic outputs has a negative impact on natural language analysts.
Syntax-oriented approaches methodically fail to account for interpretations that depend entirely
on semantic or conceptual criteria.
In the present work, probabilistic grading employs statistical and probabilistic methods for grading
knowledge. Several existing works measure correlation to express the relationship between two
or more variables, and canonical correlation is an additional procedure for assessing such
relationships. Because canonical correlation has computational issues that limit its use for
determining degree by comparison when grading text, subjective probability is applied instead for
the successive discovery of grades for text and knowledge in a large corpus. The proposed
method is feasible to implement in theories of discourse and corpus, which endorses quality
results in grading.
3.2 Dempster-Shafer Algorithm
Dempster-Shafer evidence theory together with subjective probability provides a useful
computational scheme for integrating uncertainty information from multiple sources [4]. The
algorithm starts with the frame of discernment of the problem domain, here the set of possible
grades of a text. Let U be the set of mutually exclusive and exhaustive hypotheses, i.e., all the
possible grades a text may receive. The degree of evidence is used to determine the degree of
the text. In general, evidence is calculated for each individual hypothesis, but in this context the
evidence determines the grade probabilistically for the valued text: the more value the text has,
the better it is graded, and the evidence holds the fact of a good grade for the text. The evidence
is observed using a function m(x) that provides the following basic probability assignments on U.
Given a certain piece of evidence about the value of the text, the belief that one is willing to
commit exactly to A for the specified value of the text is represented by m(A), for any A ⊆ U. A
subset A of the frame U is called a focal element of m if m(A) > 0. That is, the grade of a
particular text lies within the scope of a set of specified values.
m(∅) = 0
Σ_{A ⊆ U} m(A) = 1
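The two conditions on m can be verified mechanically. The following sketch, with a hypothetical four-grade frame U and invented masses, checks that a mass function is a valid basic probability assignment:

```python
def is_bpa(m, frame, tol=1e-9):
    """Check that m: 2^U -> [0, 1] assigns no mass to the empty set,
    stays within the frame, and that its masses sum to 1."""
    if m.get(frozenset(), 0.0) != 0.0:
        return False  # m(empty set) must be 0
    if any(not (b <= frame) or not (0.0 <= w <= 1.0) for b, w in m.items()):
        return False  # focal elements must be subsets with masses in [0, 1]
    return abs(sum(m.values()) - 1.0) < tol

U = frozenset({"poor", "fair", "good", "excellent"})
m = {
    frozenset({"good", "excellent"}): 0.7,  # evidence for a high grade
    U: 0.3,                                 # remaining mass: ignorance
}
print(is_bpa(m, U))  # True
```

Note that the mass 0.3 assigned to U itself represents ignorance rather than belief in any particular grade, which is exactly what distinguishes a basic probability assignment from an ordinary probability distribution.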
The DS theory starts from the assumption of a Universe of Discourse θ, also called a Frame of
Discernment. This is a set of mutually exclusive alternatives. In the current case study of
determining the grade of a text, θ would be the set of all possible grades.
Elements of 2^θ, i.e., subsets of θ, are the class of general propositions in the domain. For
example, the proposition “the grade is highly starred” corresponds to the set of elements of θ
which are high-graded stars.
A function m: 2^θ → [0, 1] is called a basic probability assignment if it satisfies m(∅) = 0 and
Σ_{A ⊆ θ} m(A) = 1
The quantity m(A) is defined as A’s basic probability number. It represents the strength of some
evidence; our exact belief in the proposition represented by A.
A function Bel: 2^θ → [0, 1] is called a belief function if it satisfies Bel(∅) = 0, Bel(θ) = 1, and,
for any collection A_1, ..., A_n of subsets of θ,
Bel(A_1 ∪ ... ∪ A_n) ≥ Σ_{I ⊆ {1..n}, I ≠ ∅} (−1)^{|I|+1} Bel(∩_{i ∈ I} A_i)
A belief function assigns to each subset of θ a measure of our total belief in the proposition
represented by the subset.
There corresponds to each belief function one and only one basic probability assignment.
Conversely, there corresponds to each basic probability assignment one and only one belief
function. They are related by the following two formulae:
Bel(A) = Σ_{B ⊆ A} m(B), for all A ⊆ θ
m(A) = Σ_{B ⊆ A} (−1)^{|A − B|} Bel(B)
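The one-to-one correspondence between a belief function and its basic probability assignment can be illustrated by computing Bel from m and then recovering m with the Möbius-inversion formula; the two-element frame below is an invented example:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as frozensets (the power set 2^s)."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def bel_from_m(m, a):
    """Bel(A) = sum over B subset of A of m(B)."""
    return sum(w for b, w in m.items() if b <= a)

def m_from_bel(bel, frame):
    """Moebius inversion: m(A) = sum over B subset of A of
    (-1)^|A - B| * Bel(B). Zero-mass sets are dropped."""
    m = {}
    for a in subsets(frame):
        total = sum((-1) ** len(a - b) * bel[b] for b in subsets(a))
        if abs(total) > 1e-12:
            m[a] = total
    return m

frame = frozenset({"x", "y"})
m = {frozenset({"x"}): 0.6, frame: 0.4}
bel = {a: bel_from_m(m, a) for a in subsets(frame)}
recovered = m_from_bel(bel, frame)
print(recovered)  # masses match the original m
```

The round trip recovers the original masses exactly (up to floating-point tolerance), confirming that a belief function and a basic probability assignment convey the same information.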
Thus a belief function and a basic probability assignment convey exactly the same information.
Corresponding to each belief function are three other commonly used quantities that convey the
same information:
A function Q: 2^θ → [0, 1] is called a commonality function if there is a basic probability
assignment m such that
Q(A) = Σ_{B: A ⊆ B} m(B), for all A ⊆ θ
The doubt function is given by
Dou(A) = Bel(¬A)
And the upper probability function is given by
P*(A) = 1 − Dou(A)
This expresses how much we should believe in A if all currently unknown facts were to support A.
Thus the true belief in A will lie somewhere in the interval [Bel(A), P*(A)].
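The doubt and upper-probability functions, and the resulting belief interval, can be computed directly from a mass function. In this sketch the three-grade frame and the masses are invented for illustration:

```python
def belief(m, a):
    """Bel(A): total mass of focal elements contained in A."""
    return sum(w for b, w in m.items() if b <= a)

def doubt(m, a, frame):
    """Dou(A) = Bel(~A): belief committed to the complement of A."""
    return belief(m, frame - a)

def upper(m, a, frame):
    """P*(A) = 1 - Dou(A): the upper probability of A."""
    return 1.0 - doubt(m, a, frame)

frame = frozenset({"low", "mid", "high"})
m = {
    frozenset({"high"}): 0.5,
    frozenset({"mid", "high"}): 0.25,
    frame: 0.25,
}
a = frozenset({"high"})
# The true belief in "high" lies in the interval [Bel(A), P*(A)].
print(belief(m, a), upper(m, a, frame))  # 0.5 1.0
```

Here no mass supports the complement {low, mid}, so Dou(A) = 0 and P*(A) = 1: the evidence definitely committed to "high" is 0.5, while nothing currently known rules it out.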
3.3 Probabilistic Comprehensive Grading
It ought to be noted that probabilistic approaches face a paradox in process modeling and data
comprehension: this is simultaneously one of the oldest and one of the newest research areas in
linguistics, where much of the earlier research was already statistical and probabilistic in nature
[1][2]. In fact, probabilistic comprehension and modeling draws on early Bayesian precursors.
Probability theory is certainly the best normative model for solving problems of decision making
under uncertainty [1]. But perhaps it is a good normative model and a bad descriptive one.
Despite the fact that probability theory was originally invented as a cognitive model of human
reasoning under uncertainty, perhaps people do not use probabilistic reasoning in cognitive tasks
like language production and comprehension.
Probabilistic modeling provides evidence for knowledge comprehension as well as for knowledge
grading. Since these models are not strictly descriptive, a fuzzy set of graded values is attributed
to the knowledge for subjective grading.
Since one of the oldest and most robust effects in linguistic texts is the word frequency effect,
word frequency plays an important role in the auditory, comprehension, production and visual
modalities [2]. Since variations of word frequency induce noise, grading knowledge that includes
the word frequency effect requires probabilistic comprehension to determine the appropriate
importance of the knowledge. Grammatical subjects have more impact on frequency
identification. In one research experiment, “when subjects made recognition errors, they
responded with words that were higher in frequency than the words that were presented”.
Another experiment used the “gating paradigm”, in which subjects hear iteratively more and more
of the waveform of a spoken word, to show that high-frequency words are recognized earlier (i.e.,
given less of the speech waveform) than low-frequency words [2].
Using the Dempster-Shafer algorithm, it is perfectly possible to perform an experiment in
subjective grading of knowledge. The components involved in the experiment are scenario,
evidence, hypothesis and their relationships. Various probabilistic factors are assigned to the
attributes that belong to the knowledge. Under different hypotheses various grades are attributed
to the knowledge; in a selected scenario and an instance of a hypothesis the knowledge is
graded with a select set of qualities. While working with the Dempster-Shafer Engine, it was
found that a graphical assessment of grading knowledge is possible. In a corpus of linguistic text,
a select set of nuggets can be combined with scenario, evidence and hypotheses to determine
the suitable grade.
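The scenario/evidence/hypothesis setup described above can be sketched end to end as a toy illustration (not the Dempster-Shafer Engine itself; the grade labels, evidence sources and masses are all invented): each evidence source contributes a mass function over the grade hypotheses, the masses are combined by Dempster's rule, and the highest-mass focal element is read off as the grade.

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions."""
    out, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            if a & b:
                out[a & b] = out.get(a & b, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass lost to the empty set
    return {s: w / (1.0 - conflict) for s, w in out.items()}

grades = frozenset({"poor", "average", "good"})
# Evidence 1: degree expressions in the text suggest a high grade.
e1 = {frozenset({"good"}): 0.6, grades: 0.4}
# Evidence 2: the word-frequency profile is consistent with
# "average" or "good", but cannot tell them apart.
e2 = {frozenset({"average", "good"}): 0.5, grades: 0.5}

m = combine(e1, e2)
best = max(m, key=m.get)  # focal element with the largest mass
print(best)  # highest mass falls on {'good'}
```

Even though neither piece of evidence is conclusive on its own, their combination concentrates most of the mass on the "good" hypothesis, which is how a subjective probabilistic grade emerges from multiple weak items of evidence.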
4. CONCLUSIONS
In this paper the work is aimed at a comparison of probabilistic approaches. Probability theory
works on crisp results, whereas grading requires a scalable demarcation to qualify the knowledge
units. The scalable demarcative units are described as evidences, and various hypotheses are
prepared. The Dempster-Shafer algorithm, using the Dempster-Shafer Engine (DSE), has
become practicable for implementing the concept of evaluating the grade of knowledge units
using subjective probability.
5. REFERENCES
[1] Rieko Matsuda, Yuzuru Hayashi, Chikako Yomota, Yoko Tagashira, Mari Katsumine and
Kazuo Iwaki, "Statistical and probabilistic approaches to confidence intervals of linear
calibration in liquid chromatography", The Analyst, www.rsc.org/analyst, 2001.
[2] Dan Jurafsky, "Probabilistic Modeling in Psycholinguistics: Linguistic Comprehension and
Production", in Probabilistic Linguistics, edited by Rens Bod, Jennifer Hay, and
Stefanie Jannedy, MIT Press, 2002.
[3] Steffen Staab, "Grading Knowledge: Extracting Degree Information from Texts",
ISBN 3-540-66934-5, Springer-Verlag Berlin Heidelberg, 1999.
[4] Nic Wilson, "Algorithms for Dempster-Shafer Theory", School of Computing and
Mathematical Sciences, Oxford Brookes University.