International Journal of Research in Advent Technology (IJRAT),
VOLUME-7 ISSUE-11, NOVEMBER 2019,
ISSN: 2321-9637 (Online),
Published By: MG Aricent Pvt Ltd
HIGH-LEVEL SEMANTICS OF IMAGES IN WEB DOCUMENTS USING WEIGHTED TAGS AND STRENGTH MATRIX (IJCSEA Journal)
The retrieval of multimedia information from the World Wide Web is a challenging issue. Describing multimedia objects in general, and images in particular, with low-level features widens the semantic gap. From the WWW, the textual keywords present in an HTML document can be extracted to capture semantic information with a view to narrowing the semantic gap. The high-level textual information about images can be extracted and associated with these textual keywords, which narrows the search space and improves retrieval precision. In this paper, a strength matrix is proposed, based on the frequency of occurrence of keywords and the textual information pertaining to image URLs. The strength of these textual keywords is estimated and used for associating the keywords with the images present in the documents. The high-level semantics of an image, described in the HTML document in the form of the image name, ALT tag, optional description, etc., is used for estimating the strength. In addition, a word position and weighting mechanism is used to further improve the association of textual keywords with the image-related text. The retrieval effectiveness of the proposed technique is found to be comparatively better than many recently proposed retrieval techniques. The experimental results of the proposed method endorse the fact that image retrieval using image information and textual keywords outperforms both the text-based and the content-based approaches.
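As a rough illustration of the frequency-and-tag-weighting idea described in this abstract, the sketch below scores page keywords against an image's textual fields; the tag weights and field names are assumptions chosen for illustration, not the paper's actual values.

```python
# A minimal sketch (not the authors' implementation) of estimating an
# association strength between page keywords and one image, assuming
# hypothetical per-tag weights and a simple frequency-based score.
from collections import Counter
import re

TAG_WEIGHTS = {"alt": 3.0, "img_name": 2.5, "caption": 2.0, "title": 1.5, "body": 1.0}  # assumed weights

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def strength(keywords, image_fields):
    """image_fields: dict mapping a tag name (e.g. 'alt') to its text (hypothetical input)."""
    scores = Counter()
    for tag, text in image_fields.items():
        weight = TAG_WEIGHTS.get(tag, 1.0)
        for tok in tokenize(text):
            if tok in keywords:
                scores[tok] += weight   # keyword occurrence counted with the tag's weight
    return scores

page_keywords = {"tiger", "wildlife", "forest"}
image = {"img_name": "tiger_in_forest.jpg", "alt": "A tiger resting in the forest"}
print(strength(page_keywords, image))  # Counter({'tiger': 5.5, 'forest': 5.5})
```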
IRJET- PDF Extraction using Data Mining Techniques (IRJET Journal)
This document discusses techniques for extracting information from PDF documents using data mining. It presents a proposed system that would allow users to upload a PDF file and receive a summarized output of the most important information from the file. The system is intended to reduce the time needed to understand large documents by automatically identifying and presenting the key points. The conclusion states that the proposed web application would implement text summarization using clustering and diversity-based methods to generate a summary preserving the overall meaning while removing redundancy.
A BOOTSTRAPPING METHOD FOR AUTOMATIC CONSTRUCTING OF THE WEB ONTOLOGY INSTANC... (IJwest)
With the phenomenal growth of Web resources, constructing ontologies from existing structured resources on the Web has received increasing attention. Previous studies on constructing ontologies from the Web have not carefully considered all the semantic features of Web documents, so it is difficult to correctly construct ontology elements from Web documents, whose number increases daily. Machine learning methods play an important role in the automatic construction of Web ontologies. Bootstrapping is a semi-supervised learning technique that can automatically generate many terms from a few seed terms entered by a human. This paper proposes a bootstrapping method that can automatically construct instances and datatype properties of a Web ontology, taking proper nouns as the semantic core elements of Web tables. Experimental results show that the proposed method can rapidly and effectively construct instances and their properties for the Web ontology.
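A minimal sketch of the bootstrapping loop, under assumed input structures: seed proper nouns label Web-table rows, and new candidate instances are harvested from rows that already contain known instances. This is only an illustration of the iteration, not the paper's algorithm.

```python
# Bootstrapping sketch over hypothetical table rows (lists of cell strings).
def bootstrap_instances(rows, seeds, rounds=3, min_support=2):
    known = set(seeds)
    for _ in range(rounds):
        candidates = {}
        for row in rows:
            if any(cell in known for cell in row):          # row contains a known instance
                for cell in row:
                    if cell not in known:
                        candidates[cell] = candidates.get(cell, 0) + 1
        new = {c for c, n in candidates.items() if n >= min_support}
        if not new:
            break
        known |= new                                         # accept new instances, repeat
    return known

rows = [["Seoul", "South Korea"], ["Tokyo", "Japan"], ["Seoul", "9.7M"], ["Tokyo", "13.9M"]]
print(bootstrap_instances(rows, seeds={"Seoul"}, min_support=1))  # e.g. {'Seoul', 'South Korea', '9.7M'}
```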
An Extensible Web Mining Framework for Real Knowledge (IJEACS)
With the emergence of Web 2.0 applications that offer a rich user experience and convenience without time and geographical restrictions, web usage logs have become a goldmine for researchers across the globe. User behavior analysis in different domains based on web logs helps enterprises make strategic decisions. The business growth of enterprises depends on customer-centric approaches that require knowledge of customer behavior to succeed; the rationale is that customers have alternatives and competition is intense. Therefore the business community needs business intelligence to make expert decisions, besides focusing on customer relationship management. Many researchers have contributed towards this end. However, there is still a need for a comprehensive framework that helps businesses ascertain the real needs of web users. This paper presents a framework named the eXtensible Web Usage Mining Framework (XWUMF) for discovering actionable knowledge from web log data. The framework employs a hybrid approach that exploits fuzzy clustering methods and methods for user behavior analysis. Moreover, the framework is extensible, as it can accommodate new algorithms for fuzzy clustering and user behavior analysis. We propose an algorithm known as the Sequential Web Usage Miner (SWUM) for efficient mining of web usage patterns from different data sets. We built a prototype application to validate our framework. Our empirical results revealed that the framework helps in discovering actionable knowledge.
This document provides an overview of opinion mining techniques. It discusses how opinion mining is used to analyze opinions expressed on the internet through blogs, social media, and other user-generated content. The document reviews several existing opinion mining techniques and models. It also categorizes opinion mining based on whether opinions are regular, comparative, direct, or indirect. Finally, it discusses how opinion mining analyzes opinions based on features or attributes of entities to draw conclusions.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Text mining has become one of the trending fields and has been incorporated into several research areas, for example computational linguistics, Information Retrieval (IR) and data mining. Natural Language Processing (NLP) methods are used to extract knowledge from text written by people. Text mining reads unstructured data to deliver meaningful information patterns in the shortest possible time. Social networking sites are a major source of communication, as most people today use them in their daily lives to stay connected with each other. It has become common practice not to write sentences with correct grammar and spelling. This practice may lead to various kinds of ambiguities, such as lexical, syntactic and semantic ambiguity, and because of such unclear data it is difficult to discover the real information pattern. Accordingly, we conduct a study with the aim of surveying different text mining techniques for extracting textual patterns from social media sites. This review aims to describe how studies of social media have used text analysis and text mining techniques to identify the key topics in the data. The study concentrates on text mining work related to Facebook and Twitter, the two dominant social networks in the world. The results of this survey can serve as baselines for future text mining research.
The document discusses text classification and summarization techniques for complex domain-specific documents like research papers. It reviews various preprocessing approaches like stopword removal, lemmatizing, tokenization, and stemming. It also compares different machine learning algorithms for text classification, including Naive Bayes, decision trees, SVM, KNN, and neural networks. The document surveys works analyzing domain-specific documents using these techniques, such as biomedical document relation extraction and research paper topic classification.
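As a compact illustration of the preprocessing-plus-classifier pipelines such surveys compare, the following sketch uses scikit-learn (assumed available) with a tiny made-up corpus; the documents and labels are illustrative only, not from the surveyed works.

```python
# Minimal text-classification pipeline sketch: tokenization, stopword removal
# and tf-idf weighting feed a Naive Bayes classifier.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["gene expression in tumor cells", "protein binding sites in DNA",
        "convolutional networks for image recognition", "transformer models for translation"]
labels = ["biomedical", "biomedical", "machine-learning", "machine-learning"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english", lowercase=True)),  # preprocessing + weighting
    ("nb", MultinomialNB()),                                           # Naive Bayes classifier
])
clf.fit(docs, labels)
print(clf.predict(["recurrent networks for speech recognition"]))  # likely ['machine-learning'] on this toy data
```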
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Abstract: In today's competitive world, paperless work is gaining utmost importance, and the role of Web Based Systems in making this happen is incomparable. Sectors like banking and retail have fully moved to Web Based Systems, and the education sector is not far behind. Almost all universities and institutions provide their own web portal for notifications related to seminars, workshops, examinations and results. In this article we consider the web portal of BPUT, Odisha, at the URL http://results.bput.ac.in. More specifically, we focus on the way results are published and displayed. In this portal, results are in some cases displayed in an unorganized manner over multiple pages. On this unorganized data we apply the concepts of Web Content Mining and provide a Web Content Mining tool that gives organized access to the above-mentioned web content. Keywords: Web Based System, Web Content, Web Content Mining, Web Content Mining Tool, Organized view of unorganized Web Content
This document summarizes a research paper on opinion mining from Twitter data. It discusses the challenges of sentiment analysis on short Twitter posts, including named entity recognition, anaphora resolution, parsing, and detecting sarcasm. It also reviews several papers on related topics, such as frameworks for Twitter opinion mining using classification techniques, using Twitter as a corpus for sentiment analysis, and analyzing opinions during the 2012 Korean presidential election on Twitter. Overall, it covers key techniques in opinion mining like identifying opinion targets and orientation. It proposes future work to develop a web application to compare Twitter opinion mining performance and use supervised learning to improve accuracy.
Multi-objective NSGA-II based community detection using dynamical evolution s... (IJECEIAES)
Community detection is becoming a highly demanded topic in social networking-based applications. It involves finding the sub-graphs with maximum intra-connection and minimum inter-connection in a given social network. Many approaches have been developed for community detection, but few of them have focused on the dynamical aspect of the social network. The community decision has to take the pattern of changes in the social network into account and be smooth enough to enable stable operation of other community-detection-dependent applications. Unlike existing dynamical community detection algorithms, this article presents a non-domination-aware searching algorithm designated non-dominated sorting based community detection with dynamical awareness (NDS-CD-DA). The algorithm uses the non-dominated sorting genetic algorithm NSGA-II with two objectives: modularity and normalized mutual information (NMI). Experimental results on synthetic networks and real-world social network datasets were compared with a classical genetic algorithm using a single objective, and show superiority in terms of both domination and convergence. NDS-CD-DA accomplished a domination percentage of 100% over dynamic evolutionary community searching (DECS) for almost all iterations.
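The two objectives named in this abstract, modularity and normalized mutual information, can be sketched for a candidate partition as below, using networkx and scikit-learn (assumed available); the NSGA-II search itself is not reproduced, and the partition and "previous snapshot" labels are illustrative assumptions.

```python
# Evaluate a candidate community partition on the two objectives only.
import networkx as nx
from networkx.algorithms.community import modularity
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()
partition = [set(range(0, 17)), set(range(17, 34))]          # candidate communities (illustrative)
previous_labels = [0 if n < 17 else 1 for n in G.nodes]      # labels from an assumed previous snapshot
current_labels = [0 if n < 17 else 1 for n in G.nodes]       # labels induced by the candidate partition

print("modularity:", modularity(G, partition))
print("NMI vs previous partition:", normalized_mutual_info_score(previous_labels, current_labels))
```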
This document reviews research on predicting personality from Twitter users' tweets using machine learning algorithms. It discusses how tweets have attracted research interest from diverse fields. Different techniques have been used to predict personality from tweets, but there are still shortcomings to address. The aim is to consider the current state of this research area and explore personality prediction from tweets by reviewing past literature and discussing approaches to issues researchers face. It provides an overview of machine learning methods used for personality prediction from tweets, including data collection, preprocessing, model training and evaluation.
The document summarizes a research paper on DBLP Search Support Engine (SSE), a system that aims to provide intelligent and personalized search beyond traditional search engines. It extracts users' research interests based on publication frequency and recency using interest retention models. The system represents users and their interests using RDF and provides additional functionalities like query refinement, domain analysis and tracking based on users' interests. Future work includes improving the interest prediction model and providing a unified architecture for different system functions.
Cluster Based Web Search Using Support Vector Machine (CSCJournals)
Nowadays, searches for the web pages of a person with a given name constitute a notable fraction of queries to Web search engines. This method exploits a variety of semantic information extracted from web pages. The rapid growth of the Internet has made the Web a popular place for collecting information; today, Internet users access billions of web pages online using search engines. Information on the Web comes from many sources, including the websites of companies, organizations and communities, as well as personal homepages. Effective representation of Web search results remains an open problem in the Information Retrieval community. For ambiguous queries, a traditional approach is to organize search results into groups (clusters), one for each meaning of the query. These groups are usually constructed according to the topical similarity of the retrieved documents, but it is possible for documents to be totally dissimilar and still correspond to the same meaning of the query. To overcome this problem, the observation that relevant Web pages are often located close to each other in the Web graph of hyperlinks is exploited. The paper presents a graphical approach to entity resolution that complements the traditional methodology with an analysis of the entity-relationship (ER) graph constructed for the dataset being analyzed. It also demonstrates a technique that measures the degree of interconnectedness between various pairs of nodes in the graph, which can significantly improve the quality of entity resolution. Support vector machines (SVMs), a set of related supervised learning methods, are used to classify the load of user queries from the server machine to different client machines so that the system remains stable; web pages are clustered based on machine capacities, while the whole database is stored on the server machine. Keywords: SVM, cluster, ER.
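A minimal sketch of the SVM-based routing idea in this abstract, classifying incoming queries so their load can be directed to different client machines; the query texts and machine labels are illustrative assumptions, not the paper's data.

```python
# Route queries to client machines by classifying them with a linear SVM over tf-idf features.
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

queries = ["john smith homepage", "acme corp annual report",
           "jane doe publications", "acme corp stock price"]
target_machine = ["person-cluster", "company-cluster", "person-cluster", "company-cluster"]

router = Pipeline([("tfidf", TfidfVectorizer()), ("svm", LinearSVC())])
router.fit(queries, target_machine)
print(router.predict(["john smith contact page"]))  # likely ['person-cluster'] on this toy data
```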
This tutorial, offered at the 10th International Conference on Web Engineering, presents the peculiarities of advanced Web search applications, describes some tools and techniques that can be exploited, and offers a methodological approach to development. The approach proposed in this tutorial is based on the paradigm of Model Driven Development (MDD), where models are the core artifacts of the application life-cycle and model transformations progressively refine models to achieve an executable version of the system. To cope with the process-intensive nature of the main interactions (i.e., content analysis, query management, etc.), we describe the use of Process Models (e.g., BPMN models). Indeed, search-based applications are considered as process- and content-intensive applications, due to the trends towards exploratory search and search as a process visions.
IRJET- A Literature Review and Classification of Semantic Web Approaches for ... (IRJET Journal)
This document discusses using semantic web approaches for web personalization. It begins with an abstract that outlines how web personalization can help address the problem of information overload by recommending and filtering web pages according to a user's interests. The document then reviews related work on using ontologies and semantic web technologies for personalized e-learning, recommender systems, and other applications. It categorizes different semantic web approaches that have been used for web personalization, including their pros and cons. The overall purpose is to survey semantic web techniques for personalization and how they have been applied in previous research.
Re-Mining Association Mining Results Through Visualization, Data Envelopment Analysis, and Decision Trees (ertekg)
Download link > https://ertekprojects.com/gurdal-ertek-publications/blog/re-mining-association-mining-results-through-visualization-data-envelopment-analysis-and-decision-trees/
Re-mining is a general framework which suggests the execution of additional data mining steps based on the results of an original data mining process. This study investigates the multi-faceted re-mining of association mining results, develops and presents a practical methodology, and shows the applicability of the developed methodology through real world data. The methodology suggests re-mining using data visualization, data envelopment analysis, and decision trees. Six hypotheses, regarding how re-mining can be carried out on association mining results, are answered in the case study through empirical analysis.
Survey of Machine Learning Techniques in Textual Document Classification (IOSR Journals)
Classification of text documents involves associating one or more predefined categories with a document based on the likelihood expressed by a training set of labeled documents. Many machine learning algorithms play an important role in training the system on predefined categories. The importance of the machine learning approach motivated this study of text document classification based on the available statistical event models. The aim of this paper is to present the important techniques and methodologies employed for text document classification, while at the same time raising awareness of some of the interesting challenges that remain to be solved, focused mainly on text representation and machine learning techniques.
IRJET- Sentimental Analysis of Twitter Data for Job Opportunities (IRJET Journal)
This document discusses sentiment analysis of tweets related to job opportunities. It begins with an introduction to sentiment analysis and its applications. It then discusses how Twitter is a rich source of data for sentiment analysis due to the large number of daily posts, but that analyzing sentiment in tweets is challenging due to their short length and use of abbreviations. The document then outlines the design and implementation of the sentiment analysis, which involves downloading tweets and sentiment dictionaries, cleaning the tweet data by removing stop words and tokenizing, comparing words to dictionaries to determine sentiment scores, and classifying tweets as positive, negative or neutral based on the scores.
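The dictionary-based scoring step described above can be sketched roughly as follows; the word lists are tiny illustrative stand-ins for the sentiment dictionaries the paper downloads.

```python
# Clean a tweet, drop stop words, look tokens up in positive/negative lists, classify by score.
import re

POSITIVE = {"hiring", "opportunity", "great", "growth"}      # illustrative dictionary
NEGATIVE = {"layoff", "rejected", "freeze", "scam"}          # illustrative dictionary
STOP_WORDS = {"a", "an", "the", "is", "for", "in", "at"}

def classify_tweet(tweet):
    tokens = [t for t in re.findall(r"[a-z]+", tweet.lower()) if t not in STOP_WORDS]
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify_tweet("Great opportunity: Acme is hiring data engineers!"))  # positive
print(classify_tweet("Another hiring freeze announced at Acme"))            # neutral (hiring vs freeze)
```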
This document summarizes research on ontology-based web personalization. It discusses how web personalization aims to personalize content based on a user's navigational behavior. Ontology-based approaches use formal domain knowledge to build more accurate user profiles than traditional web mining methods alone. The document surveys recent works applying ontologies to areas like user modeling, recommendation systems, and information retrieval. It also outlines challenges in developing personalized systems, such as building accurate user profiles and addressing privacy and scalability issues. Future work opportunities include better integrating ontology and web mining techniques to improve personalization over time as a user's interests evolve.
A survey on ontology based web personalization (eSAT Journals)
Abstract: Over the last decade the data on the World Wide Web has been growing exponentially. According to Google, the data is growing at a rate of a billion pages per day [24]. The Internet has around 2 million users accessing the World Wide Web for various information [25]. These numbers raise a severe concern about information overload for users. Many researchers have been working to overcome this challenge with web personalization, and many are looking at ontology based web personalization as an answer to information overload, since each individual is unique. In this paper we present an overview of ontology based web personalization, its challenges, and a survey of the work. The paper also points to future work in web personalization. Index Terms: Web Personalization, Ontology, User modeling, Web usage mining.
This document discusses parsing HTML documents to extract data from websites. It proposes an automated system to parse HTML pages from the SEC website and extract specific data fields, like company financial information, to insert into databases of financial companies. The system will use Java parser libraries to identify patterns in SEC forms, including data in plain text and tables. It analyzes sample SEC forms to understand the structure and focus on extracting data from table sections.
MULTI-DOCUMENT SUMMARIZATION SYSTEM: USING FUZZY LOGIC AND GENETIC ALGORITHM (IAEME Publication)
In recent times, the generation of multi-document summaries has gained a lot of attention among researchers. Mostly, text summarization techniques use sentence extraction, where the salient sentences in the multiple documents are extracted and presented as a summary. In our proposed system, we have developed a sentence-extraction-based automatic multi-document summarization system that employs fuzzy logic and a Genetic Algorithm (GA). At first, different features are used to identify the significance of sentences, such that each sentence in the documents is assigned a feature score.
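A rough sketch of the sentence feature-scoring stage: each sentence receives a few normalized feature values that are combined with weights which, in the paper, the GA and fuzzy system would tune; the features and weights below are fixed, illustrative choices.

```python
# Score a sentence from simple normalized features combined with (assumed) weights.
import re

def features(sentence, position, n_sentences, title_words, max_len):
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return {
        "title_overlap": len(words & title_words) / max(len(title_words), 1),
        "position": 1.0 - position / max(n_sentences - 1, 1),   # earlier sentences score higher
        "length": len(words) / max_len,
    }

WEIGHTS = {"title_overlap": 0.5, "position": 0.3, "length": 0.2}  # illustrative; GA-tuned in the paper

def score(sentence, position, n_sentences, title_words, max_len):
    f = features(sentence, position, n_sentences, title_words, max_len)
    return sum(WEIGHTS[k] * f[k] for k in WEIGHTS)

title_words = {"multi", "document", "summarization"}
s = "Multi-document summarization extracts salient sentences."
print(score(s, position=0, n_sentences=10, title_words=title_words, max_len=20))
```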
Query Sensitive Comparative Summarization of Search Results Using Concept Bas... (CSEIJJournal)
Query sensitive summarization aims at providing users with a summary of the contents of single or multiple web pages based on the search query. This paper proposes a novel idea of generating a comparative summary from a set of URLs in the search result. The user selects a set of web page links from the search result produced by the search engine, and a comparative summary of these selected web sites is generated. The method makes use of the HTML DOM tree structure of the web pages. The HTML documents are segmented into sets of concept blocks, and the sentence score of each concept block is computed with respect to the query and feature keywords. The important sentences from the concept blocks of the different web pages are extracted to compose the comparative summary on the fly. This system reduces the time and effort required for the user to browse various web sites to compare information, and the comparative summary of the contents helps users in quick decision making.
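A simplified sketch of the scoring idea: segment an HTML page into blocks via its DOM and score each block's sentences against the query keywords. BeautifulSoup is assumed available, and the block segmentation here is far cruder than the paper's concept-block construction.

```python
# Score sentences inside DOM blocks by overlap with the query keywords.
import re
from bs4 import BeautifulSoup

def scored_sentences(html, query_keywords):
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for block in soup.find_all(["p", "li", "td"]):                 # crude DOM-based blocks
        for sentence in re.split(r"(?<=[.!?])\s+", block.get_text(" ", strip=True)):
            words = set(re.findall(r"[a-z]+", sentence.lower()))
            overlap = len(words & query_keywords)
            if overlap:
                results.append((overlap, sentence))
    return sorted(results, reverse=True)

html = "<p>The camera has a 24MP sensor. Battery life is 10 hours.</p><p>Shipping is free.</p>"
print(scored_sentences(html, {"camera", "battery", "sensor"}))
```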
IRJET- Multi-Document Summarization using Fuzzy and Hierarchical Approach (IRJET Journal)
This document discusses multi-document summarization using fuzzy and hierarchical approaches. It begins with an abstract describing multi-document summarization as extracting important information from multiple source documents to create a short summary. The introduction discusses the need for efficient multi-document summarization due to the large amount of online information. It then reviews related literature on multi-document summarization techniques including neuro-fuzzy approaches and modified K-nearest neighbor algorithms. Finally, it describes the proposed methodology which uses statistical approaches like similarity measures, page rank and expectation maximization to cluster sentences and extract a summary from the clustered sentences.
Design of optimal search engine using text summarization through artificial intelligence (TELKOMNIKA Journal)
Natural language processing is a trending topic in current research, enabling developers to create human-computer interactions. Natural language processing is an integration of artificial intelligence, computer science and computational linguistics. Research in natural language processing focuses on creating devices or machines that operate on a single human command. It enables various bot creations that pass instructions from mobile devices to control physical devices through speech tagging. In our paper, we design a search engine that not only displays data matching the user query but also provides a detailed view of the content or topic the user is interested in, using the summarization concept. We find that the designed search engine has an optimal response time for user queries when analyzed with varying numbers of transactions as inputs. The results of the performance analysis also show that the text summarization method is an efficient way of improving response time in search engine optimization.
Applying Clustering Techniques for Efficient Text Mining in Twitter Data (ijbuiiir1)
Knowledge is the ultimate output of decisions on a dataset. The revolution of the Internet has brought the world closer, at a touch on hand-held electronic devices, and usage of social media sites has increased over the past decades. One of the most popular social media microblogs is Twitter, which has millions of users worldwide. In this paper the analysis of Twitter data is performed on the text contained in hashtags. After preprocessing, clustering algorithms are applied to the text data. The different clusters formed are compared through various parameters, and visualization techniques are used to portray the results, from which inferences like time series and topic flow can easily be made. The observed results show that the hierarchical clustering algorithm performs better than the other algorithms.
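A small sketch of the kind of comparison reported here: TF-IDF vectors of tweet text fed to hierarchical (agglomerative) clustering via scikit-learn (assumed available); the tweets are illustrative, not the paper's dataset.

```python
# Cluster tweet text with agglomerative (hierarchical) clustering over tf-idf features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

tweets = ["#worldcup amazing final tonight", "#worldcup penalty shootout drama",
          "#election results coming in", "#election exit polls released"]

X = TfidfVectorizer(stop_words="english").fit_transform(tweets).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # e.g. [0 0 1 1] -- tweets grouped by topic
```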
This document discusses an attempt to create an extractive automatic text summarizer. It splits document paragraphs into sentences and ranks the sentences based on summarization features, with higher ranked sentences considered more important for generating the summary. The proposed system uses the TextRank algorithm to rank sentences based on graph-based features. The paper presents the TextRank approach and compares the proposed system to existing MS Word summarization methods. Evaluation measures are also described to assess the performance of the summarizer.
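A minimal TextRank-style sketch, assuming networkx is available: sentences become graph nodes, word overlap forms weighted edges, and PageRank ranks the sentences. Real systems use a better similarity measure than raw overlap.

```python
# Rank sentences with PageRank over a word-overlap similarity graph.
import re
import networkx as nx

def summarize(text, n=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [set(re.findall(r"[a-z]+", s.lower())) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            overlap = len(words[i] & words[j])
            if overlap:
                graph.add_edge(i, j, weight=overlap)
    ranks = nx.pagerank(graph, weight="weight")
    top = sorted(ranks, key=ranks.get, reverse=True)[:n]
    return " ".join(sentences[i] for i in sorted(top))   # keep original sentence order

text = ("Web mining extracts knowledge from web data. Text summarization condenses documents. "
        "Summarization of web documents combines both ideas.")
print(summarize(text, n=1))
```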
A Multimodal Approach to Incremental User Profile Building (dannyijwest)
In navigational applications, radar and satellites require a device called a radar altimeter. The working frequency of this system is 4.2 to 4.3 GHz, and it also requires low weight, a low profile and high-gain antennas. The above-mentioned application is possible with the microstrip antenna, also known as a planar antenna. In this paper, microstrip antennas are designed at 4.3 GHz (C-band) as rectangular and circular patch antennas, in single elements and arrays, with parasitic elements placed in H-plane coupling. The performance of all these shapes is analyzed in terms of radiation pattern, half-power points, gain and impedance bandwidth in MATLAB. The work is extended with designs in different shapes such as rhombic, pentagon, octagon and 12-edge. These parameters are further simulated in the ANSOFT HFSS V9.0 simulator.
A Newly Proposed Technique for Summarizing the Abstractive Newspapers’ Articles (mlaij)
In this new era, where tremendous information is available on the internet, it is most important to provide improved mechanisms to extract information quickly and efficiently. It is very difficult for human beings to manually extract summaries of large text documents, so there is a problem of searching for relevant documents among those available and of absorbing relevant information from them. To solve these two problems, automatic text summarization is necessary. Text summarization is the process of identifying the most important meaningful information in a document or set of related documents and compressing it into a shorter version while preserving its overall meaning. More specifically, Abstractive Text Summarization (ATS) is the task of constructing summary sentences by merging facts from different source sentences and condensing them into a shorter representation while preserving information content and overall meaning. This paper introduces a newly proposed technique for summarizing abstractive newspaper articles based on deep learning.
The document discusses a framework for web information retrieval using automatic multi-document summarization. It proposes using multi-level document summarization to enhance the effectiveness of web information retrieval by supporting indexing and ranking of retrieved documents with an intelligent decision making system based on fuzzy inference rules. The paper tests the approach on CACM test data and finds that information retrieval results can be improved after performing a multi-document summarization process.
IRJET- Automatic Recapitulation of Text DocumentIRJET Journal
The document describes an approach for automatically generating summaries of text documents. It involves preprocessing the input text by tokenizing, removing stop words, and stemming words. It then extracts features from the preprocessed text, such as term frequency-inverse document frequency (tf-idf) values. Sentences are scored based on these features and sentences with higher scores are selected to form the summary. Keywords from the text are also identified using WordNet to help select the most relevant sentences for the summary. The proposed approach aims to generate concise yet meaningful summaries using natural language processing techniques.
A Comparative Study of Automatic Text Summarization MethodologiesIRJET Journal
The document presents a comparative study of different automatic text summarization methodologies. It provides an overview of various extractive and abstractive text summarization approaches that have been proposed in previous research works. The document also discusses key parameters for classifying and comparing different summarization methods, such as the type of input documents, domain, technique used, and evaluation metrics. It reviews literature on summarization applications in different domains like news, books, legal articles and sports.
Research Report on Document Indexing - Nithish Kumar
Research on document indexing in search engines. The main theme of information retrieval is to return the exact response to a user's specific query.
Information search and retrieval is a large process; to realise it, an effective application must be developed using techniques such as document indexing, page ranking, and clustering. Among these, document indexing plays a vital role in searching: instead of scanning hundreds of thousands of documents, the search goes directly to the relevant index entry and returns the output from there. The focus here is indexing; the clear meaning of indexing is that storing an index optimizes the speed and performance of finding the appropriate document for the user's query.
The conclusion is that a context-based index approach, built mainly from the source documents, is used for query retrieval. Looking up the index is better than searching every page on the server; it saves time and reduces the load on the server.
An in-depth review on News Classification through NLPIRJET Journal
This document provides an in-depth literature review of news classification through natural language processing (NLP). It discusses several existing approaches to news classification, including models that use convolutional neural networks (CNNs), graph-based approaches, and attention mechanisms. The document also notes that current search engines often return too many irrelevant results, so classification could help layer search results. It concludes that while many techniques have been developed, inconsistencies remain in effectively classifying news, so further research on combining NLP, feature extraction, and fuzzy logic is needed.
IRJET- Concept Extraction from Ambiguous Text Document using K-MeansIRJET Journal
This document discusses using a K-means clustering algorithm to extract concepts from ambiguous text documents. It involves preprocessing the text by tokenizing, removing stop words, and stemming words. The words are then represented as vectors and dimensionality reduction using PCA is applied. Finally, K-means clustering is used to group similar words into clusters to identify the overall concepts in the document without reading the entire text. The aim is to help users understand the key topics in a document in a time-efficient manner without having to read the full text.
Data mining is knowledge discovery in databases, and its goal is to extract patterns and knowledge from large amounts of data. An important branch of data mining is text mining, which extracts high-quality information from text. Statistical pattern learning is used to obtain this high-quality information, where high quality refers to a combination of relevance, novelty, and interestingness. Tasks in text mining include text categorization, text clustering, entity extraction, and sentiment analysis. Applications of natural language processing and analytical methods are highly preferred to turn
This document presents a new algorithm for extracting and summarizing news from online newspapers. The algorithm first extracts news related to the topic using keyword matching. It then distinguishes different types of news about the same topic. A term frequency-based summarization method is used to generate summaries. Sentences are scored based on term frequency and the highest scoring sentences are selected for the summary. The algorithm was evaluated on news datasets from various newspapers and showed good performance in intrinsic evaluation metrics like precision, recall and F-score. Thus, the proposed method can effectively extract and summarize online news for a given keyword or topic.
An Effective Approach for Online News Extraction and Summarization for a Single Phase
Senjuthi Bhattacharjee, Asma Joshita Trisha, and Sabrina Akter
International Journal of Research in Advent Technology, Vol. 7, No. 11, November 2019, E-ISSN: 2321-9637, doi: 10.32622/ijrat.710201947. Available online at www.ijrat.org. Manuscript revised November 23, 2019 and published on December 03, 2019.
Abstract— Online newspapers play an important role in the development of the world, but they consist of several types of labels, titles, and links. As online news sites are collections of a variety of newspapers, it is often much more difficult to extract and summarize the news. To improve accuracy, a new algorithm based on web extraction and summarization is introduced here. First, the news items related to the topic are extracted from the newspapers. If different types of news are found about the same topic, they are distinguished. Then a summarization-based algorithm is proposed to summarize the news. Term frequency is used for summarization, and the output is evaluated against the contents of several newspapers. Various word forms, such as nouns, adjectives, and adverbs, are also compared so that the term frequency can be counted more accurately. This will be very helpful for a user who wants to find very specific news in the newspapers.
Keywords— Extraction, online news, precision, sentence scoring, summary, term frequency.
I. INTRODUCTION
Information retrieval is the term that denotes the extraction of relevant information from various documents, and it can be done in different ways. Web data extraction is one of them. The data contained in websites (newspapers) is increasing exponentially, but much of this information cannot be used by other applications. As most web data will eventually be in XML format, this will solve the problem in the future; for now, however, that is not the case, and information on the web has to be retrieved efficiently. So a new information retrieval technique emerged: the extraction of web data. It is a process through which data can be extracted from the web without loss of information. Web data is in a semi-structured format, so to extract data from the web it is necessary to analyze each word and tag found in the particular website.
The present usability of online news largely depends on news summarization. Web document summarization is used to tailor the content of web documents to specific displays, for purposes ranging from snippet generation by search engines to accessibility (e.g., for blind people). For automatic summarization, the plain text of the document is used.
In an HTML document there are many elements, such as pictures, which cannot be summarized, and it is difficult to distinguish the relevant information among many news items. In recent years, many applications have been introduced that work particularly with the content of an HTML document. Here the context of the document is used, where information is retrieved from all the documents linking to it.
Online news summarization is a technique that searches newspapers for a specific query and returns a compact summary representing the main content of a given newspaper. The main purpose here is to generate a summary that is as good as one written by a person.
The textual snippet is the most widespread form of search-based summarization (Zhanying He et al., 2013). When a user submits a query, the web search engine returns references to a sequence of top-k documents, each with a title, a snippet, and a URL. When there is little time for browsing a site, the web summary helps the user get an idea of the content of the page. It extracts the more significant sentences from a web page and generates a summary for the user. The web includes different kinds of information, such as text, images, video, and audio, so it is necessary to extract relevant results. A good web page summary must be a clear and simple guide to what is on the page.
There are two types of summary: an abstract and an extract. When the summary consists of remarkable text units selected from the input, it is called an extract summary (N. Moratanch and S. Chitrakala, 2017). An abstract is a brief summary of a definite subject, generated by computing the noticeable units selected from an input; text units that are not present in the input text can also be included in an abstract summary (N. Moratanch and S. Chitrakala, 2016).
II. RELATED WORKS
A well-known method is the centroid-based method (Xindong Wu et al., 2011), in which the TF-IDF feature is used for calculating the sentence score. The score is calculated for each single feature, and the scores are then combined for the whole sentence. To extract and summarize online newspapers for a single phrase, the news must first be categorized; summarization of the particular portion is then done. There is an approach that uses a Conditional Random Fields (CRF) based
framework to treat the summarization task as a sequence labeling problem (Dou Shen et al., 2007). The sentences with the highest scores are extracted in extraction-based summarization (Xiaojun Wan et al., 2007). Some approaches mainly combine several sentence features (Minqing Hu and Bing Liu, 2006). Nowadays there are various extraction-based approaches for web classification/categorization (Ioannis Antonellis et al., 2006) and summarization (Furu Wei et al., 2008). Sentence redundancy is a big obstacle for summary sentences; to remove redundancy between summary sentences, the MMR algorithm (Mohammad Al Hasan, 2009) is another popular approach. Frequent Pattern Mining (FPM) algorithms (Mohammad Al Hasan, 2009) are also used to compute complex features such as sets, sequences, trees, graphs, etc. However, the large output set size causes a lack of interpretability, which is why the potential of this approach is low compared with other algorithms.
An online newspaper generally contains a variety of information centered around a main title. To get the summarized news for a single phrase, section-based categorization (Giuseppe Attardi et al., 1999) is more workable than other approaches. For filtering the news from the various news items, the K-nearest neighbour algorithm can be used, and for summarizing the news, pattern mining or term frequency can be used.
III. PROPOSED METHOD
Online newspapers contain various types of news and show the details of each item. Nowadays readers do not have the time to read all the news; they want to save time. In this project the user only enters a keyword, and the news related to that keyword is extracted. The user also obtains a compact summary that covers all the newspapers, and can look up previous news as well.
A. Architecture of proposed method
The methodology or architecture of the proposed method
is discussed below:
Fig. 1. Architecture of proposed method
B. Step by step description of proposed method
This section gives an analytical description of the system architecture given in the previous part.
B.1. Initialization and Connection
In the initialization and connection module, the web pages of each website are first stored in separate files. Each of these pages is then connected using its URL. A table is created in the news database for each website, with the fields news no, name, date, headline, and description.
B.2. News Extraction
The most important part of this method is news extraction (Y. Sankarasubramaniam et al., 2014). First, the input newspapers are taken, and then the keywords are given as input. The keywords are matched against the database contents for extraction; after matching, the news is extracted, so every news item is separated topic-wise. For a single domain or phrase, different news items can be gathered: first the news of the same domain is collected, and then the news is divided into different parts. For cricket, for example, much news is found, such as T-20, One Day, and Test matches. Here the desired news for a particular phrase can also be found. A divide and conquer approach with similar text matching is followed for the extraction of news; the two components are described below, followed by a small illustrative sketch.
• Divide and Conquer
In computer science, divide and conquer (D&C) is a method in which the whole problem is divided into several sub-problems, and the sub-solutions are then combined to obtain the solution of the original problem.
• Similar text matching
In this method, the query string is split, using a frequency parameter, into a low-frequency group and a high-frequency group. The low-frequency group contains the more important terms that carry the bulk of the query, while the high-frequency group contains the less important terms, which are used only for scoring, not for matching.
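As a rough sketch of this extraction step (not the authors' exact implementation; the frequency threshold, field names, and helper function are assumptions), the query terms can be split by document frequency and matched against the stored news items:

    from collections import Counter

    def extract_news(query, news_items, freq_threshold=5):
        # Document frequency of every term over the stored news items.
        df = Counter()
        for item in news_items:
            df.update(set(item["description"].lower().split()))

        terms = query.lower().split()
        low_freq = [t for t in terms if df[t] < freq_threshold]    # used for matching
        high_freq = [t for t in terms if df[t] >= freq_threshold]  # used only for scoring

        results = []
        for item in news_items:
            words = item["description"].lower().split()
            if all(t in words for t in low_freq):                  # match on the important terms
                score = sum(words.count(t) for t in low_freq + high_freq)
                results.append((score, item))
        results.sort(key=lambda pair: pair[0], reverse=True)
        return [item for _, item in results]

    news = [{"headline": "BGMEA opens new office",
             "description": "BGMEA opened a new office for exporters in Dhaka"}]
    print(extract_news("BGMEA office", news))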
B.3. News Summarization
Another important part of this method is summarization (J. Goldstein et al., 1999). Here, the extracted news about the input phrase is summarized. First, at least two extracted news items on the related phrase are taken from several newspapers. Every sentence of these items is then checked and compared; in the case of similar sentences, only one instance is taken from the two items, so sentences are not repeated. The news is then summarized. The process next checks whether there is any further extracted news to summarize. If "Yes", the new item and the summary of the previous news are summarized together and the process continues; if "No", the desired output summary has been obtained. Summarization is done using term frequency, with some conditions applied to the method.
• Term Frequency
Term frequency is a numerical measure of the importance of a word to a document in a collection or corpus (Xindong Wu et al., 2011). It is mainly used in information retrieval and text mining. The value of term frequency increases proportionally with the number of times a word appears in the document, and it mainly helps to control for common words.
• Process of summarization using Term Frequency
Steps of Summarization:
Step 1: Take the input Bangla documents as a text file.
Step 2: Tokenize the sentences of the input documents and remove punctuation characters, single words, and digits from the original Bangla text.
Step 3: Replace each word with a common synonym for counting keyword frequency.
Step 4: Sort the total term frequency (TTF) in descending order.
Step 5: Compute the score $SC_{kj}$ of the $k$th sentence of the $j$th document by summing up $TTF_i$ over the $m$ words in that sentence, weighted by the keyword's position $n$ in the sorted TTF list (so that keywords ranked higher receive a larger multiplier, cf. Section IV-D):

$$SC_{kj} = \sum_{i=1}^{m} (T - n + 1) \cdot TTF_i$$

Step 6: Sort all sentences by score in decreasing order and take only the high-scoring sentences, which represent the most important sentences in the given documents.
A small runnable sketch of this procedure is given below.
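The following is a minimal sketch of Steps 1-6 in Python, assuming English text and simple regex tokenization, and omitting the synonym replacement of Step 3; it is illustrative only, not the authors' implementation.

    import re
    from collections import Counter

    def summarize(documents, num_sentences=10):
        # Steps 1-2: split each document into sentences and tokenize,
        # dropping punctuation, digits and single-character tokens.
        sentences = []
        for doc in documents:
            sentences.extend(s.strip() for s in re.split(r'[.!?]\s+', doc) if s.strip())

        def tokens(text):
            return [w.lower() for w in re.findall(r'[A-Za-z]{2,}', text)]

        # Total term frequency over the concatenated documents.
        ttf = Counter()
        for s in sentences:
            ttf.update(tokens(s))

        # Step 4: rank keywords by TTF; rank n = 1 for the most frequent word.
        ranked = [w for w, _ in ttf.most_common()]
        rank = {w: n for n, w in enumerate(ranked, start=1)}
        T = len(ranked)

        # Step 5: positional weight (T - n + 1) * TTF_i summed over the words
        # of each sentence.
        def score(sentence):
            return sum((T - rank[w] + 1) * ttf[w] for w in tokens(sentence))

        # Step 6: keep the highest-scoring sentences, in their original order.
        top = sorted(sentences, key=score, reverse=True)[:num_sentences]
        return [s for s in sentences if s in top]

    if __name__ == "__main__":
        docs = ["BGMEA opened a new office. The office will serve exporters. "
                "Exporters welcomed the BGMEA decision.",
                "BGMEA leaders met exporters to discuss the new office."]
        print(summarize(docs, num_sentences=2))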
IV. RESULT
The main goal of this system is to develop an automatic news extractor and summarizer (Vishal Gupta and Gurpreet Singh, 2013). In this section the complete implementation process is explained, together with a brief description of the experimental tools.
A. Tools used for Development
The tools used to develop this method are:
✓ Windows 7 Operating System
✓ Xampp
B. User Interface
The interface lets the user open the home page, which has three sections. The first section shows all the news stored in the database, the second is search, and the last is summary.
Fig. 2. Home page
Here, Fig. 3 shows that if the user clicks All News, they see all the news stored in the database for a particular date.
Fig. 3. Output of all news
Fig. 4 shows that if the user searches for a keyword for particular news, they get that news if it is available in the database; otherwise "no found" is shown.
Fig. 4. Output of search news
Fig. 5 shows that if the user wants to summarize the news for a particular topic, or for every topic, they can do so using this option.
Fig. 5. Options of summary
Here, Fig. 6 shows the desired summary for the user, which gives a brief version of the related news.
Fig. 6. Output of summary
C. Experiment Setup
The system retrieves news items from "The Daily Sun", "Bangladesh Independence", and "Prothom Alo" (English version) in November. This section contains some experimental results obtained during the experiment. In the following example, a user wants to know about BGMEA, so the system extracts the news related to BGMEA.
Fig. 7. How to search a keyword
This is the extraction part of the experiment. If the user wants the summary, they can obtain it.
News:
Fig. 8. Input news
D. Term Frequency & Total Term Frequency Count
The most frequent words in the text are the keywords. How many times a word appears in the text is counted by the term frequency. The documents are then concatenated into one cluster to get the total term frequency, which is calculated by summing up the term frequency of each word over every document. Sentences containing the keywords score higher than those with fewer keywords. To distinguish the importance of keywords, the keywords positioned higher in the sorted total term frequency list are given a larger multiplier. Table I shows the calculation of the occurrence of the keywords.
TABLE I. TERM FREQUENCY OF WORDS
E. Sentence Score Generation
Scoring is used to decide the significance of each sentence in the documents. Here at most ten sentences are collected for the initial summarized content. The sentence score relies on the word score, which is the total term frequency; the final sentence score is the summation of the total term frequencies of the words in the sentence.
• Score of Sentence 1:
32+15+45+80+1+8+28+4+18+45= 276
• Score of Sentence 2:
32+45+36+80+80= 273
• Score of Sentence 3:
8 + 28 + 15 = 51
• Score of Sentence 4:
28 + 80 = 108
Summary:
Fig. 9. Obtained summary
In this summary, it can be observed that the most important sentences obtained the highest scores. The table is given below.
TABLE II. SCORE OF SENTENCES IN SUMMARY
F. Performance Comparison of the System
To evaluate the system, 7 news sets from different newspapers were gathered. Summarization evaluation methods can be divided into two categories: intrinsic and extrinsic (Inderjeet Mani and Mark T. Maybury, 1999).
✓ Intrinsic evaluation measures the quality of the summaries directly (e.g., by comparing them to ideal summaries).
✓ Extrinsic evaluation measures how well the summaries help in performing a particular task.
The system has been evaluated by computing the intrinsic measures: precision, recall, F-score, and document similarity (a small sketch of how these measures can be computed follows Fig. 10).
TABLE III. INTRINSIC PERFORMANCE ANALYSIS
Fig. 10. Intrinsic Performance Analysis Graph
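The paper does not give the exact formulas used; as an illustration only, assuming sentence-level overlap between the system summary and a reference (ideal) summary, the three intrinsic measures could be computed as follows.

    def intrinsic_scores(system_sentences, reference_sentences):
        # Precision, recall and F-score based on sentence overlap between a
        # system summary and a reference ("ideal") summary.
        system = set(system_sentences)
        reference = set(reference_sentences)
        overlap = len(system & reference)
        precision = overlap / len(system) if system else 0.0
        recall = overlap / len(reference) if reference else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Example: 2 of the 3 system sentences also appear in the reference summary.
    p, r, f1 = intrinsic_scores(["s1", "s2", "s4"], ["s1", "s2", "s3"])
    print(f"precision={p:.2f} recall={r:.2f} f-score={f1:.2f}")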
V. CONCLUSION
In this paper, a method has been proposed to extract and summarize online (English) newspapers using basic statistical and data mining approaches. The challenges taken on here are saving time and resolving relevancy, and the extractive summarization is done more easily and concisely. This work will narrow down the search space for researchers and thereby save time by providing summaries of various news. Moreover, as the methodology followed in this approach is generic, it can in future be extended to newspapers in other languages. In this work, only online newspapers have been considered, each as an isolated document.
VI. REFERENCES
[1] Zhanying He, Chun Chen, Jiajun Bu, Can Wang and Lijun Zhang, "Document summarization based on data reconstruction," Zhejiang Provincial Key Laboratory of Service Robot, College of Computer Science, 2013.
[2] N. Moratanch and S. Chitrakala, "A Survey on Extractive Text Summarization," IEEE International Conference on Computer, Communication, and Signal Processing (ICCCSP-2017).
[3] N. Moratanch and S. Chitrakala, "A survey on abstractive text summarization," International Conference on Circuit, Power and Computing Technologies (ICCPCT), IEEE, 2016, pp. 1-7.
[4] Xindong Wu, Fei Xie, Gongqing Wu and Wei Ding, "Personalized News Filtering and Summarization on the Web," IEEE 23rd International Conference on Tools with Artificial Intelligence, 2011.
[5] Dou Shen, Jian-Tao Sun, Hua Li, Qiang Yang and Zheng Chen, "Document summarization using conditional random fields," in Proceedings of IJCAI-07.
[6] Xiaojun Wan, Jianwu Yang and Jianguo Xiao, "Manifold-Ranking Based Topic-Focused Multi-Document Summarization," IJCAI 7 (2007), pp. 2903-2908, 2007.
[7] Minqing Hu and Bing Liu, "Opinion Extraction and Summarization on the Web," Department of Computer Science, University of Illinois at Chicago, 851 South Morgan Street, Chicago, IL 60607-7053, 2006.
[8] Ioannis Antonellis, Christos Bouras and Vassilis Poulopoulos, "Personalized News Categorization Through Scalable Text Classification," Research Academic Computer Technology Institute, N. Kazantzaki, University Campus, GR-26500 Patras, Greece, and Computer Engineering and Informatics Department, University of Patras, GR-26500 Patras, Greece, 2006.
[9] Furu Wei, Wenjie Li, Qin Lu and Yanxiang He, "Query-sensitive mutual reinforcement chain and its application in query-oriented multi-document summarization," in Proceedings of SIGIR-08.
[10] Mohammad Al Hasan, "Summarization in Pattern Mining," Encyclopedia of Data Warehousing and Mining, Second Edition, pp. 1877-1883, 2009.
[11] Giuseppe Attardi, Antonio Gullì and Fabrizio Sebastiani, "Automatic Web Page Categorization by Link and Context Analysis," Dipartimento di Informatica, Università di Pisa, Pisa, Italy, 1999.
[12] Y. Sankarasubramaniam, K. Ramanathan and S. Ghosh, "Text summarization using Wikipedia," Information Processing & Management, vol. 50, no. 3, pp. 443-461, 2014.
[13] J. Goldstein, M. Kantrowitz, V. Mittal and J. Carbonell, "Summarizing Text Documents: Sentence Selection and Evaluation Metrics," in Proceedings of ACM SIGIR-99.
[14] Vishal Gupta and Gurpreet Singh Lehal, "Automatic Text Summarization System for Punjabi Language," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 3, pp. 257-271, 2013.
[15] Inderjeet Mani and Mark T. Maybury, "Advances in Automatic Text Summarization," 1999.
AUTHORS PROFILE
Senjuthi Bhattacharjee, B.Sc. in Computer Science & Engineering, Chittagong University of Engineering & Technology, Chattogram, Bangladesh. Lecturer, Dept. of Computer Science & Engineering, Premier University, Chattogram, Bangladesh (from January 2016 to present).
Asma Joshita Trisha, B.Sc. and M.Sc. in Computer Science & Engineering, University of Chittagong, Chattogram, Bangladesh. Lecturer, Dept. of Computer Science & Engineering, Premier University, Chattogram, Bangladesh (from January 2016 to present).
Sabrina Akter, B.Sc. in Computer Science & Engineering, Chittagong University of Engineering & Technology, Chattogram, Bangladesh.