The Application Programming Interface restricts the types of queries that a Web service can answer. For instance, a Web service might provide a method that quickly returns the books of a given author, but not a method that returns the authors of a given book. If a user asks for the author of a specific book, the Web service cannot be called, even though the underlying database might hold the desired piece of information. This scenario is called asymmetry. Asymmetry is particularly problematic if the service is used in a Web service orchestration system. In this paper, we propose to use on-the-fly information extraction (IE): IE is used to collect candidate values, which are then used as parameter bindings for the Web service. The paper shows how information extraction can be integrated into a Web service orchestration system. The proposed approach is fully implemented in a prototype called Search Using Services and Information Extraction (SUSIE). Real-life data and services are used to demonstrate the practical viability and good performance of the approach.
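To make the idea concrete, here is a minimal sketch of such an on-the-fly IE loop, under stated assumptions: the function names (get_books_by_author, extract_candidate_authors) and the regex-based extractor are illustrative placeholders, not the SUSIE API or its actual extraction logic.

```python
import re

def extract_candidate_authors(snippets):
    """Naive IE step: pull 'by <Name>' phrases from text snippets
    (e.g. search-engine results for the book title). A real extractor
    would use richer patterns or a trained NER model."""
    candidates = set()
    for text in snippets:
        for match in re.finditer(r"by ([A-Z][a-z]+(?: [A-Z][a-z]+)+)", text):
            candidates.add(match.group(1))
    return candidates

def authors_of(book_title, snippets, get_books_by_author):
    """Answer the 'inverse' query using only the forward service method:
    extract candidate authors, bind each candidate as a service parameter,
    and keep those whose returned book list contains the title."""
    answers = []
    for author in extract_candidate_authors(snippets):
        if book_title in get_books_by_author(author):   # forward service call
            answers.append(author)
    return answers

# Usage with a toy in-memory 'service':
catalog = {"Jane Austen": ["Pride and Prejudice", "Emma"]}
service = lambda a: catalog.get(a, [])
snips = ["Pride and Prejudice, a novel by Jane Austen, was published in 1813."]
print(authors_of("Pride and Prejudice", snips, service))  # ['Jane Austen']
```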
Implementation of Enhancing Information Retrieval Using Integration of Invisib... (iosrjce)
The majority of computer and mobile phone enthusiasts use the Web for search activity. Web search engines are used for searching; the results that a search engine returns are supplied to it by a software module known as the Web crawler. The size of the Web is increasing round the clock, and the principal problem is to search this huge database for specific information. Deciding whether a web page is relevant to a search topic is a dilemma. This paper proposes a crawler, called the "PDD crawler", which follows both a link-based and a content-based approach. The crawler follows a new crawling strategy to compute the relevance of a page: it analyses the content of the page based on the information contained in various tags within the HTML source code and then computes the total weight of the page. The page with the highest weight thus has the most content and the highest relevance.
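A minimal sketch of tag-weighted relevance scoring in the spirit of the approach above. The tag weights and the scoring rule are illustrative assumptions; the paper's actual weighting scheme is not given here.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Assumed weights: content in more prominent tags counts more toward page weight.
TAG_WEIGHTS = {"title": 5.0, "h1": 4.0, "h2": 3.0, "a": 2.0, "p": 1.0}

def page_weight(html, query_terms):
    """Sum, over tags, weight * number of query-term occurrences in that tag."""
    soup = BeautifulSoup(html, "html.parser")
    terms = [t.lower() for t in query_terms]
    score = 0.0
    for tag, weight in TAG_WEIGHTS.items():
        for element in soup.find_all(tag):
            text = element.get_text(separator=" ").lower()
            score += weight * sum(text.count(term) for term in terms)
    return score

html = ("<html><head><title>Deep web crawling</title></head>"
        "<body><h1>Crawling the deep web</h1><p>A crawler visits pages...</p></body></html>")
print(page_weight(html, ["crawler", "web"]))
```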
User Navigation Pattern Prediction from Web Log Data: A Survey (IJMER)
This paper presents a survey of Web page prediction techniques. Prefetching of Web pages has been widely used to reduce the access latency experienced by Web users. However, if prefetching is not accurate and the prefetched pages are not visited by users in their subsequent accesses, the limited network bandwidth and server resources are used inefficiently and access delays may still occur. An effective prediction method is therefore critical during prefetching. Markov models have been widely used to predict and analyse users' navigational behaviour. The activities of web users are recorded in web log files, and the stored user sessions are used to extract popular navigation paths and to predict the current user's next page visit.
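As a sketch of the basic idea surveyed here, the following builds a first-order Markov model from logged sessions and predicts the most likely next page; the surveyed papers use higher-order and more elaborate variants, and the session data below is made up.

```python
from collections import defaultdict

def train(sessions):
    """Count page-to-page transitions and normalize them into P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for current_page, next_page in zip(session, session[1:]):
            counts[current_page][next_page] += 1
    return {p: {q: c / sum(nxt.values()) for q, c in nxt.items()}
            for p, nxt in counts.items()}

def predict_next(model, current_page):
    nxt = model.get(current_page)
    return max(nxt, key=nxt.get) if nxt else None

sessions = [["/home", "/products", "/cart"],
            ["/home", "/products", "/product/42"],
            ["/products", "/cart"],
            ["/home", "/about"]]
model = train(sessions)
print(predict_next(model, "/products"))   # '/cart'
```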
WEB SEARCH ENGINE BASED SEMANTIC SIMILARITY MEASURE BETWEEN WORDS USING PATTE... (cscpconf)
Semantic similarity measures play an important role in information retrieval, natural language processing, and various tasks on the web such as relation extraction, community mining, document clustering, and automatic meta-data extraction. In this paper, we propose a Pattern Retrieval Algorithm (PRA) to compute the semantic similarity between words by combining a page count method and a web snippets method. Four association measures are used to compute semantic similarity between words from page counts returned by web search engines. We use Sequential Minimal Optimization (SMO) support vector machines (SVM) to find the optimal combination of page-count-based similarity scores and top-ranking patterns from the web snippets method. The SVM is trained to classify synonymous and non-synonymous word pairs. The proposed approach aims to improve correlation, precision, recall, and F-measure compared with existing methods, and the proposed algorithm outperforms them with a correlation value of 89.8%.
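As an illustration of the page-count side of such an approach, the association measures commonly used in this line of work (WebJaccard, WebOverlap, WebDice, WebPMI) can be computed from three hit counts. The abstract does not name its four measures, so this sketch is an assumption based on related literature, and N is an assumed index size.

```python
import math

def association_scores(hits_p, hits_q, hits_pq, N=1e10):
    """Similarity scores for words P and Q from page counts:
    hits_p = count for 'P', hits_q = count for 'Q', hits_pq = count for 'P AND Q'."""
    if hits_pq == 0:
        return dict.fromkeys(["jaccard", "overlap", "dice", "pmi"], 0.0)
    return {
        "jaccard": hits_pq / (hits_p + hits_q - hits_pq),
        "overlap": hits_pq / min(hits_p, hits_q),
        "dice":    2 * hits_pq / (hits_p + hits_q),
        "pmi":     math.log2((hits_pq / N) / ((hits_p / N) * (hits_q / N))),
    }

print(association_scores(hits_p=4_000_000, hits_q=2_500_000, hits_pq=300_000))
```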
Optimization of Search Results with Duplicate Page Elimination using Usage Data (IDES Editor)
The performance and scalability of search engines are greatly affected by the enormous amount of duplicate data on the World Wide Web. Flooded search results containing a large number of identical or near-identical web pages affect search efficiency and the time users need to find the desired information within the results. When navigating through the results, the only information users leave behind is the trace of the pages they accessed. This data is recorded in query log files and is usually referred to as Web usage data. In this paper, a novel technique for optimizing search efficiency by removing duplicate data from search results is proposed, which utilizes the usage data stored in the query logs. Duplicate data detection is performed by the proposed Duplicate Data Detection (D3) algorithm, which works offline on the basis of favoured user queries found by pre-mining the logs with query clustering. The proposed result optimization technique is expected to enhance search engine efficiency and effectiveness to a large extent.
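For illustration, near-identical result pages can be flagged with word shingles and Jaccard similarity, as sketched below. This is a generic technique, not the paper's D3 algorithm, whose specific steps (query clustering, favoured queries, offline processing of logs) are not reproduced here.

```python
def shingles(text, k=3):
    """Set of k-word shingles of a page's text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def deduplicate(pages, threshold=0.85):
    """Keep a page only if it is not a near-duplicate of an already kept page."""
    kept = []
    for page in pages:
        sig = shingles(page)
        if all(jaccard(sig, shingles(other)) < threshold for other in kept):
            kept.append(page)
    return kept

results = ["cheap flights to paris book now best fares",
           "cheap flights to paris book now best fares today",
           "hotel deals in rome city centre"]
print(len(deduplicate(results)))  # 2 -- the two near-identical pages collapse to one
```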
Computing semantic similarity measure between words using web search engine (csandit)
Semantic similarity measures between words play an important role in information retrieval, natural language processing, and various tasks on the web. In this paper, we propose a Modified Pattern Extraction Algorithm to compute a supervised semantic similarity measure between words by combining a page count method and a web snippets method. Four association measures are used to compute semantic similarity between words from page counts returned by web search engines. We use Sequential Minimal Optimization (SMO) support vector machines (SVM) to find the optimal combination of page-count-based similarity scores and top-ranking patterns from the web snippets method. The SVM is trained to classify synonymous and non-synonymous word pairs. The proposed Modified Pattern Extraction Algorithm outperforms existing methods with a correlation value of 89.8 percent.
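The supervised combination step can be sketched as below: an SVM is trained on feature vectors that concatenate page-count similarity scores with pattern-based features and classifies word pairs as synonymous or not. The feature layout, values, and labels are made-up placeholders, and scikit-learn's SVC (an SMO-style solver) stands in for the SMO SVM mentioned in the abstract.

```python
import numpy as np
from sklearn.svm import SVC

# Each row: [jaccard, overlap, dice, pmi, pattern_1, pattern_2]  (illustrative)
X_train = np.array([
    [0.62, 0.70, 0.66, 5.1, 0.8, 0.6],   # synonymous pair
    [0.55, 0.64, 0.59, 4.7, 0.7, 0.5],   # synonymous pair
    [0.04, 0.09, 0.06, 0.9, 0.1, 0.0],   # non-synonymous pair
    [0.02, 0.05, 0.03, 0.4, 0.0, 0.1],   # non-synonymous pair
])
y_train = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf").fit(X_train, y_train)
new_pair = np.array([[0.50, 0.61, 0.55, 4.2, 0.6, 0.4]])
print(clf.predict(new_pair))            # expected: [1], i.e. synonymous
print(clf.decision_function(new_pair))  # signed margin of the decision
```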
Sub-Graph Finding Information over Nebula Networks (ijceronline)
Social and information networks have been studied extensively over the years. This paper studies a new type of query: sub-graph search over heterogeneous networks. Given an uncertain network of N objects, the underlying problem is to discover the top-k sub-graphs of entities with rare and surprising associations; the query returns k objects whose matching sub-graphs can be computed efficiently. Answering such "Nebula computing requests" involves computing all matching sub-graphs and ranking the results by the rarity and interestingness of the associations among the entities in the sub-graphs. In evaluating top-k selection queries, we compute an information nebula using a global structural-context similarity; our similarity measure is independent of connection sub-graphs. Previous work on the matching problem can be harnessed for a naive rank-after-matching strategy, but for large graphs the sub-graphs may have an enormous number of matches, so searching directly for the optimal selection set is expensive. In this paper, we identify several important properties of top-k selection queries and propose novel top-k mechanisms that exploit indexes to answer interesting sub-graph queries efficiently.
Cluster Based Web Search Using Support Vector Machine (CSCJournals)
Nowadays, searches for the web pages of a person with a given name constitute a notable fraction of queries to Web search engines. The method presented here exploits a variety of semantic information extracted from web pages. The rapid growth of the Internet has made the Web a popular place for collecting information, and Internet users today access billions of web pages online using search engines. Information on the Web comes from many sources, including websites of companies, organizations, and communities, as well as personal homepages. Effective presentation of Web search results remains an open problem in the Information Retrieval community. For ambiguous queries, a traditional approach is to organize search results into groups (clusters), one for each meaning of the query. These groups are usually constructed according to the topical similarity of the retrieved documents, but documents can be totally dissimilar and still correspond to the same meaning of the query. To overcome this problem, we exploit the observation that relevant Web pages are often located close to each other in the Web graph of hyperlinks. The paper presents a graphical approach to entity resolution that complements the traditional methodology with an analysis of the entity-relationship (ER) graph constructed for the dataset being analysed. It also demonstrates a technique that measures the degree of interconnectedness between pairs of nodes in the graph, which can significantly improve the quality of entity resolution. Support vector machines (SVMs), a family of supervised learning methods, are used to classify and distribute the load of user queries from the server to different client machines so that the system remains stable; web pages are clustered based on machine capacities, while the whole database is stored on the server machine. Keywords: SVM, cluster, ER.
Existing work on keyword search relies on an element-level model (data graphs) to compute keyword query results: elements mentioning the keywords are retrieved from this model, and paths between them are explored to compute Steiner graphs. A Keyword Element Relationship Graph (KRG) instead captures relationships at the keyword level; the relationships captured by a KRG are not direct edges between tuples but stand for paths between keywords. We propose to route keywords only to relevant sources in order to reduce the high cost of processing keyword search queries over all sources. A multilevel scoring mechanism is proposed for computing the relevance of routing plans based on scores at the level of keywords, data elements, element sets, and the subgraphs that connect these elements.
A Survey on: Utilizing of Different Features in Web Behavior Prediction (Editor IJMTER)
As the number of web users increases day by day, many websites have a large number of visitors at the same instant, and handling these users requires different techniques. One emerging field addressing this requirement is next-page prediction, where different features are studied according to the user's navigation pattern in order to predict the next page for the user, thereby reducing the overall web server response time. This paper presents a detailed study of work by different researchers, their techniques and outcomes, and the features they utilize, such as web structure, web logs, and web content.
On the surface a packet is a chunk of information, but at a deeper level a packet is one unit of binary data capable of being transferred through a network. Delivering data packets in a reliable and timely manner is difficult in highly dynamic mobile ad hoc networks. Driven by this issue, an efficient Position-based Opportunistic Routing (POR) protocol is presented which takes advantage of the stateless property of geographic routing. In proactive routing protocols, the route discovery and recovery procedures are time- and energy-consuming; once a path breaks, data packets are lost or delayed for a long time until the route is reconstructed, causing transmission interruption. Geographic routing (GR), in contrast, uses location information to forward data packets in a hop-by-hop fashion. Greedy forwarding is used to select the next-hop forwarder with the largest positive progress toward the destination, while a void-handling mechanism is triggered to route around communication voids. No end-to-end route needs to be maintained, which gives GR its high efficiency and scalability. In greedy forwarding, the neighbour that is relatively far from the sender is chosen as the next hop; if that node moves out of the sender's coverage area, the transmission fails. In GPSR, a well-known geographic routing protocol, MAC-layer failure feedback is used to offer the packet another chance to be rerouted.
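A minimal sketch of the greedy-forwarding step described above: among the sender's neighbours, choose the one with the largest positive progress toward the destination, and fall back to void handling when no neighbour makes progress. Node positions and identifiers are illustrative.

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(sender, destination, neighbours):
    """neighbours: dict of node_id -> (x, y). Returns the next-hop id,
    or None when no neighbour is closer to the destination (a 'void')."""
    d_sender = distance(sender, destination)
    best, best_progress = None, 0.0
    for node_id, pos in neighbours.items():
        progress = d_sender - distance(pos, destination)
        if progress > best_progress:
            best, best_progress = node_id, progress
    return best  # None triggers the void-handling mechanism

sender, dest = (0.0, 0.0), (10.0, 0.0)
neighbours = {"a": (3.0, 1.0), "b": (2.0, -4.0), "c": (-1.0, 0.5)}
print(greedy_next_hop(sender, dest, neighbours))  # 'a'
```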
Stochastic processes have many useful applications and are taught in several university programmes. In this paper we use stochastic processes, focusing on Markov chains, which use a transition matrix to plot a transition diagram; several examples explain various types of transition diagram. The concept behind this topic is simple and easy to understand.
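A small worked example of the kind of Markov chain discussed here: a transition matrix over three states, n-step distributions obtained by matrix powers, and an approximation of the long-run (stationary) distribution. The matrix values are illustrative only.

```python
import numpy as np

# Rows sum to 1: P[i, j] = Pr(next state = j | current state = i)
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

start = np.array([1.0, 0.0, 0.0])                     # start in state 0 with certainty
after_3_steps = start @ np.linalg.matrix_power(P, 3)  # distribution after 3 transitions
stationary = start @ np.linalg.matrix_power(P, 100)   # converges for this chain

print(after_3_steps)
print(stationary)   # essentially unchanged if multiplied by P again
```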
This study aims to employ a low-cost agro-waste biosorbent, tamarind (Tamarindus indica) pod shells, together with activated carbon prepared by complete and partial pyrolysis of tamarind pod shell, for the removal of hexavalent chromium ions from aqueous solution. The effects of parameters, namely initial metal ion concentration, pH, temperature, and biomass loading, on chromium removal efficiency were studied. More than 96.9% removal of chromium was achieved using crude tamarind pod shells as biosorbent. The experimental data were fitted with the Langmuir, Freundlich, Temkin, and Redlich-Peterson adsorption isotherm models. The data fit the Langmuir, Freundlich, and Temkin isotherms well, with regression coefficients R2 above 0.9, whereas the fit to the Redlich-Peterson isotherm is poorer. The crude tamarind had a maximum monolayer adsorption capacity of 40 mg/g and a separation factor of 0.0416, indicating that it is the best adsorbent among the three tested. Further, an attempt was made to fit the sorption kinetics with pseudo-first-order and pseudo-second-order reactions; the pseudo-second-order kinetics model fits the experimental data well for all three adsorbents.
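For reference, the two Langmuir quantities quoted above are conventionally defined as follows (standard forms; the fitted Langmuir constant K_L itself is not reported in the abstract):

```latex
% Standard Langmuir isotherm and dimensionless separation factor
\[
  q_e = \frac{q_{\max}\, K_L\, C_e}{1 + K_L C_e},
  \qquad
  R_L = \frac{1}{1 + K_L C_0}
\]
% q_e: equilibrium uptake (mg/g), q_max = 40 mg/g as reported above,
% C_e: equilibrium Cr(VI) concentration, C_0: initial concentration,
% K_L: Langmuir constant; the reported separation factor is R_L = 0.0416.
```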
This research deals with the degradation behaviour of starch blended with different percentages of polypropylene (PP). A twin-screw extruder operated at 160-190 °C and 50 rpm was used to manufacture the blend sheets, and degradation tests were carried out according to ASTM standards (D 638 Type IV and D 570-98). The degradation properties were studied by a soil burial test, a water absorption test, and a hydrolysis test, and the morphology of the polypropylene/starch blend samples was observed with a Dino-Lite digital microscope. Results of the soil burial test show that the tensile strength and percentage elongation of the polypropylene/starch blend decrease with increasing starch content and burial time. The hydrolysis test shows that weight loss increases with increasing amounts of starch, while a high percentage of polypropylene decreases the water absorption of the blend. The physical appearance and morphology studies of the blends after burial in soil and hydrolysis in water showed that all blend samples had visibly changed after the 90-day study period, whereas the pure polypropylene samples remained unchanged.
Technical studies have been performed on many foreign languages, such as Japanese and Chinese, but efforts on ancient Indian scripts are still immature. As the Modi script is ancient and cursive, OCR for it is still not widely available. To our knowledge, Prof. D. N. Besekar, Dept. of Computer Science, Shri Shivaji College of Science, Akola, has proposed a system for recognition of offline handwritten Modi script vowels. Recognizing handwritten Modi characters is very challenging because of the varying writing style of each individual. Many vital documents with precious information were written in Modi; these documents are currently stored and preserved in temples and museums, and over time they will wither away if not given due attention. In this paper we propose a system for recognition of handwritten Modi script characters; the proposed method uses the image processing techniques and algorithms described below.
General Terms
Preprocessing techniques: grey scaling, thresholding, boundary detection, thinning, cropping, scaling, and template generation. Other algorithms used: the average method, Otsu's method, the Stentiford method, and template-based matching.
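As an illustration of the Otsu thresholding step listed among the preprocessing techniques above, the following is a minimal NumPy sketch for an 8-bit greyscale image; the rest of the pipeline (thinning, template matching, etc.) and the actual Modi character images are not shown, and the random array below merely stands in for a scanned character.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
t = otsu_threshold(gray)
binary = (gray < t).astype(np.uint8)   # 1 = dark strokes (ink), 0 = background
print(t, binary.mean())
```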
The combination of steganography and cryptography is considered one of the best methods for message protection. For this reason, this paper proposes a data hiding system based on image steganography and cryptography to secure data transfer between source and destination. An animated GIF image is chosen as the carrier file format because of its wide use in web pages, and an LSB (Least Significant Bit) algorithm is employed to hide the message inside the pixel colours of the animated GIF image frames. To increase the security of the hiding, each frame of the GIF image is converted to a 256-colour BMP image, the palette is sorted, and each pixel is reassigned to its new index; furthermore, the message is encoded with the LZW (Lempel-Ziv-Welch) compression algorithm before being hidden in the image frames. The proposed system was evaluated for effectiveness, and the results show that the encryption and decryption methods used in developing the system make it effective at securing data from unauthorized users. The system is therefore recommended for Internet users who want to establish more secure communication.
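An illustrative LSB embedding step of the kind used by the system above: message bits are written into the least significant bit of each pixel byte of one frame. Frame extraction from the animated GIF, palette sorting, and the LZW encoding of the message are omitted, and the random array stands in for a real frame.

```python
import numpy as np

def embed_lsb(frame, message):
    """Overwrite the LSB of the first len(message)*8 pixel bytes with message bits."""
    data = frame.ravel().copy()
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    if bits.size > data.size:
        raise ValueError("message too long for this frame")
    data[:bits.size] = (data[:bits.size] & 0xFE) | bits
    return data.reshape(frame.shape)

def extract_lsb(frame, n_chars):
    """Read back n_chars characters from the frame's LSBs."""
    bits = frame.ravel()[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

frame = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)  # stand-in frame
stego = embed_lsb(frame, "secret")
print(extract_lsb(stego, 6))   # -> 'secret'
```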
When a ductile material with a crack is loaded in tension, deformation energy builds up around the crack tip, and it is understood that at a certain critical condition voids form ahead of the crack tip. Crack extension occurs by coalescence of these voids with the crack tip. The "characteristic distance" (Lc) is defined as the distance between the crack tip and the void responsible for eventual coalescence with the crack tip. Nucleation of these voids is generally associated with the presence of second-phase particles or grain boundaries in the vicinity of the crack tip. Although approximate, Lc assumes a special significance since it links the fracture toughness to the microscopic mechanism considered responsible for ductile fracture. Knowledge of the characteristic distance is also crucial for choosing the mesh size in finite element simulations of crack growth based on damage mechanics principles. There is little work, experimental or numerical, in the literature on the dependence of the characteristic distance on fracture specimen geometry. The present work is an attempt to understand numerically, using three-dimensional FEM analysis, how the characteristic distance depends on geometry. The variation of the characteristic distance due to the change of temperature across the fracture specimen thickness was also studied, as was its variation with fracture specimen thickness. Finally, the ASTM requirement on fracture specimen thickness is evaluated with respect to the characteristic distance parameter. The characteristic distance is found to vary across the fracture specimen thickness; it depends on the specimen thickness and converges beyond a certain thickness. Its value also depends on the temperature of the ductile material: in Armco iron it is found to decrease with increasing temperature.
In the field of agriculture, the most important factors are soil fertility, the nutrients available in the soil, water availability in the area, and atmospheric conditions; all of these parameters play a major role in crop productivity. In this paper we review techniques that show how to improve productivity with minimal use of natural resources such as water, and how to avoid leaching of soil by applying fertilizers through drip irrigation. The system can be used in greenhouse or open environments to efficiently monitor soil moisture and temperature, ambient temperature, and humidity. Wired communications, sensor networks, and other complementary technologies provide the tools needed to acquire and process physical variables, including temperature, humidity, soil moisture, soil pH, and fertilizer concentrations. Greenhouse and precision agriculture in general demand real-time, precise measurement of these parameters in order to avoid unnecessary exposure to unhealthy ambient conditions, assure maximum productivity, and provide value-added quality. This paper aims to implement a basic application that automates irrigation of the field by programming the components and building the necessary hardware around an ARM7 processor, which is used to determine the exact field condition and maintain the required levels in the soil.
A young astronomer's by-now ten-year-old results are re-told and put into perspective. The implications are far-reaching: angular momentum shows its clout not only in quantum mechanics, where this is well known, but is also a major player in the space-time theory of the equivalence principle and its ramifications. In general relativity, its fundamental role was largely neglected for the better part of a century. A children's device, a friction-free rotating bicycle wheel suspended from its hub that can be lowered and pulled up reversibly, serves as an eye-opener. The consequences are embarrassingly far-reaching in reviving Einstein's original dream.
This study was designed to evaluate the effect of a 70% ethanolic crude extract of Portulaca oleracea L. on mouse organs in vivo. The acute toxicity of the 70% ethanolic extract of the plant on normal mice was studied, and no toxic effect was noted even at 9500 mg/kg body weight given by subcutaneous injection. Histopathological changes due to the ethanolic extract in healthy mice comprised hyperplasia of the white pulp with amyloid deposition, proliferation of megakaryocytes, and mononuclear cell infiltration in the liver and kidney parenchyma. No significant lesions were detected in the brain, heart, or ovary in any of the treated groups.
Cloud computing solves the problem of real-time demand information and visibility across different locations, so that information can be delivered with reliability, scalability, and flexibility between supplier and customer. A logistics network requires effective information flow and technical support so that logistics infrastructure can be fully utilized and the collection, transmission, and handling of information can be tracked. The cloud is a fast-growing technology that can effectively reduce the intermediate cost of information flow and improve the link between logistics partners and customers. This paper analyses the advantages of a cloud-based logistics network and describes how a logistics network manages Information Flow Control (IFC) over the cloud, allowing the network to operate effectively.
To help corporations survive amidst worldwide
quality competition, the authors have focused on the strategic
development of a Higher-Cycled Product Design CAE Model
employing a Highly Reliable CAE Analysis Technology
Component Model. Their efforts are part of principle-based
research aimed at evolving product design and CAE development
processes to ensure better quality assurance. To satisfy the
requirements of developing and producing high quality products
while also reducing costs and shortening development times, the
effectiveness of this model was verified by successfully applying it
to the technological problems of loosening bolts and other
product design bottlenecks at auto manufacturers.
RFID-based public transport ticketing systems rely on widespread networks of RFID readers that locate the user within the transport network in real time in order to verify whether he can travel at that time with the ticket he holds. This paper presents a system that uses the same RFID-based location information to give the user navigation indications depending on his current location, provided the user has indicated beforehand the places he intends to visit. The system was designed to be cost-effectively deployable in the short term yet open for easy extension. The paper focuses on ticketing and identification of passengers in public transport. In metropolitan cities like Mumbai and Kolkata there is severe malfunctioning of public transport and there are various security problems. Firstly, there is much confusion among passengers regarding fares, which leads to corruption; secondly, due to mismanagement of public transport, passengers face traffic jams; thirdly, there are severe security problems in public transport due to anti-social elements. The entire network comprises three modules: a Base Station Module, In-Bus Modules, and a Bus Stop Module. The base station module consists of a monitoring system that includes GSM and a PC. The In-Bus Module consists of two microcontrollers, a GSM modem, GPS, Zigbee, RFID, an LCD, and an infrared sensor; RFID is used for ticketing, while GSM and GPS are used for mobile data transmission and location tracking. The Zigbee module is also interfaced with the microcontroller and is used to send bus information to the bus stop and to receive information from the bus stop. The Bus Stop Module, fixed at every bus stop, consists of a Zigbee node interfaced with a microcontroller.
In recent years, sustainable and fully renewable energy resources have been used extensively in electrical energy generation systems. Solar energy conversion systems in particular are applied in stand-alone installations: solar panels convert solar radiation directly into electrical energy and are one of the most promising renewable energy technologies for powering buildings. In this study, an analysis of a solar system installed in our college academic block and hostel is presented. The system includes solar panels, a battery, a generator, a converter, and the loads. We calculate the overall load of the academic block (the Electrical Engineering department and the round building) and of the boys' hostel, and after obtaining the overall loads for these buildings we simulate the data with the HOMER tool to obtain the best configuration, which is presented in this paper. The optimization gives an initial capital cost of $296,000, an operating cost of $2,882/yr, a total net present cost (NPC) of $332,846, and a cost of energy (COE) of $0.212/kWh. The main purpose of this paper is to determine how many solar panels and batteries are required so that the maximum energy demand of both the academic block and the hostel can be supplied by solar panels.
A supply chain consists of all parties involved, directly or indirectly, in fulfilling a customer request. The supply chain includes not only manufacturers and suppliers, but also transporters, warehouses, retailers, and even the customers themselves. Within each organization, such as a manufacturer, the supply chain includes all functions involved in receiving and filling a customer request. These functions include, but are not limited to, new product development, marketing, operations, distribution, finance, and customer service. Supply chain management (SCM) is the management of the interconnected or interlinked network, channel, and node businesses involved in providing the product and service packages required by the end customers in a supply chain. Supply chain management spans the movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption. It is also defined as the "design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and measuring performance globally."
Bio-char can be produced by thermal conversion of biomass. Palm shells were obtained from palmyra palm fruits and air-dried to remove moisture. The dried palm shells were ground to a powder and heated at 600 ºC, 800 ºC, and 1000 ºC for 2 h, respectively, after which bio-char was obtained. The structural properties of the palm shell powder and the bio-char were examined by X-ray diffraction (XRD), and scanning electron microscopy (SEM) was used to observe the microstructure of the bio-char. Properties such as hydration capacity and pH were also evaluated.
This paper focuses on the numerous techniques that
have been proposed over the years for metamaterial
characterization. These techniques are categorized into
analytical, field averaging and experimental methods, which
provide various methods to determine the complex permittivity,
complex permeability and refractive index of metamaterials.
Suspended nanoparticles in conventional fluids, called nanofluids, have been the subject of intensive study worldwide since pioneering researchers discovered the anomalous thermal behaviour of these fluids. Heat transfer from small areas is achieved through microchannels, where maximum heat transfer comes at the cost of the maximum pressure drop across the channel. In this work, an experimental and numerical investigation of the improved heat transfer characteristics of a serpentine-shaped microchannel heat sink using an Al2O3/water nanofluid is carried out. The fluid flow characteristics of the serpentine-shaped microchannel are also analysed, and the experimental heat transfer results obtained with the Al2O3 nanofluid are compared with the numerical values. The calculations in this work suggest that the best heat transfer enhancement is obtained with an Al2O3-water nanofluid-cooled microchannel with a serpentine-shaped flow path.
Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems where interference is a problem. The radar signal processor removes the unintentional clutter caused by ground reflections and echoes from sea, desert, forest, etc., as well as intentional jamming, to make the received signal useful. In this paper a new approach to STAP based on subspace projection is described in detail. According to linear algebra and three-dimensional geometry, projecting the received data onto a subspace spanned by linearly independent vectors suppresses the components orthogonal to that subspace; in the subspace-based technique, the received data are projected onto the subspace orthogonal to the clutter subspace in order to remove the clutter. The probability of target detection is computed to analyse the performance of the proposed algorithm, and two existing algorithms, SMI and DPCA, are chosen for comparison. When the detection probability is plotted against SINR, the results obtained with the subspace technique are better than those of DPCA and SMI, and the SINR is improved for the same detection probability. The effect of the subspace rank on SINR was also analysed in order to understand the computational load of the technique, and the convergence of the algorithm was studied by plotting SINR against the number of range snapshots.
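A minimal sketch of the projection idea described above: a received snapshot is projected onto the orthogonal complement of an assumed-known clutter subspace, which nulls the clutter component. Estimating the clutter subspace itself (e.g. from training snapshots) and the full space-time detector are not shown; dimensions and data are illustrative.

```python
import numpy as np

def orthogonal_projector(clutter_basis):
    """clutter_basis: (N, r) matrix whose columns span the clutter subspace.
    Returns P = I - C (C^H C)^-1 C^H, the projector onto the complement."""
    C = clutter_basis
    return np.eye(C.shape[0]) - C @ np.linalg.pinv(C.conj().T @ C) @ C.conj().T

rng = np.random.default_rng(0)
N, r = 32, 4                              # snapshot length, clutter rank (assumed)
C = rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r))
target = rng.standard_normal(N) + 1j * rng.standard_normal(N)
clutter = C @ (rng.standard_normal(r) + 1j * rng.standard_normal(r)) * 10.0
snapshot = target + clutter

P = orthogonal_projector(C)
cleaned = P @ snapshot                    # clutter component removed, target retained
print(np.linalg.norm(P @ clutter))        # ~0: clutter lies in the nulled subspace
```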
Due to the increasing demand for energy, the rising price of petroleum fuels, the depletion of petroleum reserves, and the environmental pollution caused by their emissions, it is necessary to find alternative fuels. This work focuses on the use of hybrid blends of Karanja and cottonseed oil biodiesels. Blends of 20% and 25% were used, and performance and emission tests were conducted on a single-cylinder, four-stroke, water-cooled CI engine running at a speed of 1500 rpm, a compression ratio of 16.5:1, and an injection pressure of 205 bar. Performance parameters such as BP, BSFC, and BTE, and emissions such as CO, HC, and NOx, were compared. It was found that the blends gave comparatively good results in terms of both performance and emissions.
A LITERATURE SURVEY ON INFORMATION EXTRACTION BY PRIORITIZING CALLS
International Journal of Technical Research and Applications, e-ISSN: 2320-8163, www.ijtra.com, Volume 3, Issue 1 (Jan-Feb 2015), PP. 35-37
A LITERATURE SURVEY ON INFORMATION EXTRACTION BY PRIORITIZING CALLS
Kalaiselvi. K¹, Dr. P. Uma Maheswari Ph.D²
Department of Computer Science & Engineering, Info Institute of Engineering, Coimbatore, India.
¹kalai.cbe@gmail.com, ²dr.umasundar@gmail.com
Abstract- The Application Programming Interface restricts the types of queries that a Web service can answer. For instance, a Web service might provide a method that returns the books of a given author quickly, but it might not provide a method that returns the authors of a given book. If the user asks for the author of some specific book, then the Web service cannot be called, even though the underlying database might contain the desired piece of information; this scenario is called asymmetry. The asymmetry is particularly problematic if the service is used in a Web service orchestration system. In this survey, we propose to use on-the-fly information extraction (IE): IE is used to collect values, which can then be used as parameter bindings for the Web service. This survey shows how information extraction can be integrated into a Web service orchestration system. The proposed approach is fully implemented in a prototype called Search Using Services and Information Extraction (SUSIE). Real-life data and services are used to demonstrate the practical viability and good performance of the approach.
Keywords- limited access patterns to relations, complete
answers to queries, query stability.
I. INTRODUCTION
A. Web Services
Generally, a Web service [9] is an interface that provides access to an integrated back-end database. There is an increasing number of Web services that provide a wealth of information: Web services about books include isbndb.org, librarything.com, Amazon, and AbeBooks; services about movies include api.internetvideoarchive.com; services about music include musicbrainz.org and lastfm.com; and there are services for a large variety of other topics.
B. How Web Services Work?
For instance, the site musicbrainz.org offers a service for accessing its database about music albums. The Web service defines functions that can be called remotely. For example, musicbrainz.org offers the function getSongs, which takes a singer as input parameter and delivers the songs by that singer as output. If the user wants to know all songs by Leonard Cohen, she can call getSongs with Leonard Cohen as input. The output will contain the songs Hallelujah, Suzanne, etc.
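To make this calling convention concrete, here is a minimal client-side sketch; the endpoint URL, parameter name, and response shape are illustrative assumptions, not the actual musicbrainz.org API.

```python
import requests

def get_songs(singer: str) -> list:
    """Call a hypothetical getSongs Web service function.

    The URL, parameter name and JSON shape are placeholders; a real
    client would follow the provider's documented API.
    """
    response = requests.get(
        "https://example.org/api/getSongs",   # placeholder endpoint
        params={"singer": singer},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["songs"]

# Example: all songs by a given singer.
# get_songs("Leonard Cohen")  -> ["Hallelujah", "Suzanne", ...]
```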
C. Web Services vs. Keyword-Based Search on the Web
Web services play a vital part in the trend towards data-centric applications on the Web. Unlike Web search engines, Web services deliver crisp answers to queries [10]. This allows the user to retrieve answers to a query without having to read through several result pages. Web services can also be used to answer precise conjunctive queries, which would require several searches on the Web and joins across them if done manually with a search engine. The results of Web services are machine-readable, which allows query answering systems to cater to complex user demands by orchestrating the services. These are advantages that Web services offer over keyword-based Web search.
D. Binding Patterns of the Web Service Functions
Web services allow querying remote databases. However, the queries have to follow the binding patterns of the Web service functions. A binding pattern [11][13] requires that values be provided for the mandatory input parameters before the function can be called. In our musicbrainz example, the function getSongs can only be called if a singer is provided as input. Thus, it is possible to ask for the songs of a given singer, but it is not always possible to ask for the singers of a given song.
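As a rough illustration of a binding pattern (the class and function names here are ours, not taken from the paper), a function can be modeled by the set of arguments that must be bound before it may be called:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FunctionDef:
    """A Web service function with its binding pattern (illustrative only)."""
    name: str
    inputs: frozenset    # mandatory bound arguments
    outputs: frozenset   # values the call returns

def callable_with(func: FunctionDef, bound_args: set) -> bool:
    # A call is admissible only if every mandatory input is bound.
    return func.inputs <= bound_args

get_songs_def = FunctionDef("getSongs", frozenset({"singer"}), frozenset({"song"}))

print(callable_with(get_songs_def, {"singer"}))  # True:  songs of a given singer
print(callable_with(get_songs_def, {"song"}))    # False: singers of a given song
```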
E. Web service asymmetry
On the client side, this is a severe limitation: the data may be available, but it cannot be queried in the desired way [12][14]. If the user wants to know, e.g., who sang Hallelujah, then the Web service cannot be used to answer this question, even though the database contains the desired information. This restriction is not due to missing data, but is a design choice of the Web service owner, who wants to prevent external users from extracting and downloading large fractions of the back-end database.
F. Web Service Composition
A Web Service Management System (WSMS) enables querying multiple web services in a transparent and integrated
fashion. The paper [10] handles a first basic WSMS problem, query optimization for Select-Project-Join queries spanning multiple web services. Its contribution is an algorithm for arranging a query's web service calls into a pipelined execution plan that optimally exploits parallelism among web services to minimize the query's total running time.
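The sketch below shows the basic shape of a pipelined plan, where each call consumes the bindings produced by the previous one; it ignores the parallelism aspect of [10], and the toy services are hypothetical:

```python
def run_pipeline(seed_binding: dict, calls: list) -> list:
    """Feed each Web service call with the bindings produced so far.

    `calls` is an ordered list of functions; each takes one binding dict
    and returns a list of extended binding dicts. Purely illustrative.
    """
    frontier = [seed_binding]
    for call in calls:
        frontier = [new for binding in frontier for new in call(binding)]
    return frontier

# Toy stand-ins for two chained services:
def get_albums(b):   # singer -> albums
    return [dict(b, album=a) for a in ("Album 1", "Album 2")]

def get_year(b):     # album -> release year
    return [dict(b, year=2004)]

print(run_pipeline({"singer": "Leonard Cohen"}, [get_albums, get_year]))
```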
G. Using Web Services in Knowledge Base
The overall architecture specified in [8] is illustrated in Figure 2. The system uses the existing YAGO ontology, which consists of 2 million entities and 20 million facts extracted from various encyclopedic Web sources. In addition, the knowledge base is extended with a built-in collection of function definitions for the following Web services: MusicBrainz, LastFM, LibraryThing, ISBNdb, AbeBooks, and IVA (Internet Video Archive). For the long-term usage envisioned by the authors of [8], the function definitions would either be automatically acquired from a Web-service broker/repository or semi-automatically generated by a tool.
Figure 2: System architecture of ANGIE.
II. EXISTING SYSTEM
In 2007, F. M. Suchanek, G. Kasneci, and G. Weikum worked on "YAGO: A Core of Semantic Knowledge" [1]. If the user asks for the singer of the song Hallelujah, a semantic knowledge base such as YAGO can be used to get a list of singers. We can then call getSongs for every singer and remember those for which the Web service returns Hallelujah. The first limitation of this approach is runtime: with Web service calls taking up to one second to complete, trying out the thousands of singers that a knowledge base such as YAGO contains could easily take hours. The second limitation is the data provider itself, which most likely restricts aggressive querying from the same IP address.
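A minimal sketch of this brute-force strategy (reusing the hypothetical get_songs client from Section I.B; the candidate list would come from the knowledge base):

```python
def singers_of(song: str, candidate_singers, get_songs) -> list:
    """Keep every candidate whose song list contains `song`.

    With thousands of candidates and up to one second per Web service
    call, this loop is exactly the runtime bottleneck described above.
    """
    answers = []
    for singer in candidate_singers:
        if song in get_songs(singer):   # one remote call per candidate
            answers.append(singer)
    return answers

# singers_of("Hallelujah", yago_singers, get_songs)  # may take hours
```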
In 1995, A. Rajaraman, Y. Sagiv, and J. D. Ullman worked on "Answering queries using templates with binding patterns" [2]. This scheme considers rewriting the initial query into a set of queries to be executed over the given views. The authors show that, for a conjunctive query over a global schema and a set of views over the same schema, deciding whether there exists a conjunctive query plan over the views that is equivalent to the original query is NP-hard. This rewriting strategy assumes that the views are complete (i.e., contain all the tuples in their definition). The main contribution of the paper is obtaining the cheapest solution under certain reasonable cost metrics. However, such works produce pipelines of calls of unbounded length in order to obtain the maximal number of answers, which is infeasible in the context of Web services. Moreover, the completeness assumption on views is unrealistic in our setting with Web services, where sources may overlap or complement each other but are usually incomplete.
In 2004, S. Kambhampati, E. Lambrecht, U. Nambiar, Z. Nie, and G. Senthil worked on "Optimizing recursive information gathering plans in EMERAC" [3]. The authors describe two optimization techniques that are specially tailored for information gathering. The first is a greedy minimization algorithm that minimizes an information gathering plan by removing redundant and overlapping information sources without loss of completeness. The paper then discusses a set of heuristics that guide the greedy minimization algorithm so as to remove costlier information sources first. The main contribution of the paper is to consider end-to-end the issues of redundancy elimination and optimization in recursive information gathering plans, while also reducing the number of accesses.
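A very rough sketch of such a greedy pruning step, assuming an oracle `covers_query` that tests whether the remaining sources still answer the query completely (both helper names are ours):

```python
def minimize_plan(sources: list, covers_query, cost) -> list:
    """Greedily drop redundant sources, trying the costliest ones first.

    `covers_query(remaining)` checks that the plan is still complete;
    `cost(source)` orders the removal attempts. Illustrative only.
    """
    plan = list(sources)
    for source in sorted(sources, key=cost, reverse=True):
        trial = [s for s in plan if s is not source]
        if trial != plan and covers_query(trial):
            plan = trial   # the source was redundant; drop it
    return plan
```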
In 2006, W. Wu, A. Doan, and C. T. Yu worked on "WebIQ: Learning from the web to match deep-web query interfaces" [4]. The paper aims to match the schemas of two Deep Web forms. The approach finds instances for Deep Web attributes by querying the surface Web and then validating the instances through the original Web form. The advantage of this method is that it avoids unnecessary requests; however, it does not address the prioritization of inverse functions in execution plans, which remains a major challenge.
In 2008, A. Calì and D. Martinenghi worked on "Querying data under access limitations" [5]. This scheme presents an optimization technique for conjunctive queries that produces a query plan that (1) minimizes the number of accesses according to a strong notion of minimality and (2) excludes all sources that are not relevant for the query. Its main benefit is a reduction of the number of accesses in a large number of cases.
In 2010, N. Preda, G. Kasneci, F. M. Suchanek, T. Neumann, W. Yuan, and G. Weikum worked on "Active Knowledge: Dynamically Enriching RDF Knowledge Bases by Web Services (ANGIE)" [6][8]. ANGIE answers queries using views as surrogates for Web services, and it imposes an upper bound on the number of function calls. The main role of this method is to prioritize function calls that are more likely to deliver answers, and it also reduces the number of sub-queries in a bottom-up evaluation.
In 2011, M. Benedikt, G. Gottlob, and P. Senellart worked on "Determining relevance of accesses at runtime" [7]. The authors relate the dynamic relevance of an access to query containment under access limitations and characterize the complexity of this problem. The advantage of this scheme is that it reduces the number of accesses. The major drawback of the paper is that the
computation of maximal results is not considered, and guessing accesses are not eliminated.
III. PROPOSED SYSTEM
The objective in [20] is to answer queries by using only function calls. The main goal of that work, in contrast to previous approaches, is to compute the largest number of answers using a given budget of calls. Hence, calls that are likely to lead to an answer have to be prioritized. This is different from the problem of join ordering addressed in previous work.
Information extraction (IE) [15][17] is concerned with extracting structured data from documents. However, IE suffers from the inherent imprecision of the extraction process: the extracted data is generally far too noisy to allow direct querying. SUSIE overcomes this limitation by using information extraction solely for finding candidate entities of interest and feeding these as inputs into Web service calls. Named Entity Recognition (NER) approaches aim to detect interesting entities in text documents; such schemes can be used to generate candidates for SUSIE.
The approach discussed in this paper matches noun phrases against the names of entities that are registered in a knowledge base, a simple but effective technique that circumvents the noise of learning-based NER [18][19] techniques. For SUSIE, judiciously customized methods along these lines have been developed. These are not limited to lists and tables, but detect arbitrary repetitive structures that could contain candidates.
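A toy version of this dictionary-based candidate generation (the entity list and the normalization are placeholders, and real systems additionally exploit repetitive page structures) might look as follows:

```python
import re

def candidate_entities(text: str, known_entities) -> list:
    """Return knowledge-base entities whose names occur in the text."""
    # Lower-case, drop punctuation and collapse whitespace before matching.
    cleaned = re.sub(r"[^\w\s]", " ", text.lower())
    normalized = " " + re.sub(r"\s+", " ", cleaned) + " "
    return [e for e in known_entities
            if " " + e.lower() + " " in normalized]

print(candidate_entities(
    "Hallelujah was covered by Jeff Buckley and by Rufus Wainwright.",
    ["Jeff Buckley", "Rufus Wainwright", "Leonard Cohen"]))
# -> ['Jeff Buckley', 'Rufus Wainwright']
```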
Alternative IE methods [16] such as wrapper induction, fact extraction, or entity extraction could also be considered, but they are not practical in this scenario as they require training data and, thus, human supervision.
The approach in [20] makes the following contributions:
1. It provides a solution to the problem of Web service asymmetry, where input values for the Web services are extracted on-the-fly from Web pages found by keyword queries.
2. It alters the standard Datalog evaluation procedure so that inverse functions are prioritized over infinite call chains.
3. It evaluates the approach with the APIs of real Web services, improving the performance of a real-world query answering system.
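Putting these pieces together, the overall answer-finding loop for an inverse query can be sketched roughly as follows; the keyword-search and extraction helpers are the hypothetical ones sketched above, and the real system additionally prioritizes calls within a budget rather than validating every candidate:

```python
def answer_inverse_query(song: str, search_web, extract_candidates, get_songs) -> list:
    """Answer 'who sang <song>?' despite the asymmetric API (illustrative).

    1. Run a keyword query for the song and fetch result pages.
    2. Extract candidate singer entities from those pages (on-the-fly IE).
    3. Validate each candidate with a forward Web service call.
    """
    candidates = set()
    for page in search_web(song):            # e.g. top-k result pages
        candidates.update(extract_candidates(page))
    return [c for c in candidates if song in get_songs(c)]
```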
Table 1: Comparative Study of Existing vs. Proposed System

Technique       | Existing                          | Proposed
Scheme          | Binding Pattern                   | Information Extraction
Query Type      | Querying for Single Argument      | Querying for Multiple Arguments
Execution Plan  | Ordered Join Plans                | Ordered Sequences of Calls
Aim             | Maximal Number of Query Answers   | Maximal Contained Rewriting
Query Ordering  | Join Ordering                     | Prioritize Calls
IV. CONCLUSION
This survey shows the validity of the SUSIE approach on real data sets. We believe that the appeal of the approach lies in the fruitful symbiosis of information extraction and Web services, which each mitigate the weaknesses of the other. The current implementation uses naive information extraction algorithms that serve mainly as a proof of concept.
V. SCOPE AND FUTURE ENHANCEMENT
Future work will explore new extraction algorithms that could replace the naive ones, as well as approaches that aim to automate the discovery of new Web services and their integration into the system.
REFERENCES
[1] F. M. Suchanek, G. Kasneci, and G. Weikum, "YAGO: A Core of Semantic Knowledge," in WWW, 2007.
[2] A. Rajaraman, Y. Sagiv, and J. D. Ullman, "Answering queries using templates with binding patterns," in PODS, 1995.
[3] S. Kambhampati, E. Lambrecht, U. Nambiar, Z. Nie, and G. Senthil, "Optimizing recursive information gathering plans in EMERAC," J. Intell. Inf. Syst., 2004.
[4] W. Wu, A. Doan, and C. T. Yu, "WebIQ: Learning from the web to match deep-web query interfaces," in ICDE, 2006.
[5] A. Calì and D. Martinenghi, "Querying data under access limitations," in ICDE, 2008.
[6] N. Preda, G. Kasneci, F. M. Suchanek, T. Neumann, W. Yuan, and G. Weikum, "Active Knowledge: Dynamically Enriching RDF Knowledge Bases by Web Services (ANGIE)," in SIGMOD, 2010.
[7] M. Benedikt, G. Gottlob, and P. Senellart, "Determining relevance of accesses at runtime," in PODS, 2011.
[8] N. Preda, G. Kasneci, F. M. Suchanek, T. Neumann, W. Yuan, and G. Weikum, "Active Knowledge: Dynamically Enriching RDF Knowledge Bases by Web Services (ANGIE)," in SIGMOD, 2010.
[9] A. Deutsch, L. Sui, and V. Vianu, "Specification and verification of data-driven web services," in PODS, 2004.
[10] U. Srivastava, K. Munagala, J. Widom, and R. Motwani, "Query optimization over web services," in VLDB, 2006.
[11] S. Thakkar, J. L. Ambite, and C. A. Knoblock, "Composing, optimizing, and executing plans for bioinformatics web services," VLDB J., 2005.
[12] K. Q. Pu, V. Hristidis, and N. Koudas, "Syntactic rule based approach to Web service composition," in ICDE, 2006.
[13] M. Benedikt, G. Gottlob, and P. Senellart, "Determining relevance of accesses at runtime," in PODS, 2011.
[14] M. Benedikt, P. Bourhis, and C. Ley, "Querying schemas with access restrictions," PVLDB, 2012.
[15] M. Banko, M. J. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni, "Open Information Extraction from the Web," in IJCAI, 2007.
[16] F. M. Suchanek, M. Sozio, and G. Weikum, "SOFIE: A Self-Organizing Framework for Information Extraction," in WWW, 2009.
[17] N. Kushmerick, "Wrapper induction for information extraction," Ph.D. dissertation, U. Washington, 1997.
[18] J. Zhu, Z. Nie, J.-R. Wen, B. Zhang, and W.-Y. Ma, "2D conditional random fields for web information extraction," in ICML, ACM, 2005.
[19] J. D. Lafferty, A. McCallum, and F. C. N. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," in ICML, Morgan Kaufmann Publishers Inc., 2001.
[20] N. Preda, F. M. Suchanek, W. Yuan, and G. Weikum, "SUSIE: Search Using Services and Information Extraction," IEEE Transactions on Knowledge and Data Engineering, 2013.