Int. J. Advanced Networking and Applications, Volume: 6, Issue: 4, Pages: 2398-2403 (2015), ISSN: 0975-0290
Mobile Web Browsing Based On Content Preserving With Reduced Cost
Dr. N. Saravanaselvam
Head of CSE Department, Sri Eshwar College of Engineering, Tamil Nadu
Dr. K. Rajangam
Head of EEE Department, SCAD Institute of Technology, Tamil Nadu
Ms. L. Dhanam
Assistant Professor, Department of Computer Science & Engineering, SCAD Institute of Technology, Tamil Nadu
Ms. Anuradha Balasubramanian
Assistant Professor, Department of ECE, Info Institute of Technology, Tamil Nadu
----------------------------------------------------------------ABSTRACT------------------------------------------------------------------
The Internet has brought a drastic change to everyday life, and web browsing on compact devices in particular has become pervasive. This draws people to carry their innovations and skills into a vast online world, so it is necessary to pay closer attention to how web data is accessed and accounted for. Developed countries widely use flat-rate pricing, which is independent of data usage, whereas developing countries still rely on the "pay as you use" model, which leads to high usage bills. To address this problem, we propose a cost-effective technique that reduces the data consumed during mobile web browsing and thereby the bills incurred under usage-based pricing. The key idea of our approach is to leverage the user's data plan to compute a cost quota for each web request, and to use a network middle-box to automatically adapt any web page to that quota. We employ a simple but effective content adaptation technique that decides which image or data item best fits the mobile display at low cost and good resolution. The approach also draws on data mining to extract the requested and required data; the mined data is filtered by the content adaptation technique and fitted to the display. A notable feature of this concept is that only the important web content requested by the user is displayed, and a feedback process retrieves only the required data and refines the best-fit resolution. With the proposed system, mobile web browsing becomes cheaper and provides a useful basis for future work in the field of mobile browsing.
Keywords— content adaptation, cost-aware web browsing, user experience, pricing, web usage, optimization.
Date of Submission: October 20, 2014; Date of Acceptance: December 29, 2014
I. INTRODUCTION
The increasing pervasiveness of web browsing on portable devices calls for careful consideration of how web pages are accessed by mobile users. While flat-rate pricing (independent of usage) has been the conventional data pricing model in rich countries, mobile roaming users and users in developing regions are typically subject to usage-based pricing, charged per megabyte downloaded. With the exceptional growth in both the size and complexity of web pages, usage-based pricing can result in high monthly bills even under nominal web usage [23]. Our work aims to substantially reduce the usage cost incurred by the end user under usage-based pricing. There is a large body of work on web content adaptation, and specifically on content adaptation for mobile devices. Evaluating our system on popular web pages, we show that it can significantly reduce the size of the downloaded data, and therefore the cost. On average, our system reduces the cost of mobile web browsing by a factor of 2, and our most aggressive level of content adaptation reduces the cost by a factor of 110. We also show that, for each page request, our solution provides the web page with quality content at minimized cost.
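To make the quota idea concrete, the following is a minimal sketch, not the authors' implementation, of how a per-request cost quota could be derived from a usage-based data plan and translated into a byte budget that a middle-box could enforce. The plan figures, the requests-per-day estimate, and the function name are illustrative assumptions.

```python
# Minimal sketch: deriving a per-request byte budget from a usage-based plan.
# All plan figures and the requests-per-day estimate are illustrative
# assumptions, not values taken from the paper.

def per_request_byte_budget(remaining_budget_ksh: float,
                            days_left_in_cycle: int,
                            requests_per_day: int,
                            price_per_mb_ksh: float) -> int:
    """Spread the remaining monetary budget evenly over the expected
    page requests and convert the per-request share into bytes."""
    expected_requests = max(1, days_left_in_cycle * requests_per_day)
    cost_quota_ksh = remaining_budget_ksh / expected_requests   # money per request
    budget_mb = cost_quota_ksh / price_per_mb_ksh               # money -> megabytes
    return int(budget_mb * 1024 * 1024)                         # megabytes -> bytes

# Example: 200 Ksh left, 10 days to go, ~40 page views a day, 0.65 Ksh/MB bundle.
budget = per_request_byte_budget(200.0, 10, 40, 0.65)
print(f"Adapt each page to roughly {budget / 1024:.0f} KB")
```

A middle-box could then adapt each requested page (for example, by transcoding its images) so that the total response size stays within this budget.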
Current systems generate thumbnails by
shrinking the original image. This method is simple.
However, thumbnails generated this way can be
difficult to recognize, especially when the thumbnails
are very small. This phenomenon is not unexpected,
since shrinking an image causes detailed information to
be lost. An intuitive solution is to keep the more
informative part of the image and cut less informative
regions before shrinking. Some commercial products
allow users to manually crop and shrink images. Burton
et al. proposed and compared several image
simplification methods to enhance the full-size images
before subsampling. They chose edge-detecting smoothing, lossy image compression, and self-organizing feature maps as three different techniques in their work.
In quite a different context, DeCarlo and Santella
tracked a user’s eye movements to determine interesting
portions of images, and generated non-photorealistic,
painterly images that enhanced the most salient parts of
the image. Chen et al. use a visual attention model as a
cue to conduct image adaptation for small displays.
II. RELATED WORK
Content adaptation techniques can be classified into two
categories, general purpose and content-type-specific.
Many current content adaptation techniques are content-type-specific, i.e., designed for specific content types, such as those for text, image, and video objects [4]. For
example, much research has been done on adaptation of
images, such as color reduction, using perceptual hint to
keep semantically important blocks, mesh modeling and
generation, and other transcoding techniques. For video
adaptation, schemes such as adaptive streaming, frame
skipping, multidimensional bit-rate reduction, temporal
filtering, and spatio-temporal resolution adjustments
have been proposed. InfoPyramid has been designed for general multimedia content adaptation. It provides a multimodal, multi-resolution representation hierarchy for multimedia content. A transcoding-enabled caching system has been proposed for streaming media [17]. Content transcoding is performed at the edge servers to reduce the user-perceived latency and network traffic. For text
contents, several summarization techniques, such as
keyword extraction and summary sentence selection
and compression, have been proposed. All these
systems consider only individual types of contents.
Data Volume   Price (Ksh)   Price per MB (Ksh)
50 MB             100            2.00
200 MB            250            1.25
500 MB            499            1.00
1.5 GB            999            0.65
3 GB            1,999            0.65
4 GB            2,499            0.61
8 GB            3,999            0.49
20 GB           9,999            0.49
30 GB          14,999            0.49

TABLE I: PREPAY DATA BUNDLES FOR SAFARICOM, KENYA'S LARGEST MOBILE NETWORK. ALL PLANS COST 8 KSH PER MB AFTER THE BUNDLE VOLUME. 83.7 KSH = 1 USD (DECEMBER 2011).
Some general-purpose content adaptation
systems have also been developed. In the Power
Browser project, a proxy-based architecture for content adaptation on PDAs was introduced. The
proxy [11] fetches Web pages on the client’s behalf and
dynamically generates hierarchical views of the fetched
pages.
Fig. 1. Web page anatomy of Alexa top 100 pages
broken down by bytes.
III.PROPOSED FRAMEWORK
Recent developments towards ubiquitous multimedia
trigger the need for mobile users to access and interact
with the multimedia content anywhere at any time. The
pressing need to improve the user's media experience has resulted in a paradigm shift from traditional device-centric multimedia to contemporary user-centric
multimedia. In this new paradigm, users are placed in
the center of ubiquitous multimedia environment and
are provided with access to personalized multimedia
content intelligently. Regardless of mobile device
capabilities, seamless access is desired to maximize the
quality of experience for mobile users [8], based on the
media content and user’s intention. To facilitate
portable usage, mobile devices are usually designed to
be light weight and to serve the short viewing distance.
On one hand, such a design benefits mobility and
allows ubiquitous access of multimedia content; on the
other hand, it has been the main reason for the small
display size of mobile devices, which in turn is the main obstacle limiting the user's viewing experience. To provide mobile users with a good viewing experience, proper adaptation of high-resolution media content is necessary to fit the limited displays of mobile devices, the heterogeneity of network links, and the distinct intentions of mobile users. To obtain
adaptation results which are consistent with the media
contents and mobile user intentions, the semantic gap
and user intention gap are two critical challenges that
need to be properly addressed.
Fig. 2. (a) Original page. (b) Level 2 adaptations.
Visually, the images are down-sampled, and the page
retains most of its original formatting.
Text and Image Adaptation
Most existing adaptation approaches utilizing cropping,
warping, or seam carving techniques for various
displays are completely based on low level features
without consideration of real contents in images
corresponding to the underlying concepts in the real
world. They utilize saliency which can be computed
through various low level features such as color,
intensity, contrast, orientation, gradient, etc. to catch
and preserve the important contents. Although various
techniques were developed to detect the salient image
parts, such adaptation schemes based solely on low
level features may fail to catch some important contents
in images. They lack intuitive connection to the
semantics of images and user demands. As a result, the
regions obtained for display may not be consistent with
semantically important regions to users. Some
significant content may be dropped or shrunk too much;
hence, user viewing experiences are often not
satisfactory. Humans tend to view and comprehend images based on semantics, which refers to the creatures and objects comprising the scene and the relations among these entities. Therefore,
semantic based image adaptation is a desired research
direction to provide the mobile user better perceptual
experiences. To carry out semantic based image
adaptation, semantic analysis to extract the key persons
or objects and associate them with high level concepts
is an indispensable step. Although a good portion of
user generated media data have already been tagged
with some keywords such as “wedding” and “baseball”,
important objects have to be extracted and localized for
content selection in adaptation.
We present a design and implementation for
displaying and manipulating HTML pages on small
handheld devices such as personal digital assistants
(PDAs), or cellular phones. We introduce methods [22],
[23] for summarizing parts of Web pages and HTML
forms. Each Web page is broken into text units that can
each be hidden, partially displayed, made fully visible,
or summarized. A variety of methods are introduced
that summarize the text units. In addition, HTML forms
are also summarized by displaying just the text labels
that prompt the user for input. We tested the relative
performance of the summarization methods by asking
human subjects to accomplish single-page information
search tasks. We found that the combination of
keywords and single-sentence summaries provides
significant improvements in access times and number of
required pen actions, as compared to other schemes. [5].
Most of the content adaptation work for
mobile devices concentrates on content transformation
to suit the display of smaller devices. Various content
transformation processes like layout change and content
format reconfiguration are used in [18] for adapting
web content to mobile user agents. The idea of
partitioning the web page into blocks (snippets), and
processing the information only in the useful blocks by
filtering out unnecessary ones is used in [19]. Some
content adaptation solutions like those in [26] target
only the multimedia content of the web pages. The task
of the general parser is to build a structure tree to
represent the hierarchical structure embedded in the
HTML source, while the content-specific parser
extracts specific, content related attributes. The general
parser maintains a list of HTML tags. Each tag in the
list corresponds to a content-specific parser. Then a tag
is identified in the HTML source. If it is not in the list,
it is ignored since there are no further adaptation rules
for the corresponding content. If the tag is in the list, the
general parser creates a new node in the appropriate
location in the structure tree. After the creation of the
node, the general parser passes the corresponding
HTML source between the current tag and the next tag
as well as the pointer to the newly created tree node to
the corresponding content specific parser.
The content-specific parser extracts the attributes of
the object from the HTML source and attaches the
information to the tree node. The page facts specific to
the object are generated by the content-specific parser.
Once the content-specific parser completes its content
processing, the control is sent back to the general
parser. The general parser continues a similar parsing
process on the next HTML tag until the whole structure
tree is built. The pointer will be moved up in the
hierarchy when an HTML close tag is met. The
structure tree is successfully built. The general parser
then traverses the tree and generates the page facts
about the hierarchical structure of the HTML page, such
as parent-child and sibling relationship.
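As a rough illustration of this two-stage parsing, the sketch below uses Python's standard html.parser to build such a structure tree. The tag list, the content-specific handlers, and the fact attributes are simplified examples chosen for this sketch, not the paper's actual rule set.

```python
from html.parser import HTMLParser

# Content-specific "parsers": one handler per tag in the list extracts the
# attributes relevant for adaptation and attaches them as facts to the node.
def parse_img(node, attrs):
    node["facts"] = {"src": attrs.get("src"), "width": attrs.get("width")}

def parse_a(node, attrs):
    node["facts"] = {"href": attrs.get("href")}

CONTENT_PARSERS = {"img": parse_img, "a": parse_a}
STRUCTURE_TAGS = {"div", "p", "table", "ul", "li"}   # kept for the hierarchy
VOID_TAGS = {"img", "br", "hr", "input"}             # never get a close tag

class GeneralParser(HTMLParser):
    """Builds a structure tree; tags with no handler and no structural role are ignored."""
    def __init__(self):
        super().__init__()
        self.root = {"tag": "root", "children": []}
        self.stack = [self.root]                      # pointer into the hierarchy

    def handle_starttag(self, tag, attrs):
        handler = CONTENT_PARSERS.get(tag)
        if handler is None and tag not in STRUCTURE_TAGS:
            return                                    # no adaptation rules for this tag
        node = {"tag": tag, "children": []}
        self.stack[-1]["children"].append(node)       # attach under the current parent
        if handler:
            handler(node, dict(attrs))                # content-specific pass
        if tag not in VOID_TAGS:
            self.stack.append(node)

    def handle_endtag(self, tag):
        # Move the pointer up in the hierarchy when a close tag is met.
        if len(self.stack) > 1 and self.stack[-1]["tag"] == tag:
            self.stack.pop()

parser = GeneralParser()
parser.feed('<div><p>Hello</p><img src="a.jpg" width="300"></div>')
print(parser.root)
```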
Content Adaptation Ladder
Given the anatomy of a web page, we
introduce a six-level content adaptation hierarchy for a
given web page. The first three levels of the Web page
which form the text-only ladder focus on textual
versions of the page while the final three levels which
form the advanced ladder focus on adaptation and
filtering techniques that preserve the basic structure of
the page.
1) Text-only ladder: The three levels of the text-only
ladder of a web page are:
Snippet page: In the snippet version of a page, we
extract the most important textual snippet of the web
page requested by the user. In the preprocessing step,
we extract the main body of the web page by removing
redundant data, advertisements and split the block-level
HTML components into a sequence of paragraphs. To
extract the main text snippet, we extend the basic
mechanism proposed in [27], which uses a simple
averaging mechanism to smooth the word count graph
at a paragraph level. We extend this scheme to include
the influence of a desired number of neighboring
paragraphs. We normalize the word count of every
paragraph by convolving the word count function with a
Gaussian function with a fixed width; that is, if $n_i$ denotes the word count of paragraph $i$, the normalized count $\tilde{n}_i$ is computed as

$$\tilde{n}_i \;=\; \frac{\sum_{j=i-p}^{i+p} n_j \, e^{-(j-i)^2 / 2\sigma^2}}{\sum_{j=i-p}^{i+p} e^{-(j-i)^2 / 2\sigma^2}}$$
where $p$ is the range (the number of samples considered before and after the current sample) and $\sigma$ is the fixed width of the Gaussian. A
salient feature of the Gaussian smoothing function is
that the current sample is given the highest weight and
the weight of the neighboring samples decays smoothly
with distance. This feature helps in distinguishing the
main text with higher probability.
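The smoothing step described above can be written directly from the formula. The short sketch below is an illustrative implementation; the window size p and the width sigma are example values, since the paper does not fix them in this section.

```python
import math

def smooth_word_counts(counts, p=2, sigma=1.0):
    """Gaussian-weighted moving average of per-paragraph word counts.

    counts : word count per paragraph
    p      : number of neighbouring paragraphs considered on each side
    sigma  : fixed width of the Gaussian kernel
    """
    n = len(counts)
    smoothed = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - p), min(n, i + p + 1)):
            w = math.exp(-((j - i) ** 2) / (2 * sigma ** 2))   # highest weight at j == i
            num += counts[j] * w
            den += w
        smoothed.append(num / den)
    return smoothed

# Paragraphs forming a long run of text in the middle stand out after smoothing.
print(smooth_word_counts([5, 8, 120, 140, 110, 9, 4]))
```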
Text-only version: In the text-only version,
we strip all unessential HTML tags and present a
condensed HTML (with a plain CSS) containing the
textual content of a page. This is done by repeatedly
using the above mentioned procedure of main text
snippet extraction to extract the next level of main
snippets. We then take the top 5 results of the main text
snippet extraction and form a text-only version of the
requested web page.
Page summarization (Level 0): At this level, we present
a summarization of the web page where we show the
important headings with a little more information about them. This method filters out everything but the
headings and the paragraphs next to each heading. This
technique works very well for news web pages or other
pages with textual data.
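As a rough sketch of this heading-plus-paragraph summarization, the snippet below keeps each heading and the first paragraph that follows it. It uses Python's standard html.parser; the choice of heading tags and the data structures are illustrative assumptions, not the paper's implementation.

```python
from html.parser import HTMLParser

class HeadingSummarizer(HTMLParser):
    """Keeps each heading and the first paragraph that follows it."""
    def __init__(self):
        super().__init__()
        self.summary = []            # list of (heading, first paragraph) pairs
        self.pending_heading = None  # heading waiting for its paragraph
        self.current = None          # tag currently being collected
        self.buffer = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3") or (tag == "p" and self.pending_heading):
            self.current = tag
            self.buffer = []

    def handle_data(self, data):
        if self.current:
            self.buffer.append(data)

    def handle_endtag(self, tag):
        if tag != self.current:
            return
        text = "".join(self.buffer).strip()
        if tag in ("h1", "h2", "h3"):
            self.pending_heading = text
        elif tag == "p" and self.pending_heading:
            self.summary.append((self.pending_heading, text))
            self.pending_heading = None
        self.current = None

s = HeadingSummarizer()
s.feed("<h2>Sports</h2><p>Match report ...</p><p>More detail</p>"
       "<h2>Weather</h2><p>Sunny tomorrow.</p>")
print(s.summary)   # [('Sports', 'Match report ...'), ('Weather', 'Sunny tomorrow.')]
```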
2) Advanced Ladder: The advanced ladder
supports three levels as outlined below.
Level 1: HTML, CSS, iframes, relevant JavaScript, images in headings
Level 2: Level 1 + images (compressed, down-sampled), iframes
Level 3: Level 2 + embedded objects
Level 1 extracts and presents only the critical
components of the page while Level 2 is a filtered and
adapted version of the page (removing ads and
unnecessary scripts). Level 3 presents an adapted
version of the page with no filtering. While generating
these different versions of the pages, we performed a
few critical optimizations to enhance load times and
reduce the web page size.
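One way to picture the whole ladder is as a mapping from each rung to the page components it lets through. The mapping below is an illustrative restatement of the levels listed above; the component labels are descriptive names chosen here, not identifiers from the paper.

```python
# Illustrative mapping from adaptation level to the page components it keeps.
ADAPTATION_LADDER = {
    "snippet":   {"main_text_snippet"},
    "text_only": {"condensed_html", "plain_css", "textual_content"},
    "level_0":   {"headings", "leading_paragraphs"},
    "level_1":   {"html", "css", "iframes", "relevant_scripts", "heading_images"},
    "level_2":   {"html", "css", "iframes", "relevant_scripts", "heading_images",
                  "downsampled_images"},                        # Level 1 + compressed images
    "level_3":   {"html", "css", "iframes", "relevant_scripts", "heading_images",
                  "downsampled_images", "embedded_objects"},    # adapted, no filtering
}

def allowed_components(level: str) -> set:
    """Components a middle-box would let through for the chosen rung."""
    return ADAPTATION_LADDER[level]

print(sorted(allowed_components("level_2")))
```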
HTML rewriting: Minification is the practice of reducing the size of the code, thereby improving load times. Minification usually involves removing white space (spaces, tabs, newlines) and comments. In the case of JavaScript, this improves response-time performance because the size of the downloaded file is reduced. This contributed to over a 5 – 10 KB reduction in web page sizes.
Rendering time optimization: Moving the Cascading Style Sheet to the header of the HTML allows progressive rendering. Similarly, moving the JavaScript [24] to the end of the HTML code enhances the user experience, since it enables the lightweight components of the web page to be downloaded first.
Ads and favicon filtering: Scripts that contain particular keywords are potential signatures of advertisements [6]. To separate scripts that are related to a page from those that are not, we constructed a large list of keywords used to identify scripts related to advertisements. Similarly, we distinguish images related to the web page from those that are not based on the name and location of the images and the location of the image pointer in the HTML. We also remove favicons across all adaptation levels, which saves 2 – 10 KB.
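The keyword-based signature matching can be sketched as follows. The keyword list here is a short illustrative sample (the paper's actual list is described as large), and the function names are chosen for this sketch.

```python
from urllib.parse import urlparse

# Illustrative keyword list; the actual signature list would be much larger.
AD_KEYWORDS = ("adserver", "doubleclick", "googlesyndication", "banner", "popunder")

def is_ad_script(src: str, inline_code: str = "") -> bool:
    """Flag a script as advertising if its URL or body matches known signatures."""
    haystack = (src or "").lower() + " " + inline_code.lower()
    return any(keyword in haystack for keyword in AD_KEYWORDS)

def is_favicon(href: str) -> bool:
    """Favicons are removed across all adaptation levels."""
    return urlparse(href).path.lower().endswith((".ico", "favicon.png"))

print(is_ad_script("https://ads.example.net/adserver/show.js"))  # True
print(is_favicon("/favicon.ico"))                                # True
```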
Image and embedded objects filtering: We use standard approaches of down-sampling images and removing embedded objects (which may include multimedia content such as sounds, videos, and Flash) at the appropriate levels of the adaptation ladder. The typical size of a Flash object in web pages is 85.4 KB, which contributes 13% to the average web page size.
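A standard down-sampling step of the kind mentioned above can be sketched as below, assuming the Pillow imaging library is available; the width cap and JPEG quality are illustrative parameters, not values taken from the paper.

```python
from io import BytesIO
from PIL import Image   # assumes the Pillow library is installed

def downsample_image(data: bytes, max_width: int = 320, quality: int = 40) -> bytes:
    """Re-encode an image so it fits a small display and a tight byte budget."""
    img = Image.open(BytesIO(data))
    if img.width > max_width:
        new_height = int(img.height * max_width / img.width)
        img = img.resize((max_width, new_height))      # shrink to the display width
    out = BytesIO()
    img.convert("RGB").save(out, format="JPEG", quality=quality)  # lossy re-encode
    return out.getvalue()

# Usage: adapted = downsample_image(original_bytes) before sending to the client.
```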
Type       50%    75%    90%    95%
Snippet    110    200    300    350
Level 1      3      7     15     20
Level 2      2      5     10     16

TABLE II: CUMULATIVE DISTRIBUTION OF THE SIZE REDUCTION FACTOR FROM THE ORIGINAL PAGE.
Size reduction: To measure the size reduction,
we ran our system across the top sites from Alexa for
different levels of adaptation. Table II summarizes our
results. The table shows the cumulative distribution of
the reduction factor page size when using our system
for different adaptation levels at 50%, 75%, 90% and
95% of the CDF. We observe that in the median case,
Level 1 and Level 2 adaptations provide a size
reduction by a factor of 3 and 2 respectively while
snippet adaptation can provide a reduction by a factor
of 110. The reduction factor of page size increases up to
20, 60, and 350 times for adaptation levels (1, 2, and
snippet) at 95% of the CDF.
IV. SYSTEM DESIGN
A. Rule base
The rules in the Rule Base are classified into
content analysis rules and content adaptation rules. The
content adaptation rules define the manner in which
various objects are adapted. The content analysis rules
process the raw page facts and derive new facts of
higher abstraction.
B. Content Analysis Rules
Inferred content facts can have many different
varieties. Width information is very important in
content adaptation, but it is not always provided in
HTML documents. The width information in HTML
documents will be directly utilized when it is available.
Otherwise, some rules are designed to derive the width
information. Some rules assign a width value to the
object based on the analysis of many Web pages. For
example, the widths of some commonly used form
objects can be set to a fixed value, such as 25 pixels for
a check box, or 20 pixels for a radio button. Other width
information could be computed from facts. For
example, the width of a button can be estimated from
the number of characters in its label.
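The width-inference rules described above can be pictured as a small fallback chain. The default pixel values come from the text; the per-character width used for button labels is an assumed example figure, and the function name is illustrative.

```python
# Default widths (in pixels) for common form objects, as described above.
DEFAULT_WIDTHS = {"checkbox": 25, "radio": 20}
CHAR_WIDTH_PX = 8          # assumed average character width for button labels

def infer_width(obj_type: str, declared_width=None, label: str = "") -> int:
    """Derive a width fact when the HTML does not declare one."""
    if declared_width:                     # use the declared value when available
        return int(declared_width)
    if obj_type in DEFAULT_WIDTHS:         # fixed defaults for common form objects
        return DEFAULT_WIDTHS[obj_type]
    if obj_type == "button":               # estimate from the number of label characters
        return max(len(label), 1) * CHAR_WIDTH_PX
    return 0                               # unknown: leave for other rules

print(infer_width("checkbox"))                 # 25
print(infer_width("button", label="Submit"))   # 48
```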
C. Content Adaptation Rules
Content adaptation rules derive adaptation
instructions from facts. Each adaptation instruction
corresponds to an adaptation procedure that may be
implemented in different programming languages or
simply using existing content adaptation components.
The content adaptation rules are organized according to
the Web object types that they are applied to. Many
rules are specific to certain object types. For example,
the rules to summarize the text only work on text
objects. The rule to reduce image color depth is only
applicable to image objects. Some rules may be specific
to structure objects or pointer objects. The rules for
structure objects generally adapt the structure and
determine the layout of the adapted page. The rules for
pointer objects generally extract and rearrange the
pointer objects in a page to facilitate navigation
between pages. The goal of rule classification is to
improve the inference speed. Consider the rule for reducing the color depth of an image object. Although
the color depth can be automatically adjusted at the
client device, implementing the rule will significantly
reduce the size of the image file to be transferred over
the network, which is an important concern due to the
low network bandwidth and processing power of
mobile devices. There are also some rules that are
general and applicable to multiple object types. For
example, the rule to change the object width can be
applied to both image and structure objects.
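The classification of rules by object type, plus a handful of general rules shared across types, can be sketched as a simple dispatch table. The rule names below are descriptive labels drawn from the examples in this section; the data structure itself is an illustrative assumption, not the paper's rule engine.

```python
# Rules are registered per object type so the engine only tries relevant ones.
RULES = {
    "image":     ["reduce_color_depth", "downsample"],
    "text":      ["summarize"],
    "structure": ["change_width", "determine_layout"],
    "pointer":   ["extract_links", "build_navigation"],
}
GENERAL_RULES = ["change_width"]       # applicable to multiple object types

def adaptation_instructions(obj_type: str) -> list:
    """Return the ordered list of adaptation procedures to run for an object."""
    specific = RULES.get(obj_type, [])
    # Keep the type-specific order and avoid running a general rule twice.
    return specific + [rule for rule in GENERAL_RULES if rule not in specific]

print(adaptation_instructions("image"))   # ['reduce_color_depth', 'downsample', 'change_width']
```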
V. CONCLUSION
The majority of mobile phone consumers in developing countries are subject to costly data pricing schemes of one form or another, which results in high usage bills. The proposed system offers a cost-sensitive mobile web browsing framework that systematically adapts web pages and content as a function of the user's data pricing plan and usage level, in order to significantly decrease data expenses while preserving quality. In addition to minimizing cost, the proposed methodology improves the user's browsing experience by delivering the best possible adaptation of a web page for a given data cost and clarity. The evaluation demonstrates the considerable cost savings, with content preservation, that the current implementation can offer to mobile web users.
REFERENCES
[1]. Let’s make the web faster - google code. Http
articles/web-metrics.html.
[2]. Opera mini.
http://www.opera.com/mobile/download/.
[3]. State of the mobile web (2011). http://www.opera.com/smw/.
[4]. Websiteoptimization.com. http://www.websiteoptimization.com/.
[5]. P. Baudisch, X. Xie, C. Wang, and W. Ma.
Collapse-to-zoom: viewing web pages on small
screen devices by interactively removing
irrelevant content. In Proceedings of the 17th
annual ACM symposium on User interface
software and technology, pages 91–94. ACM,
2004.
[6]. T. Bickmore, A. Girgensohn, and J. Sullivan. Web
page filtering and re-authoring for mobile users.
The Computer Journal, 42(6):534–546,
1999.
[7]. A. Blekas, J. Garofalakis, and V. Stefanis. Use of
rss feeds for content adaptation in mobile web
browsing. In Proceedings of the 2006 international
cross-disciplinary workshop on Web accessibility
(W4A): Building the mobile web: rediscovering
accessibility?, pages 79–85. ACM, 2006.
[8]. O. Buyukkokten, O. Kaljuvee, H. Garcia-Molina,
A. Paepcke, and T. Winograd. Efficient web
browsing on handheld devices using page
and form summarization. ACM Transactions on
Information Systems, 20(1):82–115, 2002.
[9]. J. Charzinski. Traffic Properties, Client Side
Cachability and CDN Usage of Popular Web
Sites. Measurement, Modelling, and Evaluation of
Computing Systems and Dependability and Fault
Tolerance, 2010.
[10]. J. Chen, L. Subramanian, and J. Li. Ruralcafe:
web search in the rural developing world.
Proceedings of the 18th International Conference
on World Wide Web, 2009.
[11]. X. Chen and X. Zhang. Coordinated data
prefetching by utilizing reference information at
both proxy and web servers. ACM SIGMETRICS
Performance Evaluation Review, 2001.
[12]. Y. Chen, W. Ma, and H. Zhang. Detecting web
page structure for adaptive viewing on small form
factor devices. In Proceedings of the 12th
international conference on World Wide Web,
May, pages 20–24, 2003.
[13]. J. Domènech, A. Pont, J. Sahuquillo, and J. Gil.
A user-focused evaluation of web prefetching
algorithms. Computer Communications
Review, 30(10), 2007.
[14]. L. Fan, P. Cao, W. Lin, and Q. Jacobson. Web
prefetching between lowbandwidth clients and
proxies: potential and performance. Proceedings
of the ACM SIGMETRICS International
Conference on Measurement and modeling of
computer systems, 1999.
[15]. A. Fox and E. Brewer. Reducing www latency
and bandwidth requirements by real-time
distillation. Computer Networks and ISDN
Systems, 28(7-11):1445–1456, 1996.
[16]. Y. Hwang, J. Kim, and E. Seo. Structure-aware
web transcoding for mobile devices. Internet
Computing, IEEE, 7(5):14–21, 2003.
[17]. J. Korva, J. Plomp, P. Määttä, and M. Metso.
On-line service adaptation for mobile and fixed
terminal devices. In Mobile Data Management,
pages 254–259. Springer, 2001.
[18]. T. Laakko and T. Hiltunen. Adapting web content
to mobile user agents. Internet Computing, IEEE,
9(2):46–53, 2005.
[19]. E. Lee, J. Kang, J. Choi, and J. Yang. Topic-
specific web content adaptation to mobile devices.
In Proceedings of the 2006 IEEE/WIC/ACM
International Conference on Web Intelligence,
pages 845–848. IEEE Computer Society, 2006.
[20]. T. F. MediaLab. Web content adaptation - white
paper, 2004.
[21]. P. K. Mishra, K. Baliyan, Deepshika, and S.
Singh. User interactive web content adaptation for
mobile devices. In 2010 Impact of Globalization
and Privatization on meeting India’s IT HR needs,
March 6-7, 2010.
[22]. I. Mohomed, J. Cai, S. Chavoshi, and E. de Lara.
Context-aware interactive content adaptation. In
Proceedings of the 4th international conference on
Mobile systems, applications and services, pages
42–55. ACM, 2006.
[23]. I. Mohomed, J. Cai, and E. De Lara. Urica:
Usage-aware interactive content adaptation for
mobile devices. ACM SIGOPS Operating Systems
Review, 40(4):345–358, 2006.