Techniques to Control Memory Hogging by Web Browsers: An In-Depth Review

International Journal of Computer Applications Technology and Research
Volume 7, Issue 04, 185-192, 2018, ISSN: 2319-8656
www.ijcat.com
Harun K. Kamau, Dr. O. McOyowo, Dr. O. Okoyo, Dr. C. Ratemo
School of Computing and Informatics, Maseno University, Maseno, Kenya
Abstract: The web browser is to date one of the most popular pieces of software in modern computing systems, serving as the main interface for accessing the vast information available on the Internet. Browser technologies have advanced to the point where browsers do far more than before: they parse not only plain text and Hypertext Markup Language (HTML), but also images, videos and other intricate protocols. These advancements have increased the demand for memory, and this increased demand poses a challenge in multiprogramming environments. The contemporary browser reference model has no memory control mechanism to limit the maximum memory a browser can use, which leads to memory hogging by contemporary browsers. This paper reviews emergent techniques that have been used to control memory hogging by browsers built on the contemporary reference architecture. We review the architectures of major browsers, including Mozilla Firefox, Google Chrome and Internet Explorer, and give an in-depth study of the techniques that have been adopted with a view to solving this problem. From these reviews we derive the weaknesses of the contemporary browser architecture and the inefficiencies of each technique used.

Keywords: Browser reference architecture, memory hogging, web browser
1. INTRODUCTION

The Internet is progressively becoming an indispensable component of today's life. More often than not, people rely heavily on the expediency and flexibility of Internet-connected devices for learning, shopping, entertainment, communication and, in general, activities that would otherwise necessitate their physical presence (Sagar A. et al., 2010). Access to information or services via the Internet requires a medium, and the browser operates as that medium. It is the prime component of a computer system whenever Internet services are required. A browser retrieves, displays and traverses information resources on the web (World Wide Web Consortium, 2004).

Information resources comprise text, images, video, or other pieces of content. These resources are accessed and identified by a Uniform Resource Identifier (URI). The first browser, known as WorldWideWeb, was created in the early 1990s by Tim Berners-Lee (Tim Berners-Lee, 1999). Since then, browsers have seen tremendous advancements in their architectures and usage. The earliest browsers, Nexus, Mosaic and Netscape, were less complex and used considerably less computer memory; they were commonly used for viewing basic HTML pages. With the growth of the Internet, browsers have gained enormous popularity globally.
1.1 Motivation

Today, the browser is the most used computer application (Allan and Michael, 2006; Antero et al., 2008). This phenomenon may be attributed to its various uses in everyday life. With limited local computing power to process the voluminous data generated from various sources, users have resorted to technologies like cloud computing and other online solutions that offer robust processing power, vast storage, scalability, reliability and on-demand services. In these cases, resources are accessed as services via the Internet through thin clients, especially browsers.

Originally, web information comprised a set of documents that in most cases contained text and hyperlinks to other related documents, with little or no client-side code. All rendered content originated from a single source. Web content has become increasingly complex in the pursuit of interactive features. Today, web programs have advanced into highly interactive applications that execute on both the server side and the client machine. With these advancements, modern web pages are no longer simple documents. They comprise highly dynamic contents that work together with each other. In other words, a web page is now a "system": its dynamic contents run as programs, interacting with users, accessing other contents both on the web page and in the hosting browser, invoking browser Application Programming Interfaces (APIs), and interacting with programs on the server side. These advancements require adequate computer memory in order to run properly on a host computer.

Consequently, these advancements in content rendering have raised the memory demand of browsers. In fact, memory allocation to a browser rises gradually from tens of megabytes (MB), to hundreds of MB and eventually to gigabytes (Doug DePerry, 2012). This fact alone categorizes browsers as today's memory "wolves", and indeed it can lead to browser crashes. The size of Random Access Memory (RAM) is an important factor in the running of software and consequently determines the level of multiprogramming. A single process consuming nearly a gigabyte of RAM on a one-GB computer will starve other processes and therefore lower the multiprogramming level. This starvation may eventually slow the system to a crawl. However, browsers behave differently on different platforms and with the content they load.
2. METHODS

The works reviewed were based on the contemporary browser architecture and the optimization techniques adopted for it.

2.1 Introduction

As noted above, browsers have advanced in terms of content rendering, and this has raised their memory demand: memory allocation to a browser rises gradually from tens of megabytes to hundreds of megabytes and eventually to gigabytes (Doug DePerry, 2012), categorizing browsers as today's memory "wolves". The size of RAM determines the level of multiprogramming, and this is especially relevant to browser efficiency. A single process consuming nearly a gigabyte of RAM on a one-GB computer starves other processes, lowering the multiprogramming level; this starvation may eventually slow the system to a crawl and even crash the browser. However, browsers behave differently on different platforms and with the content they display.
2.2 Causes of Memory Hogging

Many users in the respective browser forums have regularly affirmed that memory hogging is attributable to several factors. The first is the length of time the browser has been in use. As the browser is used, it gradually takes longer to load during startup, its speed may decrease, and browsing eventually starts to slow down. This is a very frequent problem and occurs partially because of fragmentation in the databases browsers use. In particular, if Firefox is left running for a number of hours, consumed memory of well over a gigabyte can be observed even with only a few tabs open; a long-running memory leak issue that sometimes plagues Firefox (Doug DePerry, 2012).

Secondly, when a user opens many tabs simultaneously, the browser uses more RAM. This is because each tab is designed to cache pictures, text and other active data, which keeps page data persistent while using multiple tabs. Browsers such as Chrome and Firefox have ways to turn this behavior off, but the user may not wish to do so: without caching, YouTube videos will not play in the background, and most real-time web apps will fail to work correctly.

Memory leakage is another factor. A memory leak happens when the browser, for some reason, does not release memory from objects that are no longer needed. This may happen because of browser bugs, browser extension problems and, rarely, developer mistakes in the code architecture. Leaks may occur because of browser extensions interacting with the page; more importantly, a leak may occur because of interaction bugs between two extensions (Ilya Kantor, 2011). For instance, when a Skype extension and an antivirus extension are both enabled, memory leaks; when either of them is off, it does not.
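To make the leak mechanism concrete, the following is a minimal sketch, written in Python as a language-neutral analogy rather than actual browser code. The Page class, the listeners registry and the page counts are invented for illustration; the point is that a long-lived callback registry keeps every page object reachable after its tab is "closed", so the memory manager can never reclaim it:

```python
# Illustrative analogy of a browser/extension memory leak (not browser code).
class Page:
    def __init__(self, url: str):
        self.url = url
        self.dom = ["<parsed content>"] * 10_000  # stand-in for cached page data

listeners = []  # long-lived registry, e.g. owned by an extension

def open_page(url: str) -> None:
    page = Page(url)
    # The "extension" registers a callback that closes over `page`.
    listeners.append(lambda event: print(page.url, event))

def close_page() -> None:
    # The tab is "closed", but nothing removes the callback from `listeners`,
    # so every Page (and its parsed content) stays reachable: a leak.
    pass

for i in range(100):
    open_page(f"https://example.com/{i}")
    close_page()

print(f"{len(listeners)} pages still reachable after all tabs were closed")
```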
2.3 Browser Reference Architecture

The following illustrates the conventional browser architecture adopted in building contemporary browsers. We look closely at how specific browsers have adopted this model.

Figure 1: Browser Reference Architecture
The User Interface component provides the methods with which a user interacts with the Browser Engine. The User Interface provides the standard features (preferences, printing, downloading, and toolbars) users expect when dealing with a desktop application.

The Browser Engine component provides a high-level interface to the Rendering Engine. The Browser Engine provides methods to initiate the loading of a Uniform Resource Locator (URL) and other high-level browsing actions (reload, back, forward). The Browser Engine also provides the User Interface with various messages relating to errors and loading progress.

The Rendering Engine component produces the visual representation of a given URL. The Rendering Engine interprets the HTML, Extensible Markup Language (XML), and JavaScript that comprise a given URL and generates the layout that is displayed in the User Interface. A key component of the Rendering Engine is the HTML parser; this parser is quite complex because it allows the Rendering Engine to display poorly formed HTML pages.

The Networking component provides functionality to handle URL retrieval using the common Internet protocols of Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS) and File Transfer Protocol (FTP). The Networking component handles all aspects of Internet communication and security, character set translations and MIME type resolution. The Networking component may implement a cache of retrieved documents to minimize network traffic.

The JavaScript Interpreter component executes the JavaScript code that is embedded in a website. Results of the execution are passed to the Rendering Engine for display. The Rendering Engine may disable various actions based on user-defined properties.

The XML Parser component is used to parse XML documents.

The Display Backend component is tightly coupled with the host operating system. It provides primitive drawing and windowing methods that are host operating system dependent.

The Data Persistence component manages user data such as bookmarks and preferences.
3. BROWSER ARCHITECTURES

To find out how browsers have been developed, their architectures were reviewed to determine whether they are true derivations of the reference architecture.
3.1 Google Chrome

Google Chrome uses a multi-process architecture, which gives it a competitive edge in performance over other browsers. Each tab has its own process, which runs independently of other tabs. This allows one tab process to dedicate itself to a single web application, thereby increasing browser performance, and it protects the browser application from bugs and glitches in the rendering engine. Furthermore, it restricts access from each rendering engine process to the others and to the rest of the system. This offers memory protection and access control as manifested in operating systems. The multi-process architecture also increases the stability of the browser, as it provides insulation: if one process encounters a bug and crashes, the browser itself and the other applications running concurrently are preserved.

Functionally, this is an improvement over other browsers, as highly valuable user information in other tabs is preserved. Google Chrome used WebKit as its layout engine until version 27; later versions have used Blink. V8 has been used as the JavaScript interpreter in all versions. The components of Chrome are distributed under various open source licenses. Although Google's developers have variant components in their architectural design, they derived it from the reference architecture.
Figure 3.1: Google Chrome Architecture
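The isolation benefit of a process-per-tab design can be sketched in a few lines of Python. This is an analogy, not Chrome's actual code: each "tab" runs in its own OS process, and one deliberately faulty renderer crashes without taking down the controlling process or the other tabs:

```python
# Analogy for multi-process tab isolation (not Chrome's implementation).
import multiprocessing as mp

def render_tab(url: str) -> None:
    # Stand-in for a renderer process; one URL deliberately "crashes".
    if "buggy" in url:
        raise RuntimeError(f"renderer for {url} crashed")
    print(f"rendered {url}")

if __name__ == "__main__":
    tabs = ["https://example.com", "https://buggy.example", "https://news.example"]
    procs = [mp.Process(target=render_tab, args=(u,)) for u in tabs]
    for p in procs:
        p.start()
    for p, url in zip(procs, tabs):
        p.join()
        state = "ok" if p.exitcode == 0 else "crashed, but isolated"
        print(f"{url}: {state}")
```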
3.2 Microsoft Internet Explorer
Essential to the browser's architecture is the use of the
Component Object Model (COM), which governs the
interaction of all of its components and enables component
reuse and extensibility (MSDN, 2016). Internet Explorer uses
Jscript and VBScript as JavaScript interpreter and Trident
layout engine.
Figure 3.2: Internet Explorer Architecture
The following is a description of each of Microsoft Internet Explorer's six key framework components:

IExplore.exe is at the top level and is the Internet Explorer executable. It is a small application that relies on the other main components of Internet Explorer to do the work of rendering, navigation and protocol implementation.

Browseui.dll provides the user interface to Internet Explorer. Often referred to as the "chrome," this Dynamic Link Library (DLL) includes the Internet Explorer address bar, status bar, menus, and so on.

Shdocvw.dll provides functionality such as navigation and history, and is commonly referred to as the WebBrowser control. This DLL exposes ActiveX control interfaces, enabling you to easily host the DLL in a Windows application using frameworks such as Microsoft Visual Basic, Microsoft Foundation Classes (MFC), Active Template Library (ATL), or Microsoft .NET Windows Forms. When your application hosts the WebBrowser control, it obtains all the functionality of Internet Explorer except for the user interface provided by Browseui.dll. This means that you will need to provide your own implementations of toolbars and menus.

Mshtml.dll is at the heart of Internet Explorer and takes care of its HTML and Cascading Style Sheets (CSS) parsing and rendering functionality. Mshtml.dll is sometimes referred to by its code name, "Trident". Mshtml.dll exposes interfaces that enable you to host it as an active document. Other applications such as Microsoft Word, Microsoft Excel, Microsoft Visio, and many non-Microsoft applications also expose active document interfaces so they can be hosted by Shdocvw.dll. For example, when a user browses from an HTML page to a Word document, Mshtml.dll is swapped out for the DLL provided by Word, which then renders that document type. Mshtml.dll may be called upon to host other components depending on the HTML document's content, such as scripting engines (for example, Microsoft JScript or Microsoft Visual Basic Scripting Edition (VBScript)), ActiveX controls and XML data.

Urlmon.dll offers functionality for MIME handling and code download.

WinInet.dll is the Windows Internet protocol handler. It implements the HTTP and File Transfer Protocol (FTP) protocols along with cache management.
Microsoft's Internet Explorer architecture utilizes the reference model's components, though it varies in design. IExplore.exe is a wrapper for the whole application. Browseui.dll serves as the user interface, while Shdocvw.dll performs the functions of a browser engine. Mshtml.dll is the core component that serves as the rendering engine; it has HTML, CSS, XML and JavaScript parsers. WinInet.dll provides the networking functions provided for in the reference architecture.
3.3 Mozilla Firefox

The following model has been used in the design of Mozilla Firefox (Andre C. et al., 2007).

Figure 3.3: Mozilla Firefox architecture
The User Interface is split over two subsystems, allowing for
parts of it to be reused in other applications in the Mozilla suite
such as the mail/news client. All data persistence is provided
by Mozilla’s profile mechanism, which stores both high-level
data such as bookmarks and low-level data such as a page
cache.
Mozilla’s Rendering Engine is larger and more complex than
that of other browsers. One reason for this is Mozilla’s
excellent ability to parse and render malformed or broken
HTML. Another reason is that the Rendering Engine also
renders the application’s cross-platform user interface. The
User Interface (UI) is specified in platform-independent
Extensible User Interface Language (XUL), which in turn is
mapped onto platform-specific libraries using specially written
adapter components. This architecture distinguishes Mozilla
from other browsers in which the platform-specific display and
widget libraries are used directly, and it minimizes the
maintenance effort required to support multiple, diverse
platforms.
Recently, the core of Mozilla has been transformed into a
common runtime called XULRunner, exposing the Rendering
Engine, Networking, JavaScript Interpreter, Display Backend,
and Data Persistence subsystems to other applications.
XULRunner allows developers to use modern web
technologies to create rich client applications, as opposed to
typical browser-based web applications. In fact, the Mozilla
developers are working on transitioning newer Mozilla-based
applications such as Firefox and Thunderbird to use
XULRunner directly, rather than each using a separate copy of
the core libraries. All components of this model map exactly
onto those in the reference architecture.
3.4 Weaknesses of the Current Browser
Architecture
a) The rendering engine processes the requests made by
the browser engine by producing a visual display of the
requested URL. This works only as long as memory remains
available to the browser. If the operating system can no
longer allocate any more memory, the computer freezes and
hence becomes unusable.
b) The browser process prevents other legitimate
processes from being loaded into main memory if it
consumes almost all available memory. This reduces the
level of multiprogramming.
From the review of the above-named architectures, memory
hogging still remains a thorny issue. In an attempt to reduce
its impact, third-party software has been developed.
4. MEMORY OPTIMIZATION
TECHNIQUES
To free memory that the browser no longer needs, several
third-party tools have been used. Memory optimization
programs include, but are not limited to, the following:
4.1 Firemin
With Firemin for Firefox, you can effectively stop Firefox
memory leaks automatically. As memory usage of this popular
browser increases, your system slows down and you're stuck
with limited system resources. In fact, Firefox can use up to
500MB of memory if you use the browser continuously (F.
Ortega, 2013). Firemin forces Firefox to give back the memory
it took from Windows and allows you to use Firefox in an
optimized environment.
Firemin does not do anything that Windows does not do itself
when the system runs out of RAM. It calls the Windows
function EmptyWorkingSet over and over again in a loop to
free up memory. Calling the function removes as many pages
as possible from the working set of the specified process. The
program ships with a slider that you can use to set the desired
interval in which you want it to call the function.
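A minimal C sketch of this mechanism follows. It is an
illustration of the technique, not Firemin's actual code: the
process ID and trim interval are hypothetical, and the program
links against psapi.lib.

/* Sketch of Firemin's technique: periodically trim a target
 * process working set via the Win32 EmptyWorkingSet call. */
#include <windows.h>
#include <psapi.h>   /* EmptyWorkingSet; link with psapi.lib */
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;        /* hypothetical Firefox process ID */
    DWORD intervalMs = 500;  /* hypothetical trim interval */

    /* EmptyWorkingSet needs these two access rights. */
    HANDLE hProc = OpenProcess(
        PROCESS_QUERY_INFORMATION | PROCESS_SET_QUOTA, FALSE, pid);
    if (hProc == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    for (;;) {
        /* Removes as many pages as possible from the working
         * set; pages are faulted back in on demand, at a cost. */
        if (!EmptyWorkingSet(hProc)) {
            fprintf(stderr, "EmptyWorkingSet failed: %lu\n",
                    GetLastError());
            break;
        }
        Sleep(intervalMs);
    }

    CloseHandle(hProc);
    return 0;
}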
However, Firemin.exe has limitations: its technical security
rating is 30% dangerous, because it records keyboard and mouse
inputs, monitors applications and manipulates other programs.
Moreover, some malware camouflages itself as Firemin.exe,
particularly when located in the C:\Windows or
C:\Windows\System32 folder. Also, Firemin is only compatible
with Mozilla Firefox.
Figure 4.1: Firemin
4.2 Wise Memory Optimizer
Wise Memory Optimizer helps you free up and tune up the
physical memory taken up by some unknown non-beneficial
applications to enhance PC performance. You can enable
automatic optimization mode when the free PC memory goes
below a value that you can specify, and make Wise Memory
Optimizer run even when the CPU is idle, as well as adjust the
amount of memory you want to free up. Then it will optimize
PC memory automatically in the background.
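The trigger described above can be sketched in C using the Win32
GlobalMemoryStatusEx call. This is an illustration only, not the
tool's actual code; the threshold stands in for the
user-specified value.

/* Sketch of an automatic-optimization trigger: poll free
 * physical memory and act when it drops below a threshold. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Placeholder for the user-specified threshold (here 1 GB). */
    DWORDLONG thresholdBytes = 1024ULL * 1024 * 1024;

    for (;;) {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (GlobalMemoryStatusEx(&ms) &&
            ms.ullAvailPhys < thresholdBytes) {
            printf("Free RAM below threshold: optimize now\n");
            /* ...invoke a trimming routine here, e.g. the
             * EmptyWorkingSet loop sketched in section 4.1... */
        }
        Sleep(5000);  /* poll every five seconds */
    }
}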
However, this tool does not prevent the browser from hogging
memory; it only reclaims memory from unknown non-beneficial
applications.
Figure 4.2: Wise Memory Optimizer
4.3 SpeedyFox
SpeedyFox is a tool designed specifically for compacting the
SQLite database files which will in turn reduce the time taken
to read from and write to them. In addition to Firefox which it
was originally designed for, SpeedyFox can now also compact
the databases for the Chrome, Epic Browser, SRWare Iron and
Pale Moon browsers. It also supports Mozilla Thunderbird and
Skype.
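Compacting an SQLite file is typically done with the VACUUM
statement, which rebuilds the database and reclaims free pages.
The following minimal C sketch illustrates the operation under
that assumption; the profile database path is a hypothetical
example, and the program links against the sqlite3 library.

/* Sketch of SQLite database compaction via VACUUM. */
#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db = NULL;
    const char *path = "places.sqlite";  /* example profile DB */

    if (sqlite3_open(path, &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* VACUUM rewrites the whole file; it fails if another
     * process (the browser) holds the database, which is why
     * SpeedyFox requires the target programs to be closed. */
    char *err = NULL;
    if (sqlite3_exec(db, "VACUUM;", NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "VACUUM failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}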
Upon running the portable executable, SpeedyFox
automatically detects and loads the default profile for each of
the supported applications. As they’re very popular these days,
it’s also possible to load custom profiles for Firefox or Chrome
portable versions. Click the SpeedyFox menu bar and select
“Add custom profile” or drag the profile folder and drop it onto
the SpeedyFox window.
Simply tick the application profiles to optimize and click the
Optimize! button; SpeedyFox will start to compact the SQLite
databases. The progress window shows which databases are
optimized and also how much space is saved. You need to make
sure the programs being optimized are not running at the time
or they won’t be processed. In a quick test it reduced 14MB of
Firefox databases to 6MB and 192MB of Chrome databases to
186MB. The author of SpeedyFox recommends running the tool
every 1-2 weeks depending on your usage of the included
browsers.
Though the tool increases Mozilla Firefox's launch speed, it
does not prevent memory hogging; it merely compacts the cached
databases from time to time.
Figure 4.3: SpeedyFox
4.4 All Browsers Memory Zip
All Browsers Memory Zip has no database compacting
functions but is a dedicated memory-optimizing tool for a large
number of popular web browsers. It works very much like
another memory optimizer called CleanMem, but this tool only
handles browsers. In addition to Chrome and Firefox, it also
works with other popular browsers such as Opera, Internet
Explorer and Maxthon. The program is portable but has
separate 32-bit and 64-bit versions, and when you run it there
will be a small tooltip and then All Browsers Memory Zip will
sit in the system tray optimizing the memory of any running
supported browsers. If you open Task Manager
(Ctrl+Shift+Esc) before you launch the tool, you will see the
used memory for the browser process suddenly decreases by a
massive amount. It is not uncommon to see 1GB+ of memory
usage drop to less than 10MB in a few seconds.
Right-click the tray icon to pause optimization; pressing Usage
Controller will pop up the window shown in Figure 4.4, which
allows you to set the maximum amount of RAM for each browser
and edit the shortcut keys. Just select the browser from the
dropdown, enter the maximum amount in megabytes and click Set.
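Such a per-browser cap can be sketched in C as a polling loop
that trims the working set whenever it exceeds the limit. This
is an illustration of the idea, not the tool's actual code: the
PID and cap are hypothetical, and the program links against
psapi.lib.

/* Sketch of a per-process memory cap: poll the working set
 * size and trim it whenever it exceeds a user-chosen limit. */
#include <windows.h>
#include <psapi.h>   /* GetProcessMemoryInfo, EmptyWorkingSet */

int main(void)
{
    DWORD pid = 1234;                      /* hypothetical browser PID */
    SIZE_T capBytes = 512u * 1024 * 1024;  /* hypothetical 512MB cap */

    HANDLE hProc = OpenProcess(
        PROCESS_QUERY_INFORMATION | PROCESS_SET_QUOTA, FALSE, pid);
    if (hProc == NULL)
        return 1;

    for (;;) {
        PROCESS_MEMORY_COUNTERS pmc;
        if (GetProcessMemoryInfo(hProc, &pmc, sizeof(pmc)) &&
            pmc.WorkingSetSize > capBytes) {
            /* Over the cap: push pages out of the working set. */
            EmptyWorkingSet(hProc);
        }
        Sleep(1000);  /* poll once per second */
    }
}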
This tool must execute at all times while a browser process is
running, and it itself requires a significant amount of memory.
Consequently, it impacts performance negatively when streaming
content over the Internet.
Figure 4.4: All Browsers Memory Zip usage controller
5. CONCLUSION
A review of various techniques adopted with a view to
controlling memory hogging was presented in this paper. It is
evident that memory hogging among various browsers is still a
thorny issue. It is desired that computer applications use
little memory and execute fast so as to allow as many programs
as possible to be loaded into main memory for execution. With
browsers being among such applications, this still remains an
issue under investigation. Many third-party applications have
been developed in a quest to reduce memory consumption. These
applications include Firemin, Wise Memory Optimizer, SpeedyFox
and All Browsers Memory Zip. After analyzing these tools, it
has been found that their memory control is inefficient and
that they suffer from poor compatibility, overhead to users
and decreased browser performance. An interesting issue has
been found in the browser reference architecture: the
contemporary architecture in use today aggravates this
problem, as the model lacks a memory control mechanism that
would complement these third-party applications.
Today, browsers have become a major platform through which
resources accessed via the Internet are availed to the user.
Based on this fact, memory as a prime resource has remained a
major limitation when trying to run applications through the
browser. This has become a major drawback in multiprogramming
environments. A new approach that incorporates a memory
analyzer in the architecture has been suggested. It is hoped
that this will control memory hogging and reduce overhead to
the browser application while optimizing memory.
6. REFERENCES:
[1] A. E. Hassan and R. C. Holt, (2000). A reference
architecture for web servers. In Proceedings of the 7th
Working Conference on Reverse Engineering (WCRE
’00), pp. 150–160, 2000.
[2] A. Mockus, R. T. Fielding, and J. Herbsleb, (2002) Two
case studies of open source software development:
Apache and Mozilla. In ACM Trans. Software
Engineering and Methodology, 11(3), pp. 309–346, 2002.
[3] A. Taivalsaari and T. Mikkonen, (2011). "The Web as an
Application Platform: The Saga Continues," Proc. 37th
Euromicro Conf. Software Engineering and Advanced
Applications (SEAA 11), IEEE CS, 2011, pp. 170–174.
[4] A. Taivalsaari et al., (2008). Web Browser as an
Application Platform: The Lively Kernel Experience, tech.
report TR-2008-175, Sun Microsystems Labs, 2008.
[5] Accuvant Labs, 2011: Browser Security Comparison; A
Quantitative Approach. Retrieved from
http://files.accuvant.com/web/files/AccuvantBrowserSecC
ompar_FINAL.pdf
[6] Adam Overa, 2011: Web Browser Grand Prix 3: IE9
Enters The Race. Efficiency Benchmarks: Memory Usage
and Management. Retrieved from
http://www.tomshardware.com/reviews/internet-explorer-
9-chrome-10-opera-11,2897-11.html
[7] Adèr, H.J. & Mellenbergh, G.J. (2008). Advising on
Research Methods: A consultant’s companion. Huizen,
the Netherlands: Johannes van Kessel Publishing.
[8] Ahmed E. Hassan, Michael W. Godfrey, and Richard C.
Holt, n.d. Software Engineering Research in the Bazaar.
[9] Alan Grosskurth and Michael W. Godfrey, (2005)
Reference architecture for web browsers. In ICSM'05:
Proceedings of the 21st IEEE International Conference on
Software Maintenance (ICSM'05), pp 661-664,
Washington, DC, USA, 2005. IEEE Computer Society.
[10] Alan Grosskurth and Michael W. Godfrey, (2006)
Architecture and evolution of the modern web browser.
Retrieved from http://grosskurth.ca/papers/browser-
archevol-20060619.pdf
[11] Alan Grosskurth and Michael Godfrey, (2006). Reference
architecture for web browsers. In Journal of Software
Maintenance and Evolution: Research and Practice, pp. 1–7,
2006.
[12] Avant Force, (2016). Retrieved from
http://www.maxthon.com/about-us/
[13] Chris Anderson (2012). The Man Who Makes the Future:
Wired Icon Marc Andreessen. Retrieved from
http://www.wired.com/2012/04/ff_andreessen/all/
[14] Doug Deperry, (2012). HTML5 security in the modern
web browser perspective.
[15] Gnome desktop environment. Home Page. Retrieved
from http://gnome.org.
[16] Karl Gephart (2013): Optimize Firefox’s Performance
with these Memory Add-Ons! : Retrieved from
http://www.drakeintelgroup.com/2013/06/25/Firefox-
memory-addons/
[17] Krause, Ralph (March 2000). Browser Comparison.
Retrieved from http://www.linuxjournal.com/article/5413
[18] Matthew Braga (2011): Web Browser Showdown:
Memory Management Tested. Retrieved from
http://www.tested.com/tech/web/2420-web-browser-
showdown-memory-management-tested/index.php
[19] Maxthon International Ltd, (2014). Retrieved from
http://www.maxthon.com/about-us/
[20] Nick Veitch, (August 2010). 8 of the best web browsers
for Linux. Retrieved from
http://www.techradar.com/news/software/applications/8-
of-the-best-web-browsers-for-linux-706580/3
[21] Nyce, J. M. & Kahn P., (1991). From Memex To
Hypertext: Vannevar Bush and the Mind's Machine.
Academia press, San Diego.
[22] Opera Software, (February, 2003). “Opera version
history”. Retrieved from http://www.opera.com/docs/
[23] Pour, Andreas (January, 2003). "Apple Announces New
"Safari" Browser". KDE Dot News. Retrieved from
https://dot.kde.org/2003/01/08/apple-announces-new-
safari-browser
[24] Sagar, A., Pratik, G., Rajwin, P. and Aditya, G., (2010).
Market research on web browsers. Retrieved from
http://www.slideshare.net/sagar_agrawal/research-on-
web-browsers.
[25] T. Mikkonen and A. Taivalsaari, (2007) “Web
Applications – Spaghetti Code for the 21st Century”.
Retrieved from
http://research.sun.com/techrep/2007/abstract-166.html
(presented in the SERA Conference, Prague, Czech
Republic, August 21, 2008)
[26] T. Mikkonen and A. Taivalsaari, (2008) “Web Browser
as an Application Platform: The Lively Kernel
Experience”
http://research.sun.com/techrep/2008/abstract-175.html
(presented in the SEAA Conference, Parma, Italy,
September 4, 2008)
[27] T. Mikkonen and A. Taivalsaari, (2011). "Apps vs. Open
Web: The Battle of the Decade," Proc. 2nd Workshop
Software Eng. for Mobile Application Development (MSE
11), 2011; Retrieved from
http://www.mobileseworkshop.org/papers6-
Mikkonen_Taivalsaari.pdf.
[28] Tim Berners-Lee (1999). Weaving the Web: The Original
Design and Ultimate Destiny of the World Wide Web by
Its Inventor. Harper San Francisco.
[29] W3C (2004). Architecture of the World Wide Web,
Volume One. Online. Retrieved from
http://www.w3.org/TR/webarch/
[30] Wayner, Peter (January, 2005). "BASICS; Custom Tailor
A Web Browser Just for You", The New York Times,
ISSN 0362-4331, OCLC 1645522. Retrieved from
http://query.nytimes.com/gst/fullpage.html?res=9C0DE6
D9163BF934A15752C0A9639C8B63