The BRKSS architecture is based on shared-nothing clustering that can scale up to a large number of computers, increasing their speed while sustaining the workload. The architecture comprises a console with a CPU that also acts as a buffer, storing information about transaction processing as each batch enters the system. The console is connected to a p-port switch, which in turn connects to c clusters through their respective hubs. The architecture can serve personal databases as well as online databases, such as the cloud, through a router. It applies load balancing by moving transactions among the nodes within the clusters so that the overhead on any particular node is minimised. In this paper we simulate the working of the BRKSS architecture using JDK 1.7 with NetBeans 8.0.2, and compare performance parameters such as turnaround time, throughput, and waiting time against an existing hierarchical clustering model.
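The load-balancing idea in the abstract, moving transactions off an overloaded node onto lightly loaded peers, can be sketched as a toy simulation. The names here (`Node`, `migrate_if_overloaded`, the threshold value) are illustrative assumptions, not taken from the paper:

```python
# Toy sketch of BRKSS-style load balancing: transactions on an
# overloaded node are migrated to the least-loaded node in the cluster.

class Node:
    def __init__(self, name):
        self.name = name
        self.queue = []          # pending transactions

    def load(self):
        return len(self.queue)

def migrate_if_overloaded(nodes, threshold):
    """Move transactions off any node whose queue exceeds `threshold`."""
    for node in nodes:
        while node.load() > threshold:
            target = min(nodes, key=Node.load)
            if target is node:
                break            # nowhere better to move the transaction
            target.queue.append(node.queue.pop())

nodes = [Node("n1"), Node("n2"), Node("n3")]
nodes[0].queue = [f"txn{i}" for i in range(6)]   # n1 starts overloaded
migrate_if_overloaded(nodes, threshold=2)
print([n.load() for n in nodes])   # -> [2, 2, 2]
```

The work ends up evenly spread, which is exactly the overhead-minimisation effect the abstract describes.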
This document discusses data replication in distributed systems. It describes two types of replication: synchronous and asynchronous. For asynchronous replication, there is primary site asynchronous and peer-to-peer asynchronous. The document then focuses on the lazy master framework, outlining components like the primary/secondary copies, ownership, propagation, refreshment, and configuration. It provides details on the system architecture involving log monitors, propagators, receivers, and refreshers. Finally, it covers topics like failure and recovery handling.
SSL is implemented for secure communication over the network; it is essential for establishing a tunnel between queue managers. Here the SHA-256 algorithm is used for encryption and decryption.
1. The document provides solutions to problems regarding database replication.
2. For a read-only replicated database, availability improves as more replicas are added. However, for an update-only replicated database, availability can decrease if the replication protocol requires updating all replicas for a transaction to commit.
3. The replication protocol described, where transactions execute on one server and propagate updates to the other server within the transaction boundary using two-phase locking and two-phase commit, does not provide one-copy serializability. A history is provided as a counterexample.
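The availability claim in point 2 can be checked with a quick calculation: if each replica is independently up with probability a, a read needs only one live replica, while a write-all protocol needs every replica live. The value a = 0.9 below is an assumed figure for illustration:

```python
# Per-replica availability (assumed value for illustration).
a = 0.9

def read_one(n):
    """A read succeeds if at least one of n replicas is up."""
    return 1 - (1 - a) ** n

def write_all(n):
    """An update commits only if all n replicas are up."""
    return a ** n

for n in (1, 2, 3):
    print(n, round(read_one(n), 4), round(write_all(n), 4))
# Read availability grows with n (0.9 -> 0.99 -> 0.999),
# while write-all availability shrinks (0.9 -> 0.81 -> 0.729).
```

This is why adding replicas helps a read-only database but can hurt an update-only one under a write-all protocol.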
This is the presentation I used for a seminar on the "Paxos Commit" algorithm, one of Leslie Lamport's works (in this case a joint work with Jim Gray). You can find the original paper here:
http://research.microsoft.com/users/lamport/pubs/pubs.html#paxos-commit
Feel free to post comments ;)
Enjoy.
This document analyzes the scalability of Apache Flume by conducting experiments with different Flume configurations and load levels. Two experiment setups are used: one with a one-to-one relationship between Flume nodes and load generators, and another using a cascading setup to aggregate events from two scale nodes into a collector node. The results show that doubling the channel capacity does not necessarily double the maximum event rate, and that a cascading setup can improve scalability over a non-cascading setup.
This document summarizes key concepts about deadlocks from Chapter 7 of the textbook "Operating System Concepts – 9th Edition" by Silberschatz, Galvin and Gagne. It defines the four conditions required for deadlock, describes methods for handling deadlocks including prevention, avoidance, detection and recovery, and provides examples to illustrate resource allocation graphs and the banker's algorithm for deadlock avoidance.
Queue Manager both as sender and Receiver.docx - sarvank2
A queue manager provides once-and-only-once, assured delivery. It is a very effective technology for message routing. Here the queue manager acts as both sender and receiver.
Drivers connect applications to Cassandra clusters and maintain connections to nodes. They probe clusters to discover nodes, token ranges, and latency. Drivers are data-aware and can route queries to appropriate replicas or fail over if needed. Cassandra clusters can span multiple data centers for redundancy, workload separation, and geographic distribution of data and queries. Configuration files like cassandra.yaml and cassandra-env.sh are used to configure memory, data storage, caching, and other settings. Cassandra clusters should be provisioned on commodity servers using tools like cassandra-stress to test workloads and estimate needed nodes.
The document discusses processes and process scheduling in operating systems. It defines a process as a program in execution that changes state as it runs. Process information is stored in a process control block. Processes move between ready, running, waiting, and terminated states. The operating system uses long-term and short-term schedulers to select which processes to move between queues like ready and device queues. Context switching occurs when the CPU switches between processes. Processes can cooperate through communication and synchronization using message passing or shared memory.
Logging Last Resource Optimization for Distributed Transactions in Oracle… - Gera Shegalov
The document describes the Logging Last Resource (LLR) optimization used in Oracle WebLogic Server, which adapts the Nested 2PC optimization to reduce the number of forced writes and exchanged messages for distributed transactions spanning multiple resources including a SQL database. LLR allows the last participant in a transaction to act as the transaction coordinator, logging the outcome only after all other participants have responded, saving synchronous writes and messages compared to the standard Presumed Abort 2PC protocol. This optimization has been validated on industry benchmarks and successfully deployed by customers for mission-critical high-performance applications.
The document discusses CPU scheduling algorithms in operating systems. It covers basic scheduling concepts, criteria for evaluating algorithms, and examples of common algorithms like first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also discusses more advanced topics like multilevel queue scheduling, real-time scheduling, thread scheduling, and scheduling on multiprocessor and multicore systems.
Logging Last Resource Optimization for Distributed Transactions in Oracle We... - Gera Shegalov
Logging Last Resource Optimization for Distributed Transactions in Oracle WebLogic Server describes optimizing distributed transactions by designating the last resource as a non-XA "logging last resource" (LLR). This allows skipping XA protocol calls for the last resource, reducing latency. The transaction manager logs to the LLR table, which acts as the combined transaction log. If the LLR commit succeeds, other resources are committed; if it fails, the transaction aborts globally. This provides the same ACID guarantees with lower overhead compared to a standard two-phase commit protocol.
Group Communication (Distributed computing) - Sri Prasanna
This document discusses different modes of communication in distributed systems including unicast, anycast, multicast, and broadcast. It then covers topics related to implementing group communication such as hardware vs software approaches, reliability, ordering of messages, and protocols like IP multicast and IGMP.
The document summarizes key concepts from Chapter 6 of Operating System Concepts - 9th Edition about CPU scheduling. It discusses the goals of CPU scheduling, including maximizing CPU utilization and throughput. It describes common scheduling algorithms like first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin. It also covers more advanced techniques such as multilevel queue scheduling and multilevel feedback queue scheduling. Evaluation methods like deterministic modeling are presented to analyze and compare the performance of different scheduling algorithms.
The International Journal of Engineering and Science (The IJES) - theijes
This document summarizes a research paper on using work-stealing scheduling algorithms to effectively balance workloads across multiple processors in real-time systems. It discusses how work stealing allows tasks to be distributed from busy processors to idle ones, improving system throughput. The authors propose a priority-based approach where tasks are stolen following an earliest deadline first policy. Their EDF-HSB scheduling algorithm uses work stealing combined with periodic servers to ensure real-time tasks meet deadlines while also efficiently executing best-effort jobs opportunistically during idle periods. The system model and overall approach are described to integrate real-time and best-effort work using global scheduling with work stealing.
This document summarizes key concepts from Chapter 4 of the textbook "Operating System Concepts - 9th Edition" about threads. It discusses how threads allow applications to take advantage of multicore systems through parallelism and concurrency. Different threading models like many-to-one, one-to-one, and many-to-many are described based on how user threads map to kernel threads. Popular thread libraries like POSIX pthreads and how they provide APIs for thread creation and synchronization are also covered. The document concludes with sections on implicit threading, threading issues, and thread cancellation approaches.
This document discusses CPU scheduling in operating systems. It introduces CPU scheduling as the basis for multiprogrammed operating systems. Various scheduling algorithms are described such as first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). Evaluation criteria for scheduling algorithms like CPU utilization, throughput, turnaround time, and waiting time are also presented. Multilevel queue and multilevel feedback queue scheduling are discussed as ways to improve performance.
The document summarizes key concepts about the transport layer from Chapter 3, including:
- The transport layer provides logical communication between application processes running on different hosts. It uses multiplexing and demultiplexing to direct data between applications.
- The main Internet transport protocols are UDP (connectionless) and TCP (connection-oriented and reliable). TCP provides congestion control while UDP does not.
- Reliable data transfer requires mechanisms like sequence numbers, acknowledgments, timeouts, and retransmissions to handle packet loss and out-of-order delivery over unreliable networks. Stop-and-wait and pipelined protocols like Go-Back-N and Selective Repeat are introduced.
- TCP provides
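The reliable-transfer mechanics listed above (sequence numbers, ACKs, retransmission) can be sketched with a deterministic toy stop-and-wait sender. The loss pattern is fabricated purely for illustration:

```python
# Toy stop-and-wait sender: each packet carries a 1-bit sequence number
# and is retransmitted until the matching ACK comes back.

def lossy_channel(drop_on_attempt):
    """Returns a send function that drops the listed attempt numbers."""
    attempts = {"count": 0}
    def send(seq):
        attempts["count"] += 1
        if attempts["count"] in drop_on_attempt:
            return None          # packet (or its ACK) lost
        return seq               # receiver ACKs the sequence number
    return send

def stop_and_wait(data, send):
    seq, transmissions = 0, 0
    for item in data:
        while True:              # retransmit until the ACK for `seq` arrives
            transmissions += 1
            ack = send(seq)
            if ack == seq:
                break
        seq ^= 1                 # alternate the 0/1 sequence number
    return transmissions

send = lossy_channel(drop_on_attempt={2, 3})    # attempts 2 and 3 are lost
print(stop_and_wait(["p0", "p1", "p2"], send))  # -> 5 transmissions for 3 packets
```

Because the sender sits idle waiting for each ACK, pipelined protocols such as Go-Back-N and Selective Repeat keep multiple packets in flight instead.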
This chapter discusses process synchronization and solving the critical section problem. It introduces Peterson's solution to the critical section problem and other synchronization methods like mutex locks, semaphores, and monitors. Classical synchronization problems covered include the bounded buffer problem, dining philosophers problem, and readers-writers problem. The chapter aims to present concepts of process synchronization and tools to solve synchronization issues.
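The bounded-buffer problem mentioned above has a classic solution built from two counting semaphores plus a mutex; a minimal sketch using Python's threading module:

```python
import threading
from collections import deque

# Classic bounded-buffer solution: `empty` counts free slots,
# `full` counts filled slots, and a lock protects the buffer itself.
CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)
full = threading.Semaphore(0)
mutex = threading.Lock()

def producer(items):
    for item in items:
        empty.acquire()          # block while the buffer is full
        with mutex:
            buffer.append(item)
        full.release()           # signal one more filled slot

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # block while the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()          # signal one more free slot

out = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start(); p.join(); c.join()
print(out)   # items arrive in FIFO order: [0, 1, ..., 9]
```

The semaphores enforce the capacity bound while the mutex guarantees mutually exclusive access to the shared buffer, the two concerns the chapter's critical-section discussion separates.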
This document discusses various techniques for clock synchronization and maintaining consistency in distributed systems. It covers Cristian's algorithm, the Berkeley algorithm, Lamport timestamps, algorithms for mutual exclusion including a centralized, distributed, and token ring approach, and techniques for concurrency control including two-phase locking, pessimistic timestamp ordering, and ensuring serializability of transactions.
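Of the techniques listed, Lamport timestamps are the simplest to make concrete: a counter incremented on local events, and advanced past the incoming timestamp on message receipt so that causally related events stay ordered. A minimal sketch:

```python
# Lamport logical clock: increment on local events; on receive,
# take max(local, received) + 1 so causal order is preserved.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time         # timestamp carried on the message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.local_event()                  # a: 1
t = a.send()                     # a: 2, message stamped with 2
b.local_event()                  # b: 1
print(b.receive(t))              # b jumps to max(1, 2) + 1 = 3
```

The receive rule is what distinguishes logical clocks from the physical-clock algorithms (Cristian's, Berkeley) also covered in the document.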
CPU scheduling is the process of choosing how to run processes in the shortest and most efficient way. Different algorithms are explained in this chapter: first-come first-served, shortest job first, shortest remaining time first, round robin, and priority scheduling.
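The difference between these algorithms shows up directly in average waiting time. A small worked example, using the assumed burst times 24, 3, and 3:

```python
# Average waiting time for FCFS vs SJF on the same (assumed) burst times.

def avg_waiting(bursts):
    """Waiting time of job i = sum of the bursts of all jobs run before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                       # arrival order (FCFS runs them as-is)
print(avg_waiting(bursts))                # FCFS: (0 + 24 + 27) / 3 = 17.0
print(avg_waiting(sorted(bursts)))        # SJF:  (0 + 3 + 6)  / 3 = 3.0
```

Running the shortest jobs first drops the average wait from 17 to 3 time units, which is why SJF is provably optimal for this metric.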
Training Slides: Advanced 303: Upgrading from Tungsten Clustering 5.x Multi-S... - Continuent
In our advanced training, we discuss upgrading from Tungsten Clustering 5.x multi-site/multi-master to 6.0 multimaster. We review the key differences between Tungsten Clustering v5 and v6, discuss new service names, and walk through an upgrade with a full end-to-end demo. Basic MySQL and MySQL replication knowledge is assumed.
Course Prerequisite Learning
- Basics: Introduction to Clustering
- Advanced: Multi-Site/Multi-Master Tungsten Clustering Deployments for Geo-Distributed Applications
AGENDA
- Review the key differences between v5 and v6
- Discuss new service names
- Walk through an upgrade (full end-to-end demo)
Container orchestration from theory to practice - Docker, Inc.
"Join Laura Frank and Stephen Day as they explain and examine technical concepts behind container orchestration systems, like distributed consensus, object models, and node topology. These concepts build the foundation of every modern orchestration system, and each technical explanation will be illustrated using SwarmKit and Kubernetes as a real-world example. Gain a deeper understanding of how orchestration systems work in practice and walk away with more insights into your production applications."
Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ... - ijceronline
The focus of the paper is an advanced resource allocation and load balancing algorithm that can detect and avoid deadlock while allocating processes to virtual machines. When processes are allocated to a VM they execute in a queue: the first process gets the resources while the others remain waiting, and the rest of the VMs stay idle. To utilise the resources, we analyse the algorithm using First-Come First-Served (FCFS), Shortest-Job-First (SJF), priority, and Round Robin (RR) scheduling in the CloudSim simulator.
Parallel processing involves performing multiple tasks simultaneously to increase computational speed. It can be achieved through pipelining, where instructions are overlapped in execution, or vector/array processors where the same operation is performed on multiple data elements at once. The main types are SIMD (single instruction multiple data) and MIMD (multiple instruction multiple data). Pipelining provides higher throughput by keeping the pipeline full but requires handling dependencies between instructions to avoid hazards slowing things down.
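The throughput benefit of pipelining described above follows a standard formula: n instructions through a k-stage pipeline take k + (n - 1) cycles instead of n * k, so speedup approaches k as the pipeline stays full. A quick check:

```python
# Pipeline speedup: n instructions through a k-stage pipeline take
# k + (n - 1) cycles rather than n * k, so the speedup tends toward k.

def speedup(n, k):
    return (n * k) / (k + n - 1)

for n in (5, 100, 10000):
    print(n, round(speedup(n, k=5), 3))
# The speedup climbs toward the stage count k=5 as n grows.
```

Hazards and inter-instruction dependencies stall the pipeline in practice, which is why real speedups fall short of this ideal bound.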
This document discusses parameters for tuning the performance of WebLogic servers. It covers OS-level TCP parameters, JVM heap size and GC logging parameters, WebLogic server-level parameters like work managers, execute queues, and stuck threads, and JDBC and JMS pool parameters. It also provides an overview of different types of garbage collection in the HotSpot JVM.
This document summarizes a paper that proposes and evaluates the performance of a multithreaded architecture capable of exploiting both coarse-grained parallelism and fine-grained instruction-level parallelism. The architecture distributes processing across multiple processing elements connected by an interconnection network. Each processing element supports multiple concurrently executing threads by grouping instructions from different threads. The architecture introduces a distributed data structure cache to reduce network latency when accessing remote data. Simulation results indicate the architecture achieves high processor throughput and the data structure cache significantly reduces network latency.
Container Orchestration from Theory to Practice - Docker, Inc.
Join Laura Frank and Stephen Day as they explain and examine technical concepts behind container orchestration systems, like distributed consensus, object models, and node topology. These concepts build the foundation of every modern orchestration system, and each technical explanation will be illustrated using Docker’s SwarmKit as a real-world example. Gain a deeper understanding of how orchestration systems like SwarmKit work in practice and walk away with more insights into your production applications.
Improvement of Congestion window and Link utilization of High Speed Protocols... - IOSR Journals
This document summarizes a research paper that proposes using a k-nearest neighbors (k-NN) algorithm to help high-speed transport layer protocols like CUBIC better distinguish between packet drops due to network congestion versus other factors like noise. The k-NN algorithm would analyze patterns in packet drop history to classify new drops, helping protocols avoid unnecessary window size reductions when drops are not actually due to congestion. The document provides background on high-speed protocols, issues like underutilization from treating all drops as congestion, and how incorporating k-NN classification could improve protocols' performance in noisy network conditions.
The document discusses real-time operating systems for embedded systems. It describes that RTOS are necessary for systems with scheduling of multiple processes and devices. An RTOS kernel manages tasks, inter-task communication, memory allocation, timers and I/O devices. The document provides examples of creating tasks to blink an LED and print to USART ports, using a semaphore for synchronization between tasks. The tasks are run and output is seen on a Minicom terminal.
This document provides an overview of Postgres clustering solutions and distributed Postgres architectures. It discusses master-slave replication, Postgres-XC/XL, Greenplum, CitusDB, pg_shard, BDR, pg_logical, and challenges around distributed transactions, high availability, and multimaster replication. Key points include the tradeoffs of different approaches and an implementation of multimaster replication built on pg_logical and a timestamp-based distributed transaction manager (tsDTM) that provides partition tolerance and automatic failover.
Cruz: Application-Transparent Distributed Checkpoint-Restart on Standard Oper... - Mark J. Feldman
The document summarizes Cruz, a system that provides application-transparent distributed checkpoint-restart on standard operating systems. Cruz uses a kernel module to checkpoint and restart unmodified applications and their distributed communication state. It migrates networked applications in a way that is transparent to remote peers by preserving each application's IP address and TCP connection state across migrations. Cruz enables fault tolerance, planned downtime reduction, and resource management for distributed applications without requiring modifications to applications or use of specialized systems.
This document discusses concepts related to client-server computing and database management systems (DBMS). It covers topics such as DBMS concepts and architecture, centralized and distributed systems, client-server systems, transaction servers, data servers, parallel and distributed databases, and network types. Key points include the definitions of centralized, client-server, and distributed systems. Transaction servers and data servers are described as two types of server system architectures. Issues related to parallelism such as speedup, scaleup, and factors limiting them are also covered.
Kafka Multi-Tenancy—160 Billion Daily Messages on One Shared Cluster at LINE (confluent)
(Yuto Kawamura, LINE Corporation) Kafka Summit SF 2018
LINE is a messaging service with 160+ million active users. Last year I talked about how we operate our Kafka cluster, which receives more than 160 billion messages daily, and how we dealt with performance problems to meet our tight requirements. Since then we have deployed three more clusters, each for a different purpose, such as one in a different datacenter and one for security-sensitive usage, while keeping the fundamental concept: one cluster for everyone to use. Letting many projects share a few multi-tenant clusters greatly reduces our operational cost and lets us concentrate our engineering resources on maximizing reliability, but hosting topics with many different kinds of workload has also led us through a lot of challenges.
In this talk I will introduce how we operate Kafka clusters shared among different services, solving the troubles we met in order to maximize reliability. One of the most critical issues we solved, delayed consumer Fetch requests causing a broker's network threads to block, should be especially interesting, because it could degrade overall broker performance in a very common situation; we managed to solve it by leveraging advanced techniques such as dynamic tracing and a tricky patch that controls in-kernel behavior from Java code.
Kafka Multi-Tenancy - 160 Billion Daily Messages on One Shared Cluster at LINE (kawamuray)
This document summarizes a presentation about Kafka multi-tenancy at LINE Corporation. The presentation discusses how LINE runs a single shared Kafka cluster to handle over 100 billion messages per day from many independent services. It describes the hardware used, requirements for multi-tenancy like protecting against abusive workloads and providing isolation. It then discusses specific issues identified like slow response times caused by disk reads, and the solutions implemented like request quotas, metrics, and pre-loading data into memory to avoid blocking. The presentation concludes that after addressing these issues, their shared Kafka hosting model works efficiently while maintaining a single data hub.
This document discusses a receiver-based flow control scheme for networks experiencing overload. It proposes using virtual queues at receivers to provide back-pressure and optimize data delivery via threshold-based packet dropping and back-pressure routing. This approach generalizes traditional per-flow utility optimization to allow assigning a single utility function to multiple flows. Simulations show this control scheme achieves near-optimal performance using finite buffers independently of arrival statistics.
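As a rough illustration of the threshold-based packet dropping that summary mentions, here is a minimal Python sketch of a receiver-side virtual queue; the threshold value and class names are my own, not parameters from the paper:

```python
from collections import deque

class VirtualQueue:
    """Receiver-side virtual queue: drop arrivals once the backlog
    exceeds a threshold, providing implicit back-pressure."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.q = deque()
        self.dropped = 0

    def arrive(self, pkt):
        if len(self.q) >= self.threshold:
            self.dropped += 1      # threshold-based drop
            return False
        self.q.append(pkt)
        return True

    def serve(self):
        # Deliver the head-of-line packet, if any.
        return self.q.popleft() if self.q else None

vq = VirtualQueue(threshold=3)
for p in range(5):        # burst of 5 arrivals against a threshold of 3
    vq.arrive(p)
served = vq.serve()
```

The backlog of such a virtual queue is what would feed the back-pressure routing decision; the finite threshold is what lets the scheme run with finite buffers.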
IRJET - Latin Square Computation of Order-3 using OpenCL (IRJET Journal)
This document discusses using OpenCL parallel programming to compute Latin squares of order 3 more efficiently than sequential algorithms. It proposes dividing the input matrix into sub-matrices that are processed concurrently by multiple processing elements in the GPU. This parallel approach reduces the computation time compared to performing the operations sequentially on the CPU. First, the input matrix is divided based on task or data parallelism. Then the sub-matrices are computed simultaneously by different processing elements. The results are combined and stored in GPU memory before being transferred to CPU memory and output. Implementing the Latin square computation with OpenCL exploits parallelism to improve efficiency over the traditional sequential approach.
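To make the order-3 case concrete, here is a hedged Python sketch that checks the Latin-square property by fanning the row and column checks out to a thread pool, loosely mimicking how the paper distributes sub-matrices to processing elements (the real implementation runs on a GPU via OpenCL; this is only a host-side analogy):

```python
from concurrent.futures import ThreadPoolExecutor

def line_ok(line, n):
    # A row or column is valid if it is a permutation of 1..n.
    return sorted(line) == list(range(1, n + 1))

def is_latin_square(m):
    n = len(m)
    rows = m
    cols = [[m[r][c] for r in range(n)] for c in range(n)]
    # Check each row and column as an independent work-item,
    # the way OpenCL would fan sub-problems out to processing elements.
    with ThreadPoolExecutor() as ex:
        results = list(ex.map(lambda l: line_ok(l, n), rows + cols))
    return all(results)

square = [[1, 2, 3],
          [2, 3, 1],
          [3, 1, 2]]
```

Each check is independent, which is exactly the property that lets the sub-matrix computations run concurrently before the results are combined.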
Kubernetes Architecture with Components (Ajeet Singh)
This document provides an overview of Kubernetes architecture and components. It describes how to run a simple Kubernetes setup using a Docker container. The container launches all key Kubernetes components including the API server, scheduler, etcd and controller manager. Using kubectl, the document demonstrates deploying an nginx pod and exposing it as a service. This allows curling the nginx default page via the service IP to confirm the basic setup is functioning.
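The nginx pod-plus-service setup described above is ultimately declarative; here is a sketch of the equivalent manifests, built as Python dictionaries and serialized to JSON (the field names follow the Kubernetes v1 API, but the image tag and ports are illustrative):

```python
import json

# Minimal nginx Pod manifest.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "nginx", "labels": {"app": "nginx"}},
    "spec": {"containers": [{"name": "nginx",
                             "image": "nginx:latest",
                             "ports": [{"containerPort": 80}]}]},
}

# Service selecting the pod by label and exposing port 80.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "nginx"},
    "spec": {"selector": {"app": "nginx"},
             "ports": [{"port": 80, "targetPort": 80}]},
}

manifest = json.dumps([pod, service], indent=2)
```

`kubectl apply` accepts manifests in this JSON form as well as YAML; once applied, curling the service IP should return the nginx default page, matching the demonstration in the document.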
The Role of Elastic Load Balancer - Apache Stratos (Imesh Gunaratne)
The document discusses the role of the Elastic Load Balancer (ELB) in Apache Stratos PaaS. It describes how the ELB uses components like Synapse, Axis2, and Tribes to distribute incoming traffic across backend nodes and auto-scale capacity. The ELB handles load balancing, failover, auto-scaling, and multi-tenancy. It integrates with Stratos by receiving topology information, load balancing requests to cartridge instances, and auto-scaling the number of instances based on traffic.
The document provides an introduction to LAVA workload scheduler. It discusses how LAVA allows users to share computing resources and time on a cluster. It describes installing and configuring LAVA, and how to use LAVA commands to submit, monitor, modify and control jobs on the cluster.
Similar to Simulation of BRKSS Architecture for Data Warehouse Employing Shared Nothing Clustering (20)
Total Ionization Cross Sections due to Electron Impact of Ammonia from Thresh... (Dr. Amarjeet Singh)
In the present paper, we have employed the modified Khare-BEB method [Atoms, (2019)] to evaluate total ionization cross sections by electron impact for ammonia in the energy range from the ionization threshold to 10 MeV. The theoretical ionization cross sections have been compared with previously available theoretical and experimental results. The collision parameters, the dipole matrix squared M_j^2 and the CRP, have also been calculated. The present calculations are in remarkable agreement with the available experimental results.
A Case Study on Small Town Big Player – Enjay IT Solutions Ltd., Bhilad (Dr. Amarjeet Singh)
A shortage of adequately trained manpower affects the IT industry as a whole, but it is particularly acute for Enjay IT Solutions. Enjay's location in a semi-urban or rural area makes it even more difficult to find talented employees with the right skills. As the competition for skilled workers grows, it becomes harder to attract and keep workers with the requisite training and experience.
Effect of Biopesticide from the Stems of Gossypium Arboreum on Pink Bollworm ... (Dr. Amarjeet Singh)
The pink bollworm and other Lepidoptera multiply quickly: a typical individual produces around 100 offspring within days or weeks. This infestation damages crops extensively in tropical and sub-tropical regions, so to maintain crop yields the pest must be kept in check with pesticides. Excessive use of synthetic pesticides, however, harms the soil, the land, and human health, and pollutes the environment. An environmentally friendly biopesticide was therefore extracted from the stems of Gossypium arboreum. Spraying this extract on the affected parts of the plant gradually reduced the number of pink bollworm and Lepidoptera, owing to the presence of gossypol.
Artificial Intelligence Techniques in E-Commerce: The Possibility of Exploiti... (Dr. Amarjeet Singh)
This document discusses the potential applications of artificial intelligence techniques in e-commerce in Saudi Arabia. It begins with an introduction to e-commerce and AI, and how AI is being used increasingly in e-commerce applications worldwide. It then reviews literature on how AI can be integrated into e-commerce systems and the various applications of AI in e-commerce. Some key applications discussed include AI assistants, personalized recommendations, demand forecasting, supply chain management, fraud detection and more. The document concludes that Saudi Arabia is well positioned to benefit from using AI to boost its growing e-commerce sector.
Factors Influencing Ownership Pattern and its Impact on Corporate Performance... (Dr. Amarjeet Singh)
This document summarizes a research study that analyzed the factors influencing ownership patterns of selected Indian companies and the impact of ownership patterns on corporate performance. The study used data from 5 industries over 5 years from 2017 to 2021. Multiple regression, ANOVA, and correlation analyses were conducted. The results found that the percentage of independent directors on the board and the size of the company had a significant impact on Indian promoter holdings. Additionally, non-institutional ownership was found to have a significant impact on corporate performance measures like asset utilization ratio. The study concluded that ownership patterns can influence corporate performance and companies should work to optimize factors like debt-equity ratio and board independence to improve financial outcomes.
An Analytical Study on Ratios Influencing Profitability of Selected Indian Au... (Dr. Amarjeet Singh)
Every country with a well-developed transportation network has a well-developed economy. The automobile industry is a critical engine of the nation's economic development. The automobile industry has significant backward and forward links with every area of the economy, as well as a strong and progressive multiplier impact. The vehicle industry includes both the automotive industry and the auto component industry: passenger wagons, light, medium, and heavy commercial vehicles, multi-utility vehicles such as jeeps, three-wheelers, military vehicles, motorcycles, and tractors, and auto-components such as engine parts, batteries, drive transmission parts, electrical, suspension and chassis parts, and body and other parts. In the last several years, India's automobile sector has seen incredible growth in sales, production, innovation, and exports. India's car industry has emerged as one of the best in the world, and the auto-ancillary sector is poised to support the vehicle sector's expansion. Vehicle manufacturers and auto-parts manufacturers account for a significant share of global automotive manufacturing. Vehicle manufacturers from across the world are keeping a close eye on the Indian auto sector in order to assess future demand and establish India as a global manufacturing base. The current research focuses on three automotive behemoths: TATA Motors, MRF, and Mahindra & Mahindra.
A Study on Factors Influencing the Financial Performance Analysis Selected Pr... (Dr. Amarjeet Singh)
The growth of a country's banking sector has a significant impact on its economic development, and the sector plays a critical role in determining a country's economic future. A well-planned, structured, efficient, and viable banking system is an essential component of an economy's economic and social infrastructure. A strong banking system is required because it meets the financial needs of modern society. The bank is the most important financial intermediary in the economy because it connects surplus and deficit economic agents, and the banking system is regarded as the economy's lifeline: it meets the financial needs of commerce, industry, and agriculture, so the country's development and the banking system are intertwined. Banks are critical in the mobilisation of savings and the distribution of credit to various sectors of the economy. India's private sector banks play a critical role in the country's economic development, so their financial performance must be evaluated carefully.
An Empirical Analysis of Financial Performance of Selected Oil Exploration an... (Dr. Amarjeet Singh)
After the United States, China, and Japan, India was the world's fourth biggest consumer of oil and petroleum products. The nation is significantly reliant on crude oil imports, the majority of which come from the Middle East. The Indian oil and gas business is one of the country's six main sectors, with important forward links to the rest of the economy. More than two-thirds of the country's overall primary energy demands are met by the oil and gas industry, which has played a key role in placing India on the global map. India is now the world's sixth biggest crude oil user and ninth largest crude oil importer. In addition, the country's share of the worldwide refining market is growing: India's refining industry is now the world's sixth biggest. With plans for Reliance Petroleum Limited to commission another refinery with a capacity of 29 MTPA next to its 33 MTPA refinery in Jamnagar, Gujarat, this position is projected to be enhanced; the Reliance refinery would then be the biggest single-site refinery in the world. Based on secondary data gathered from CMIE, the current research examines the ratios influencing the profitability of selected oil exploration and production businesses in India over a 10-year period.
Since 1991, thanks to economic policy liberalization, the Indian economy has entered an era in which Indian businesses can no longer disregard global markets. Prior to the 1990s, the prices of a variety of commodities, metals, and other assets were carefully regulated; others, which were not controlled, were primarily dependent on regulated input costs. As a result, there was no uncertainty and hence no price fluctuation. In 1991, however, when the process of deregulation began, the prices of most items were deregulated. This also resulted in the exchange rate being partially deregulated, trade restrictions being eased, interest rates being lowered, significant advances in foreign institutional investors' access to the capital markets, and market-based pricing of government securities, among other things. Furthermore, market-determined exchange rates and interest rates made portfolio and securities prices volatile and unstable, exposing them to a variety of risks that are hedged using a variety of derivatives. The Indian capital market will be examined in this study, with a focus on derivatives.
Theoretical Estimation of CO2 Compression and Transport Costs for an hypothet... (Dr. Amarjeet Singh)
This document discusses theoretical estimates for the costs of compressing and transporting CO2 from a hypothetical carbon capture and storage project at the Saline Joniche Power Plant in Italy. It first provides background on the power plant project from 2008 that proposed converting the site to coal power. It then details the methodology used to size the compression system, estimating power needs for multi-stage compression up to pipeline pressures. Costs are considered for constructing, operating, and maintaining both the compression plant and pipeline to a potential offshore storage site. The aim is to evaluate retrofitting the existing plant with carbon capture and storage as a way to enable continued coal power production consistent with climate goals.
Analytical Mechanics of Magnetic Particles Suspended in Magnetorheological Fluid (Dr. Amarjeet Singh)
In this paper, the behavior of MR particles is systematically investigated within the scope of analytical mechanics. A magnetorheological (MR) fluid belongs to a class of smart materials in which the motion of magnetic particles is controlled by the action of internal and external forces. The paper presents an analytical-mechanics treatment of the interaction of a system of particles in an MR fluid, using the basic principles of analytical mechanics to construct the governing equations.
Techno-Economic Aspects of Solid Food Wastes into Bio-Manure (Dr. Amarjeet Singh)
Solid waste is a health hazard and damages the environment when handled improperly. It comprises Industrial Waste (IW), Hazardous Waste (HW), Municipal Solid Waste (MSW), Electronic Waste (E-waste), and Bio-Medical Waste (BMW), classified by source and characteristics. Food waste, or bio-waste, is a growing area of concern, with many costs to the community in terms of waste collection, disposal, and greenhouse gases, and its composting plays a role in sustainable development. When rotting food ends up in landfill it turns into methane, a greenhouse gas that is particularly damaging to the environment. Composting is a biochemical process in which organic materials are biologically degraded, producing organic by-products and energy in the form of heat. Heat is trapped within the composting mass, leading to self-heating. The overall process yields bio-manure.
Crypto-Currencies: Can Investors Rely on them as Investment Avenue? (Dr. Amarjeet Singh)
The purpose of this study is to examine investors' perceptions about investing in crypto-currencies. We argue that investors' trust in crypto-currencies is largely driven by crypto-currency comprehension, trust in government, and transaction speed. This is the first study to examine crypto-currencies from the investor's perspective: we identify important antecedents of crypto-currency confidence and then examine the government's role in crypto-currencies. The study matters for two reasons. First, crypto-currencies have the potential to disrupt the current economic system, as the debate centres on the impact of decentralised transactions, so further research into how this affects investors' trust is essential. Second, it concerns access to crypto-currencies: Fin-Tech companies or banks that want to enter the bitcoin industry may not need to incur huge advertising and marketing costs to soothe clients' concerns about investing in digital currencies. The research sheds light on investor indecisiveness in the context of marketing by demonstrating that investors are aware of crypto-currencies.
Awareness of Disaster Risk Reduction (DRR) among Student of the Catanduanes S... (Dr. Amarjeet Singh)
The Island Province of Catanduanes is prone to all types of natural hazards, including torrential and heavy rains, strong winds and surge, flooding, and landslides or slope failures, as a result of its geographical location and topography. RA 10121 mandates local DRRM bodies to “encourage community, specifically the youth, participation in disaster risk reduction and management activities, such as organizing quick response groups, particularly in identified disaster-prone areas, as well as the inclusion of disaster risk reduction and management programs as part of youth programs and projects.” The study aims to determine the disaster awareness of the students of the Catanduanes State University. A disaster-based questionnaire was prepared and distributed among 636 students selected randomly from different Colleges and Laboratory Schools in the University.
The Catanduanes State University students understood some disaster-related concepts and ideas, but were uncertain about preparedness, adaptation, and awareness of the risks posed by these natural hazards. Low perception of disaster risk is evident among students. The students' responses may reflect the efficiency and impact of the integration of DRR education in the senior high school curriculum. Specifically, integrating concepts about hazards, hazard maps, disaster preparedness, awareness, mitigation, prevention, adaptation, and resiliency into the science curriculum possibly affects students' knowledge and understanding of DRR. Preparedness drills and other forms of capacity building must be conducted to improve students' awareness of DRRM.
The study further recommends that teachers and instructors also be trained in handling disasters, as they are the prime movers in implementing DRRM in education. Core subjects in Earth Sciences must be reinforced with geologic hazards, and learning competencies must focus on hazard identification and mapping and on coping with different geologic disasters.
The 1857 war was a watershed moment in the history of the Indian subcontinent. The battle has sparked academic debate among historians and sociologists all around the world. Despite the fact that it has been more than 150 years, this battle continues to pique the interest of historians. The war's causes and events that occurred throughout the conflict, persons who backed the British and anti-British fighters, and the results and ramifications, are all aspects of this conflict. In terms of outcomes, many academics believe that the war was a failure for those who started it. It is often assumed that the Indians who battled the British in this conflict were unable to achieve their goals. Many gains accrued to Indians as a result of the conflict, but these achievements are overshadowed by the dispute over the war's failure. This research effort focuses on the war's achievements for India, and the significance of those achievements.
Haryana's Honour Killings: A Social and Legal Point of View (Dr. Amarjeet Singh)
Life is unpredictable; nobody knows what will happen in the next minute. Every human being therefore has the right, and the desire, to conduct their life according to their own wishes, and no one should be forced to live solely for the benefit and reputation of others. Honour killing is the murder of a person, male or female, who refuses an arranged marriage or decides to shape their marital life according to their own wishes, solely because this is seen to jeopardize the family's honour. The family's supreme authority looks after the family's name but neglects the love and affection shared among family members. In my research work I have discussed honour killing in India. This sort of murder occurs as a result of particular triggers, which are examined alongside the role of the law: no one goes free after breaking the law, and honour killing is a felony that violates various regulations designed to safeguard citizens. This crime resembles many others, but it is distinct enough to be differentiated in the report. When the husband is of low social standing, it is seen to lower the status and caste of the girl's family, prompting the male family members to murder the girl. They forget that the girl is their child, that while rank may be attained a girl's life can never be replaced, and that caste is less valuable than the girl's life and the love spent with them.
Optimization of Digital-Based MSME E-Commerce: Challenges and Opportunities i... (Dr. Amarjeet Singh)
This document summarizes a research article about optimizing digital-based MSME e-commerce during the COVID-19 pandemic. The article discusses how the pandemic severely impacted MSMEs, with many going out of business. However, digitalization and e-commerce provide opportunities for MSMEs to transform their business models. The article reviews literature showing how technologies like websites, social media, and mobile applications can help MSMEs reach more customers online. Case studies of MSMEs in different countries found that those utilizing digital tools through e-commerce were more successful than those relying only on offline sales. The article concludes that digitalization is both a challenge and an opportunity for MSMEs to adapt their traditional business models and survive or grow.
Modal Space Controller for Hydraulically Driven Six Degree of Freedom Paralle... (Dr. Amarjeet Singh)
This paper presents modal-space decoupled control for a hydraulically driven parallel mechanism. The approach is based on a singular-value decomposition of the joint-space inverse mass matrix and on mapping the control and feedback variables from joint space to the decoupled modal space. The method transforms the highly coupled six-input six-output dynamics into six independent single-input single-output (SISO) 1-DOF hydraulically driven mechanical systems. The novelty of the method is that the signals, including control errors, control outputs, and pressure feedbacks, are transformed into the decoupled modal space, and the proportional gains and dynamic pressure feedback are also tuned in modal space. The results indicate that a conventional controller can only properly attenuate the resonance peaks at the lower eigenfrequencies of the six rigid modes, while the peaks at relatively higher eigenfrequencies are overdamped. Further results show that it is very effective to design and tune the system in modal space: the bandwidth increases substantially except for the surge (x) and sway (y) motions, each degree of freedom can be tuned almost independently, and the bandwidths can be increased to near the undamped eigenfrequencies.
It is a known fact that a large number of steel industry expansion projects in India have been delayed by regulatory clearances, environmental issues, and problems pertaining to land acquisition. There are also challenges in the tendering phase that affect the viability of projects and thus delay implementation; the construction phase is beset with over-runs and disputes; and, last but not least, provider skills are weak across the value chain. Given the critical role of the steel sector in ensuring a sustained growth trajectory for India, it is imperative to identify the core issues affecting completion of infrastructure projects in India and chalk out initiatives to be acted upon in the short term as well as the long term.
A blockchain is a decentralised database that is shared across the nodes of a computer network, storing information in a digital format. The study primarily aims to explore how blockchain technology will alter several areas of the Indian economy in the future. The current study seeks a deeper understanding of the idea and implementation of blockchain technology in India, as well as the technology's potential as a disruptive financial technological innovation.
Secondary sources such as reports, journals, papers, and websites were used to compile all the data. Current and relevant information were utilised to help understand the research goals. All the information is rationally organised to fulfil the objectives. The current research focuses on recommendations for enhancing India's Blockchain ecosystem so that it may become one of the best in the world at utilising this new technology.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained momentum due to their numerous advantages over fossil fuels, advantages that go beyond sustainability to financial support and stability. This paper introduces a hybrid PV-EV system to support industrial and commercial plants. It covers the theoretical framework of the proposed hybrid system, including the equations required to complete a cost analysis when PV and EV are present, and presents the proposed design diagram that sets the priorities and requirements of the system. The proposed approach allows plants to improve their power stability, especially during power outages. The presented information helps researchers and plant owners complete the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farm support the theoretical work and highlight its benefits to existing plants. The short return on investment supports the novelty of the proposed approach for a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Comparative analysis between traditional aquaponics and reconstructed aquapon... (bijceesjournal)
The aquaponic system of planting is a method that does not require soil: it needs only water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. They make it possible to grow plants in small spaces, reduce artificial chemical use, and minimize excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for propagating tomatoes. The researchers created an observation checklist to determine the significant factors of the study, which aims to determine the significant difference between traditional and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system's higher growth yield results in a much more nourished crop than the traditional system: it is superior in number of fruits, height, weight, and girth. Moreover, the reconstructed system is shown to eliminate the hindrances present in the traditional system, namely overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Null Bangalore | Pentesters Approach to AWS IAM (Divyanshu)
# Abstract:
- Learn real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. We proceed with a brief discussion of IAM, some typical misconfigurations, and their potential exploits, in order to reinforce understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
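As a concrete illustration of the least-privilege policy in the scenario above, a minimal sketch might look like the following (the bucket name `demo-lab-bucket` and the `Sid` are placeholders, not from the talk):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LeastPrivilegeS3Access",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::demo-lab-bucket/*"
    }
  ]
}
```

Any action not listed (e.g. `s3:DeleteObject`, or access to other buckets) is implicitly denied, which is what the validation step should confirm.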
- Exploiting IAM PassRole Misconfiguration
- PassRole allows a user to pass a specific IAM role to an AWS service (e.g., EC2), typically used for service access delegation. A PassRole misconfiguration can then be exploited to grant unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation: a role is created with administrative privileges and a user is allowed to assume it.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses of human lives, property, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Data collection involved three key road events: normal street and normal drive, speed bumps, and circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for devices with limited power and memory. The developed system achieved 91.9% accuracy, 93.6% precision, and 92% recall. The inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms, requiring 2.6 kB of peak RAM and 139.9 kB of program flash memory, making it suitable for resource-constrained embedded systems.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient Silk Road trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on that power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and conventional and non-traditional security are explored and explained. Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, the study examines China's role in Central Asia. The study adheres to an empirical epistemological method and maintains objectivity, critically analysing primary and secondary research documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, thanks to the effective utilisation of key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Simulation of BRKSS Architecture for Data Warehouse Employing Shared Nothing Clustering
International Journal of Engineering and Management Research e-ISSN: 2250-0758 | p-ISSN: 2394-6962
Volume-9, Issue-1 (February 2019)
www.ijemr.net https://doi.org/10.31033/ijemr.9.1.1
This work is licensed under Creative Commons Attribution 4.0 International License.
Simulation of BRKSS Architecture for Data Warehouse Employing Shared Nothing Clustering
Bikramjit Pal¹ and Mallika De²
¹Assistant Professor, Department of Computer Science and Engineering, JIS University, Kolkata, West Bengal, INDIA
²Retired SSO (Professor Rank), Department of Engineering and Technological Studies, University of Kalyani, Kalyani, West Bengal, INDIA
¹Corresponding Author: biku_paul@rediffmail.com
ABSTRACT
The BRKSS Architecture is based upon shared nothing clustering that can scale up to a large number of computers, increase their speed and maintain the workload. The architecture comprises a console along with a CPU that also acts as a buffer and stores information based on the processing of transactions when a batch enters the system. This console is connected to a switch (p ports), which in turn is connected to c clusters through their respective hubs. The architecture can be used for personal databases and, through a router, for online databases such as the cloud. The architecture uses load balancing, moving transactions among the various nodes within the clusters so that the overhead of a particular node is minimised. In this paper we have simulated the working of the BRKSS architecture using JDK 1.7 with NetBeans 8.0.2. We compared the resulting performance parameters, such as turnaround time, throughput and waiting time, with the existing hierarchical clustering model.
Keywords-- BRKSS Architecture, Shared Nothing Clustering, Buffer, Cloud, Router, Switch, Hubs, Load Balancing, Databases, Turnaround Time, Throughput, Waiting Time, Ports
I. INTRODUCTION TO BRKSS ARCHITECTURE
A substantial amount of work has been done to enhance the performance of data warehouses in many different ways. In this paper, an architecture named the BRKSS Architecture [1] is simulated, which is based upon shared nothing clustering that can scale up to a large number of computers, increase their speed and maintain the workload. The architecture comprises a console along with a CPU that also acts as a buffer and stores information based on the processing of transactions when a batch enters the system. This console is connected to a switch (p ports), which in turn is connected to c clusters through their respective hubs. The architecture can be used for personal databases and, through a router, for online databases such as the cloud. As shown in Figure 1, the BRKSS Architecture comprises multiple nodes connected by a high speed LAN. Each node has its own Processor (P), Memory (M) and Disk (D).
Fig. 1. BRKSS Architecture
Maximum Possible Number of Clusters and Nodes
Here, the number of clusters formed and the number of nodes depend upon the number of ports in the switch. Two ports of the switch are used for connecting the console and the router. Suppose that d is the number of nodes in each cluster. Table 1 gives an idea of the maximum number of clusters that could be formed. Switches of up to 64 ports are shown here; this could be increased depending on how large the data warehouse is.
II. PROPOSED ALGORITHM
To overcome the limitations of load balancing in shared nothing clustering, inter-query parallelism has been implemented in the proposed algorithm, where many diverse queries or transactions are executed in parallel with one another on many processors. This will not only increase the throughput but will also scale up the system.
The steps of the algorithm are stated below:
Step–1 : Consider the number of transactions entering the system in batch mode.
[Suppose there are m transactions in a batch]
Step–2 : Check the number of clusters.
[Suppose c is the number of clusters]
Step–3 : Calculate the maximum value for each cluster (maxc) and each node (maxn):
maxc = m / c
maxn = maxc / d
where maxc = 0 and maxn = 0 initially, and d is the number of nodes in a cluster.
Step–4 : Distribute all the transactions evenly among the clusters based upon the maxc value and among the nodes based upon the maxn value.
Node Based
Step–5 : Now calculate maxq = maxn / 10,
where maxq is the number of transactions that enter the MLFQ each time for execution. Also calculate remn = maxn – maxq for each node,
where remn is the remaining number of transactions of a node.
Step–6 : Now, for node based load balancing, perform MLFQ scheduling in each node.
Step–6 (a) : Allocate a ready queue to the processor of each node and split the ready queue into q queues.
Step–6 (b) : Give the highest priority to q0, the first queue, and the lowest priority to qn, the last queue.
Step–6 (c) : Perform Round Robin Scheduling from q0 to qn-1 and FCFS in qn.
Step–6 (d) : Follow the MLFQ rules while performing the scheduling.
Considering two jobs A and B entering the queue, apply the following rules:
Rule–1 : If Priority(A) > Priority(B), A runs (B doesn't).
Rule–2 : If Priority(A) = Priority(B), A and B both run in round robin.
Rule–3 : When a job enters the system, it is placed at the highest priority, that is, in the topmost queue.
Rule–4 : Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced, that is, it moves down one queue. This is called the Gaming Tolerance.
Rule–5 : After some time period S, move all the jobs in the system to the topmost queue. This is also known as a Priority Boost.
The above rules apply to a transaction or a query as well.
Step–6 (e) : At the end of each transaction, take up a new one from remn.
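Rules 1–5 above can be sketched in a few lines of Python (an illustration only; the paper's simulation itself was written in Java, and the queue count, time allotment and boost period below are placeholder values):

```python
from collections import deque

class MLFQ:
    """Minimal sketch of the MLFQ rules of Step-6. Queues q0..q(n-2) use
    round robin with a fixed time allotment; the last queue runs FCFS."""

    def __init__(self, levels=3, quantum=2, boost_period=50):
        self.queues = [deque() for _ in range(levels)]
        self.quantum = quantum            # time allotment per level
        self.boost_period = boost_period  # period S in Rule-5 (Priority Boost)
        self.clock = 0
        self.next_boost = boost_period

    def admit(self, job_id, burst):
        # Rule-3: a new job is placed in the topmost (highest priority) queue.
        self.queues[0].append([job_id, burst])

    def run(self):
        finished = []
        while any(self.queues):
            # Rule-5: after every period S, move all jobs to the topmost queue.
            if self.clock >= self.next_boost:
                for q in self.queues[1:]:
                    while q:
                        self.queues[0].append(q.popleft())
                self.next_boost += self.boost_period
            # Rules 1-2: serve the highest-priority non-empty queue,
            # round robin among jobs of equal priority.
            level = next(i for i, q in enumerate(self.queues) if q)
            job = self.queues[level].popleft()
            last_level = level == len(self.queues) - 1
            run_for = job[1] if last_level else min(self.quantum, job[1])
            job[1] -= run_for
            self.clock += run_for
            if job[1] == 0:
                finished.append(job[0])
            else:
                # Rule-4: the job used its allotment, so demote it one queue.
                self.queues[min(level + 1, len(self.queues) - 1)].append(job)
        return finished
```

A long job is demoted level by level until it reaches the FCFS queue, while short transactions finish at high priority, which is what lets the scheduler favour interactive queries without advance knowledge of their lengths.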
Step–6 (f) : After a time interval tz, the status regarding the number of executed transactions and the remaining transactions is sent to the buffer from each node of a cluster.
Step–7 : If the value of remn does not become 0 within time tz, perform node based load balancing through the push migration approach.
Step–7 (a) : After receiving the status, check in the buffer:
(i) If remn = maxn / 2 in all the nodes, the situation is stable; continue with the execution and move to Step–9.
(ii) If remn > maxn / 2 in all the nodes, give them more time to reach the stable situation and then move to Step–9.
(iii) If remn = maxn / 2 in half of the nodes and remn > maxn / 2 in the other half, give some time for execution so that most of the nodes reach either remn < maxn / 2 or remn = maxn / 2. Then move to Step–9.
(iv) If in most of the nodes remn is much less than maxn / 2 and in a few nodes remn = maxn / 2, continue with the execution and then move to Step–9.
(v) If remn is much less than maxn / 2 in most nodes and in some nodes remn > maxn / 2, start performing load balancing.
Step–7 (b) : When condition 7 (a) (v) occurs in a node (or nodes), send a signal to the console through the switch.
Step–7 (c) : The console in return sends an instruction to the node(s) to submit the remaining transactions remn.
Step–7 (d) : Redistribute remn among the other nodes, subject to the condition remn <= maxn / 2.
Step–8 : Continue Step–7 (a) to Step–7 (d) until maxc gets executed.
Step–9 : With the end of all the transactions, a new maxc will enter; repeat the above steps.
Cluster Based
If, after time ty, the console does not get any information regarding a particular cluster, it will assume that a failover has occurred in that cluster. The console will then perform cluster based load balancing to shift the load of the failed cluster to the rest of the active clusters.
Step–10 : After a time interval ty, the console checks the executed transactions maxe and the remaining transactions remc for each cluster, and a copy of the remc transactions is sent to the buffer:
remc = maxc – maxe
Step–11 : Perform cluster based load balancing through the push migration approach when a cluster failover takes place.
Step–11 (a) : After time ty, check in the buffer:
(i) If remc = maxc / 2 in all the active clusters, the situation is stable; continue with the execution and wait for condition 11 (a) (v) to occur.
(ii) If remc > maxc / 2 in all the active clusters, give them more time to reach the stable situation and wait for condition 11 (a) (v) to occur.
(iii) If remc = maxc / 2 in half of the active clusters and remc > maxc / 2 in the other half, give some time for execution so that most of the clusters reach either remc < maxc / 2 or remc = maxc / 2, and wait for condition 11 (a) (v) to occur.
(iv) If in most of the active clusters remc is much less than maxc / 2 and in a few active clusters remc = maxc / 2, continue with the execution and wait for condition 11 (a) (v) to occur.
(v) If remc is much less than maxc / 2 in all the active clusters, perform load balancing.
Step–11 (b) : Redistribute remc of the failed cluster among the other active clusters so as to satisfy the condition remc <= maxc / 2 in the active clusters.
Step–12 : Continue Step–11 (a) and Step–11 (b) until m gets executed.
Step–13 : At the end of all the transactions, a new batch will enter; repeat the above steps.
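The push migration redistribution used in Steps 7 and 11 can be sketched as follows (our reading of the algorithm, not the authors' Java code; `rem` stands for the per-node remn or per-cluster remc values, and `threshold` for maxn / 2 or maxc / 2):

```python
def push_migrate(rem, threshold):
    """Move transactions from units above the stability threshold to units
    below it, so every unit ends with rem <= threshold where capacity allows.
    Mutates `rem` in place and returns the number of transactions moved."""
    moved = 0
    donors = [i for i, r in enumerate(rem) if r > threshold]
    receivers = [i for i, r in enumerate(rem) if r < threshold]
    for d in donors:
        excess = rem[d] - threshold
        for r in receivers:
            if excess == 0:
                break
            room = threshold - rem[r]  # receivers stay at or below threshold
            take = min(excess, room)
            rem[r] += take
            rem[d] -= take
            excess -= take
            moved += take
    return moved
```

With the paper's example values, a node holding remn = 9,000 against a threshold of 7,500 sheds 1,500 transactions to an underloaded node, which matches the node based example below.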
Example: Suppose the number of transactions (m) in one batch is 1,80,000, the number of clusters (c) = 3 and the number of nodes in each cluster (d) = 4.
Then,
maxc = 1,80,000 / 3 = 60,000
maxn = 60,000 / 4 = 15,000
The stable condition for each node is given by:
maxn / 2 = 7,500
maxq = maxn / 10 = 1,500
Node Based Load Balancing
The number of transactions entering the MLFQ will be maxq or a multiple of maxq, such as:
1500 * 1 = 1500
1500 * 2 = 3000
1500 * 3 = 4500
1500 * 4 = 6000
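The arithmetic of this worked example can be checked with a short script (Python used purely for illustration; the paper's simulation itself was written in Java):

```python
# Worked example from the paper: one batch of m = 1,80,000 (i.e. 180,000)
# transactions, c = 3 clusters, d = 4 nodes per cluster.
m, c, d = 180_000, 3, 4

max_c = m // c        # transactions per cluster
max_n = max_c // d    # transactions per node
max_q = max_n // 10   # transactions entering the MLFQ at a time
stable = max_n // 2   # stability threshold maxn / 2

assert (max_c, max_n, max_q, stable) == (60_000, 15_000, 1_500, 7_500)
# MLFQ intake sizes are maxq or multiples of it:
assert [max_q * k for k in range(1, 5)] == [1500, 3000, 4500, 6000]
```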
At each tz interval, a status of the nodes is sent to the console. The console gets information about the remaining transactions of each node (remn) and decides whether execution should simply continue or whether load balancing is required.
[Iteration tables showing the per-node status are not reproduced here.]
After the third iteration, remn in d1 and d2 is much less than maxn / 2 and d4 is stable, but in d3 remn > maxn / 2, so node based load balancing is performed. Here, 1,500 transactions are taken away from d3, making it stable, and that load is put into either d1 or d2.
While the above iterations are performed, the status of all the nodes and their clusters goes to the console and is updated on a regular basis.
Cluster Based Load Balancing
The console gets information in the time interval ty about the number of executed transactions, maxe, and a copy of all the remaining transactions, remc. So, when a failover of any cluster occurs, the console sends the unexecuted copy of the transactions of the failed cluster to the other clusters.
[Iteration tables showing the per-cluster status are not reproduced here.]
At the end of the ty1 interval, the console holds the status of maxe and a copy of remc until the ty2 execution ends successfully. After that it holds a copy of remc and the status of maxe until the ty3 execution ends successfully.
In ty2, a failover occurs, and the console, which is holding the value of remc from the ty1 interval, distributes it to the other active clusters until they themselves come to a value much less than maxc / 2.
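The console's failover handling in Step–11 (b) can be sketched as follows (our illustration under stated assumptions, not the authors' code; cluster names and the threshold value are placeholders):

```python
def on_cluster_failover(buffer_rem, failed, threshold):
    """On a failover, the console takes its buffered copy of the failed
    cluster's remaining transactions remc and spreads it over the active
    clusters, keeping each at or below the threshold maxc / 2 where capacity
    allows. Returns any transactions that could not be placed."""
    pending = buffer_rem.pop(failed)  # console's copy from the last ty checkpoint
    # Fill the least loaded active clusters first.
    for cluster in sorted(buffer_rem, key=buffer_rem.get):
        if pending == 0:
            break
        room = max(threshold - buffer_rem[cluster], 0)
        share = min(room, pending)
        buffer_rem[cluster] += share
        pending -= share
    return pending
```

Because the console already holds a copy of remc from the previous checkpoint, no communication with the failed cluster is needed at redistribution time, which is what makes the buffered copy central to the failover design.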
III. SIMULATION OF THE ALGORITHM AND RESULT ANALYSIS
The BRKSS algorithm has been simulated using JDK 1.7 with NetBeans 8.0.2, and the database has been maintained in MySQL. The algorithm takes the following user inputs: the number of clusters, the number of nodes, and the user queries, which may be numerous at a particular time period. User queries are the transactions that determine the performance of a DW. The output is obtained for turnaround time, waiting time and throughput for a given set of inputs, and the result is compared with the existing pseudo mesh schema.
We have discussed the results for three cases, which are shown in Table 3.1, Table 3.2 and Table 3.3. The comparative result analysis of the proposed and existing hierarchical clustering models is also displayed graphically.
Graphical Representation
Comparison with Existing Model
Graphical Representation
Comparison with Existing Model
Graphical Representation
Comparison with Existing Model
As discussed, this architecture is based upon shared nothing clustering that can scale up to a large number of computers, increase their speed and maintain the workload. To support this, the proposed algorithm has been simulated, and the results show that the performance of BRKSS is better than that of the existing algorithm.
IV. CONCLUSION
The simulation of the BRKSS algorithm has given positive results in its favour when compared with the existing hierarchical clustering algorithm in terms of turnaround time, throughput and waiting time. Moreover, the results are consistent across different permutations and combinations of nodes and clusters.
REFERENCES
[1] Pal, B., Chowdhury, R., Verma, K. G., Dasgupta, S., & Dutta, S. (2016). Proposed BRKSS architecture for performance enhancement of data warehouse employing shared nothing clustering. International Science Press, 9(21), 111–121.
[2] Lee, S. (2011). Shared-nothing vs. shared-disk cloud database architecture. International Journal of Energy, Information and Communications, 2(4), 211–216.
[3] Popeanga, J. (2014). Shared-nothing cloud data warehouse architecture. Database Systems Journal, V(4), 3–11.
[4] Müseler, T. (2012). A survey of shared-nothing parallel database management systems [Comparison between Teradata, Greenplum and Netezza implementations]. Available at: https://pdfs.semanticscholar.org/0365/e61fbf06872334f8063ff03fbe1a260f210c.pdf?_ga=2.38538275.1267149839.1549360061-719258772.1545998512.
[5] DeWitt, D. J., & Gray, J. (1991). Parallel database systems: The future of database processing or a passing fad?. Available at: https://jimgray.azurewebsites.net/papers/CacmParallelDB.pdf.
[6] Furtado, P. (2009). A survey on parallel and distributed data warehouses. IGI Publishing, 5(2), 57–77.
[7] Minhas, U. F., Lomet, D., & Thekkath, C. A. (2011). Chimera: Data sharing flexibility, shared nothing simplicity. IDEAS, Springer Verlag.
[8] Datta, A., Moon, B., & Thomas, H. (1998). A case for parallelism in data warehousing and OLAP. Proceedings of the 9th International Conference on Database and Expert Systems Applications.
[9] DeWitt, D. J., & Gray, J. (1992). Parallel database systems: The future of high performance database systems. Communications of the ACM, 35(6), 85–98.
[10] Ezeife, C. I., & Barker, K. (1995). A comprehensive approach to horizontal class fragmentation in a distributed object based system. Distributed and Parallel Databases, 1, 247–272.
[11] Ezeife, C. I. (1998). A partition-selection scheme for warehouse aggregate views. International Conference of Computing and Information, Manitoba, Canada.
[12] Jurgens, M. & Lenz, H.-J. (1999). Tree based indexes vs. bitmap indexes: A performance study. Available at: https://www.researchgate.net/publication/220841960_Tree_Based_Indexes_vs_Bitmap_Indexes_A_Performance_Study.
[13] Kimball, R. (1996). The data warehouse toolkit (3rd ed.). Wiley and Sons, Inc. Available at: http://www.essai.rnu.tn/Ebook/Informatique/The%20Data%20Warehouse%20Toolkit,%203rd%20Edition.pdf.
[14] Patel, A., & Patel, J. M. (2012). Data modeling techniques for data warehouse. International Journal of Multidisciplinary Research, 2(2), 240–246.
[15] Farhan, M. S., Marie, M. E., El-Fangary, L. M., & Helmy, Y. K. (2011). An integrated conceptual model for temporal data warehouse security. Computer and Information Science, 4(4), 46–57.
[16] Eder, J. & Koncilia, C. (2001). Changes of dimension data in temporal data warehouses. Proceedings of the Third International Conference on Data Warehousing and Knowledge Discovery, Munich, Germany, LNCS, Springer, 284–293.
[17] Golfarelli, M., Maio, D., & Rizzi, S. (1998). The dimensional fact model: A conceptual model for data warehouses. International Journal of Cooperative Information Systems, 7(2-3), 215–247.
[18] Golfarelli, M. & Rizzi, S. (1998). A methodological framework for data warehouse design. Proceedings of the ACM First International Workshop on Data Warehousing and OLAP, DOLAP, Washington, 3–9.
[19] Bernardino, J. & Madeira, H. (2019). Data warehousing and OLAP: Improving query performance using distributed computing. Available at: https://www.academia.edu/4241918/Data_Warehousing_and_OLAP_Improving_Query_Performance_Using_Distributed_Computing.
[20] Albrecht, J., Gunzel, H., & Lehner, W. (1998). An architecture for distributed OLAP. International Conference on Parallel and Distributed Processing Techniques and Applications. Available at: https://www.tib.eu/en/search/id/TIBKAT%3A316251313/Proceedings-of-the-International-Conference-on/.
[21] Comer, D. (1979). The ubiquitous B-tree. ACM Computing Surveys, 11(2), 121–137.