The document describes a min-cut based algorithm for power-aware scheduling that aims to minimize total leakage power while satisfying timing and resource constraints. It initializes all operations to high threshold voltage. If timing constraints are violated, it uses min-cut to select operations to switch to low threshold voltage. It then performs modified force-directed scheduling, checking resource constraints and using min-cut on a mobility overlap graph to select operations to switch voltages if constraints are violated. The output satisfies both timing and resource constraints with minimum leakage power.
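The min-cut selection step can be illustrated with a small, self-contained sketch (this is not the paper's actual formulation; the toy graph, node names, and capacities below are invented). A BFS-based max-flow (Edmonds-Karp) yields the min cut; in a voltage-assignment reading, operations landing on the sink side of the cut would be the ones switched to low threshold voltage.

```python
from collections import deque

def add_edge(g, u, v, c):
    """Directed edge u->v with capacity c, plus a zero-capacity reverse edge
    so the residual graph can be traversed in both directions."""
    g.setdefault(u, {})
    g[u][v] = g[u].get(v, 0) + c
    g.setdefault(v, {}).setdefault(u, 0)

def min_cut(g, s, t):
    """Edmonds-Karp max flow on residual graph g (mutated in place).
    Returns (max-flow value, set of nodes on the source side of the cut)."""
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:        # BFS for a shortest augmenting path
            u = q.popleft()
            for v, c in g[u].items():
                if v not in parent and c > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                 # no augmenting path left: done
            return total, set(parent)
        b, v = float("inf"), t              # bottleneck capacity on the path
        while parent[v] is not None:
            b = min(b, g[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:        # push flow along the path
            u = parent[v]
            g[u][v] -= b
            g[v][u] += b
            v = u
        total += b
```

Usage on a toy two-operation graph: `add_edge` calls from a source `S` and to a sink `T` with made-up capacities, then `min_cut(g, "S", "T")` returns the cut value and the source-side node set.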
Improving Truck-Shovel Energy Efficiency through Discrete Event Modeling (Kwame Awuah-Offei)
Presented at Society for Mining, Metallurgy & Exploration (SME) 2012 Annual Meeting. This talk covered research done with funding from Illinois Clean Coal Institute (ICCI).
The document discusses mining the top-K multidimensional gradients from a fact table. It proposes a gradient-based cubing approach that partitions gradient regions based on spreading factors to efficiently retrieve the top-K gradient cells. The approach builds inverted indexes and value-list indices from the base table before pruning non-valid regions and cells. It then calculates spreading factors to create a GR tree and further partition valid gradient regions.
The document appears to be a presentation on RedisGraph internals and query execution. It discusses topics like query transmission, AST construction, execution plan construction, optimization of execution plans, and result filtering and scanning. Various examples of graph queries and their corresponding execution plans are shown.
Doing in One Go: Delivery Time Inference Based on Couriers’ Trajectories (ivaderivader)
1) The document describes a system called DTInf that uses courier trajectory data to infer parcel delivery times. It addresses challenges like inaccurate geocoded addresses and different reasons for stay points.
2) DTInf corrects geocoded addresses using historical delivery data and groups waybills into delivery events before matching each event to the most likely stay point.
3) An evaluation on real courier datasets finds DTInf outperforms baselines at accurately inferring delivery times, especially after refining its delivery location correction and delivery event modeling. The system has also been deployed for use by couriers and logistics companies.
This document discusses optimizations for high performance and energy efficient implementations of the Smith-Waterman algorithm on FPGAs using OpenCL. It describes an architecture with a systolic array for parallel computation along anti-diagonals and compression techniques to address the memory-bound nature. Experimental results on two FPGA boards show up to 42.5 GCUPS performance with the best performance/power ratio compared to CPUs and other FPGA implementations.
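The anti-diagonal parallelism that the systolic array exploits follows from the Smith-Waterman recurrence itself: every cell on anti-diagonal i + j depends only on the two previous anti-diagonals. A plain (sequential) reference version, with default scores chosen for illustration only:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between strings a and b.
    H[i][j] depends only on H[i-1][j-1], H[i-1][j], H[i][j-1], so all cells
    with the same i + j (one anti-diagonal) are independent — the property
    the FPGA systolic array uses to update a whole anti-diagonal per cycle."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```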
The document presents a novel spatial query called the K-Best Site Query (KBSQ). The KBSQ finds the K sites from a set of sites S that minimize the total distance from each object to its closest site. The document proposes two approaches for processing the KBSQ - a straightforward approach and the KBSQ algorithm. The straightforward approach directly computes distances and considers all possible site combinations, while the KBSQ algorithm leverages spatial indexes like the R*-tree and Voronoi diagram to improve efficiency. Experimental results demonstrate the KBSQ algorithm outperforms the straightforward approach for large datasets.
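The straightforward approach can be stated in a few lines (coordinates below are made up for illustration; the paper's KBSQ algorithm replaces this exhaustive enumeration with R*-tree and Voronoi-based pruning):

```python
from itertools import combinations
from math import dist

def kbsq_bruteforce(objects, sites, k):
    """Straightforward K-Best Site Query: try every k-subset of sites and
    keep the one minimizing the total object-to-nearest-site distance."""
    best_combo, best_cost = None, float("inf")
    for combo in combinations(sites, k):
        cost = sum(min(dist(o, s) for s in combo) for o in objects)
        if cost < best_cost:
            best_combo, best_cost = combo, cost
    return set(best_combo), best_cost
```

The combinatorial cost, C(|S|, k) subset evaluations, is exactly why the indexed algorithm wins on large datasets.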
Variational quantum gate optimization on superconducting qubit system (HeyaKentaro)
This document proposes and experimentally demonstrates a variational quantum gate optimization method on a superconducting qubit system. The method uses a parameterized quantum circuit ansatz, a specialized optimizer, and input-output constraints to achieve fast convergence. Experiments applying this method to generate ZXR gates on a four-qubit superconducting chip achieve fidelities close to the coherence limit, demonstrating improved performance over conventional optimization techniques.
A basic tutorial on using Wannier90 with the VASP code. Includes a brief overview of Wannier functions, tips on how to build VASP with Wannier90 support, and how to use the VASP/Wannier90 interface to compute an HSE06 band structure and perform some other Wannier90 post processing.
This document discusses pipelined processors and different approaches to pipelining. It describes ideal pipelining and how clock period is determined. It then discusses challenges with pipelining like hazards and clocking overhead. Different techniques for pipelining like conventional pipelining, wave pipelining, and self-timed circuits are explained. Issues with wave pipelining like timing constraints and balancing delays are also covered.
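The clock-period determination mentioned above reduces to a one-line formula for conventional pipelining: the slowest stage sets the pace, plus per-stage register overhead. A minimal sketch (the overhead values are illustrative defaults, not from the document):

```python
def pipeline_clock(stage_delays_ns, t_setup_ns=0.1, t_clk_to_q_ns=0.1):
    """Conventional pipeline clock period: the slowest stage's combinational
    delay plus latch/register overhead (setup time + clock-to-Q delay).
    Returns (period in ns, frequency in MHz)."""
    period = max(stage_delays_ns) + t_setup_ns + t_clk_to_q_ns
    return period, 1000.0 / period
```

Wave pipelining removes the registers (and hence the overhead term) but replaces it with the tight min/max path-delay balancing constraints the document discusses.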
This document summarizes the performance of an algebraic multigrid solver on leading multicore architectures. It describes how the multigrid solver works by repeating pre-smoothing, coarse-grid correction, and post-smoothing steps until convergence. It also discusses the SPE10 oil reservoir modeling benchmark problem being solved, the Cray XC30 and Intel Xeon Phi machines studied, and optimizations that improved the performance of the PCG solver. Charts are included showing runtimes, where time is spent in the AMG cycle, and how parameters affect performance.
Experimental Evaluation of a Novel Fast Beamsteering Algorithm for Link Re-Es... (Avishek Patra)
The millimeter-wave (mm-wave) bands are currently being explored for multi-Gbps wireless local area networks (WLANs). Directional antennas are required to overcome the high attenuation inherent at the mm-wave frequencies. However, directionality makes link maintenance and establishment tasks complex, especially under node mobility, as slight misalignment of antenna beams between nodes leads to link disruption. Consequently, low latency beamsteering algorithms are needed for fast link re-establishment to support seamless data provisioning. Solutions based on exhaustive sequential scanning induce high latency, thereby disrupting communication. On the other hand, existing low latency proposals typically consider only static links, depend on additional hardware, or require a priori information about the network environment. In this paper, we propose a generic, fast mm-wave beamsteering algorithm that utilizes the previous valid link information to initiate the feasible antenna sector pair search and adaptively increases the sector search space around it to re-establish a link. Additionally, we experimentally evaluate the performance of our algorithm through measurements conducted in a real indoor environment using 60 GHz packet radio transceivers. The results show that, compared to exhaustive sequential scanning, our algorithm reduces the required sector search space, and thereby the link re-establishment latency, by 89% on average.
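The search strategy described above can be sketched as follows (a simplified illustration, not the paper's algorithm: the sector count, quality function, and threshold are hypothetical). The search starts at the last valid (tx, rx) sector pair and widens the window ring by ring, so a small misalignment is found after probing only a handful of pairs instead of all n² combinations:

```python
def adaptive_sector_search(quality, prev_pair, n_sectors, threshold):
    """Probe (tx, rx) sector pairs in expanding windows around the previous
    valid pair until one meets the link-quality threshold; fall back to a
    full scan if none does. Returns (pair, number of probes) so the saving
    over exhaustive n_sectors**2 probing is visible."""
    tx0, rx0 = prev_pair
    probed = set()
    for radius in range(n_sectors):
        for dtx in range(-radius, radius + 1):
            for drx in range(-radius, radius + 1):
                pair = ((tx0 + dtx) % n_sectors, (rx0 + drx) % n_sectors)
                if pair in probed:          # already tried in a smaller ring
                    continue
                probed.add(pair)
                if quality(pair) >= threshold:
                    return pair, len(probed)
    return None, len(probed)
```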
Making TIAM-MACRO-SA an Integrated Assessment Model (IEA-ETSAP)
The document outlines recent enhancements to the TIMES energy systems modeling framework between versions 3.4.2 and 3.8.1. Key updates include: 1) Improved algorithms for macroeconomic decomposition and integration of climate impacts; 2) New options for modeling variable renewable generation and ensuring sufficient capacity; and 3) Enhanced reporting capabilities and additional constraint formulations. Documentation of the changes is ongoing across user guides and technical notes.
Smart mm-Wave Beam Steering Algorithm for Fast Link Re-Establishment under No... (Avishek Patra)
Millimeter-wave (mm-wave) wireless local area networks (WLANs) are expected to provide multi-Gbps connectivity by exploiting a large amount of unoccupied spectrum in e.g. the unlicensed 60 GHz band. However, to overcome the high path loss inherent at these high frequencies, mm-wave networks must employ highly directional beamforming antennas, which make link establishment and maintenance much more challenging than in traditional omnidirectional networks. In particular, maintaining connectivity under node mobility necessitates frequent re-steering of the transmit and receive antenna beams to re-establish a directional mm-wave link. A simple exhaustive sequential scanning to search for new feasible antenna sector pairs may introduce excessive delay, potentially disrupting communication and lowering the QoS. In this paper, we propose a smart beam steering algorithm for fast 60 GHz link re-establishment under node mobility, which uses knowledge of previously feasible sector pairs to narrow the sector search space, thereby reducing the associated latency overhead. We evaluate the performance of our algorithm in several representative indoor scenarios, based on detailed simulations of signal propagation in a 60 GHz WLAN in WinProp with realistic building materials. We study the effect of indoor layout, antenna sector beamwidth, node mobility pattern, and device orientation awareness. Our results show that the smart beam steering algorithm achieves a 7-fold reduction of the sector search space on average, which directly translates into lower 60 GHz link re-establishment latency. Our results also show that our fast search algorithm selects the near-optimal antenna sector pair for link re-establishment.
Jorge Silva, Sr. Research Statistician Developer, SAS at MLconf ATL - 9/18/15 (MLconf)
Estimating the Number of Clusters in Big Data with the Aligned Box Criterion: Finding the number, k, of clusters in a dataset is a fundamental problem in unsupervised learning. It is also an important business problem, e.g. in market segmentation. Existing approaches include the silhouette measure, the gap statistic and Dirichlet process clustering. For thirty years SAS procedures have included the option of using the cubic clustering criterion (CCC) to estimate k. While CCC remains competitive, we propose a significant and original improvement, referred to herein as the aligned box criterion (ABC). Like CCC, ABC is based on a hypothesis-testing framework, but instead of a heuristic measure we use data-adaptive reference distributions to generate more realistic null hypotheses in a scalable and easily parallelizable manner. We have implemented ABC using SAS’ High Performance Analytics platform, and achieve state-of-the-art accuracy in the estimation of k.
We introduce a sparse kernel learning framework for the Continuous Relevance Model (CRM). State-of-the-art image annotation models linearly combine evidence from several different feature types to improve image annotation accuracy. While previous authors have focused on learning the linear combination weights for these features, there has been no work examining the optimal combination of kernels. We address this gap by formulating a sparse kernel learning framework for the CRM, dubbed the SKL-CRM, that greedily selects an optimal combination of kernels. Our kernel learning framework rapidly converges to an annotation accuracy that substantially outperforms a host of state-of-the-art annotation models. We make two surprising conclusions: firstly, if the kernels are chosen correctly, only a very small number of features are required to achieve superior performance over models that utilise a full suite of feature types; and secondly, the standard default selection of kernels commonly used in the literature is sub-optimal, and it is much better to adapt the kernel choice based on the feature type and image dataset.
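Greedy forward selection of this kind has a simple generic shape (a sketch only — the real SKL-CRM scores candidates by annotation accuracy on validation data; the `score` callable below stands in for that and the kernel names are placeholders):

```python
def greedy_kernel_selection(candidates, score, max_kernels=None):
    """Greedy forward selection: repeatedly add the candidate kernel whose
    inclusion most improves a validation score; stop when no candidate
    improves on the current best. Returns (selected kernels, best score)."""
    selected, best = [], score([])
    while candidates and (max_kernels is None or len(selected) < max_kernels):
        gains = [(score(selected + [k]), k) for k in candidates]
        new_best, k = max(gains)
        if new_best <= best:        # nothing improves: sparse solution found
            break
        selected.append(k)
        candidates = [c for c in candidates if c != k]
        best = new_best
    return selected, best
```

The early stop is what makes the combination sparse, matching the paper's observation that only a few well-chosen kernels are needed.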
This document discusses resolving conflicts between MRP (Material Requirements Planning) and Lean. It begins with an MRP demonstration and shows how applying a lot size to production order releases rather than receipts can reduce inventory levels. A computer algorithm is proposed to optimally calculate order quantities based on lot size and yield. Results showed that reducing lot sizes decreases lead times, work-in-process, and inventory while avoiding overproduction. The document concludes that MRP and Lean can co-exist if MRP is optimized for planning and Lean is used for execution.
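The order-quantity calculation from lot size and yield can be sketched in a few lines (a generic MRP-style computation under stated assumptions, not the document's specific algorithm; the example quantities are invented):

```python
from math import ceil

def planned_order_release(net_requirement, lot_size, yield_rate):
    """Order quantity to release: inflate the net requirement by the
    expected process yield (scrap allowance), then round up to a whole
    number of lots."""
    units_to_start = ceil(net_requirement / yield_rate)  # cover expected scrap
    lots = ceil(units_to_start / lot_size)               # whole lots only
    return lots * lot_size
```

For example, a net requirement of 95 units at 80% yield needs 119 good starts, hence 3 lots of 50. Shrinking the lot size reduces this rounding-up overproduction, which is the lever the document's results exploit.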
The document discusses the public distribution system in India. It provides an overview of the evolution and objectives of PDS, the procurement and distribution of grains, issues like diversion and wastage, recent scams, and ongoing reforms including introducing IT and using Aadhar IDs. The key goals of PDS are to provide essential items at reasonable prices, influence open market prices, and promote social welfare. However, significant problems include grain diversion, storage losses, and lack of transparency leading to large scams. Reforms aim to modernize PDS using new technologies and a targeted delivery system.
This paper presents an RFID-based Smart Ration System that would overcome the drawbacks of the conventional ration system. In the conventional system, the weight of material dispensed may be inaccurate due to human error, and if material is not purchased by the customer, at the end of the month the distributor sells it for his own profit without government permission. The proposed system issues RFID tags to customers instead of conventional ration cards; the tags are scanned at the distributor, and the customer automatically receives the required material.
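The scan-and-dispense flow implied above amounts to an entitlement lookup plus a running issue ledger. A minimal sketch (tag IDs, commodities, and quotas here are hypothetical, not from the paper):

```python
# Hypothetical monthly entitlements per RFID tag; quantities in kg / litres.
ENTITLEMENTS = {"TAG001": {"rice": 10, "wheat": 5, "kerosene": 2}}
DISPENSED = {}  # tag -> {commodity: quantity already issued this month}

def scan_and_dispense(tag_id, commodity, qty):
    """Validate a scanned tag against its remaining entitlement, record the
    issue, and return the quantity actually dispensed (0 if nothing left)."""
    quota = ENTITLEMENTS.get(tag_id, {}).get(commodity, 0)
    issued = DISPENSED.setdefault(tag_id, {}).get(commodity, 0)
    grant = min(qty, quota - issued)
    if grant <= 0:
        return 0
    DISPENSED[tag_id][commodity] = issued + grant
    return grant
```

Because every issue is recorded against the tag, end-of-month diversion of unclaimed stock becomes visible in the ledger, which is the transparency gain the paper argues for.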
Here are some key observations from on-site visits to FPS and interviews with beneficiaries:
- Ration cards are not updated regularly with the latest household details (additions, deletions, etc.).
- Stock registers maintained by FPS owners are often incomplete or tampered with.
- FPS owners sometimes distribute less quantity than entitled or divert stock meant for PDS.
- Many beneficiaries complained of getting poor quality, damaged or wet stock.
- Transporters are involved in pilferage while transporting stock from depots to FPS.
- There is no mechanism to track movement of vehicles carrying PDS stock.
- Multiple/fake ration cards are being used to divert PDS stock meant for the poor.
The document discusses food security and the public distribution system (PDS) in India. It provides background on hunger hotspots and the evolution of the PDS. Key points include: India ranks 94th on the Global Hunger Index; states like Jharkhand, Chhattisgarh and Bihar have very high levels of food insecurity. The PDS was revamped in 1992 and further targeted in 1997 to focus on below poverty line families. It currently provides subsidized grains to over 250 million families through fair price shops.
This document summarizes an academic paper that proposes automating ration shops in India using programmable logic controllers (PLCs). Currently, ration shops distribute essential goods manually, which can result in inaccurate quantities, illegal diversions, and long wait times. The proposed automated system would minimize manual intervention and improve transparency and efficiency. It would use sensors and PLCs to automatically measure and dispense goods like rice, sugar, and kerosene based on user input. The document provides details on the proposed system design and components, including storage tanks, delivery mechanisms, sensors, and PLC programming.
This document discusses India's Public Distribution System (PDS) ration cards. It outlines the different types of ration cards (Green, Yellow, Antyodaya, APL), who they are issued to, and what commodities cardholders are entitled to at subsidized prices (rice, wheat, sugar, kerosene). The document also lists the documentation required to apply for a ration card and the fees associated with different card types.
The document discusses the logistics involved in India's Public Distribution System (PDS). The PDS procures staple foods like rice and wheat and distributes them through a network of over 462,000 fair price shops to millions of Indian families with ration cards. It describes the key entities involved, including central and state governments, traders who operate fair price shops, and consumers. It then outlines the logistical processes of procurement, storage, transportation, bulk allocation, distribution to shops, and purchases by consumers at subsidized prices. The goal of the PDS is to ensure food security for the people of India.
This document discusses cost analysis and contains the following key points:
1. Cost analysis is important for managers to find lower cost production methods and compete against firms with lower costs. It examines concepts like fixed, variable, average, and marginal costs.
2. Short-run and long-run cost functions are presented, showing relationships between costs and output. Economies of scale can cause long-run average costs to decline with increased output up to a point.
3. The Cobb-Douglas production function is described and used to derive long-run cost functions based on factor inputs and returns to scale. Constant, increasing, and decreasing returns to scale impact the shape of the long-run average cost curve
Benchmarking Elastic Cloud Big Data Services under SLA ConstraintsNicolas Poggi
The document proposes a new benchmark called Elasticity Test (ET) to evaluate elastic cloud big data systems under service level agreement (SLA) constraints. The ET generates realistic workloads based on production job arrival patterns and scales of data. It measures SLA compliance by calculating the distance between actual query completion times and specified SLAs. This provides a more meaningful metric than the current TPCx-BB metric. Experimental results on Apache Hive and Spark using the new ET and metric show significant differences from the current metric, highlighting weaknesses in elasticity and isolation. Future work includes testing database-as-a-service platforms and further study of specifying and incorporating SLAs into benchmarks.
Optimal Energy Storage System Operation for Peak ReductionDaisuke Kodaira
This document presents a study on using energy storage systems (ESS) for peak reduction on a distribution network. The key points are:
1. Two ESS batteries were installed and controlled remotely by the network operator to reduce peaks. Load and ESS schedules were optimized 24 hours ahead.
2. Accurate load prediction is challenging due to errors. Probabilistic prediction intervals (PIs) accounting for uncertainty were proposed to determine ESS schedules.
3. Different PI construction methods like sample base, confidence interval, and Chebyshev were evaluated. Confidence interval achieved the best yearly peak reduction while minimizing the coverage width-based criterion.
4. A modified objective function considering off-peak duration in
The document discusses energy-saving policies for grid-computing and smart environments. It analyzes seven energy policies for managing resource states in the Grid'5000 infrastructure to reduce energy consumption. The policies are tested through simulation and evaluated using data envelopment analysis. The best policy was found to save up to 162,000 euros, 318 tons of CO2, and 1,163,286 kWh per year for Grid'5000. Locations and policies are also compared to identify efficiency improvements needed based on the results.
Design and Implementation of Different types of Carry skip adderIRJET Journal
The document describes the design and implementation of different types of carry skip adders. It begins with an introduction to carry skip adders and their advantages over other adder types in terms of speed, area usage, and transistor count. It then reviews existing carry skip adder designs and their limitations. A new design called the Common Boolean Logic (CBL) carry skip adder is proposed that aims to reduce area and power consumption by eliminating redundant adder cells through shared logic. Simulation results show that an 8-bit CBL carry skip adder has 64.6% lower power and 18.7% smaller area than a conventional carry skip adder. In conclusion, the CBL carry skip adder achieves improved performance and efficiency.
Recent developments in the field of reduced order modeling - and in particular, active subspace construction - have made it possible to efficiently approximate complex models by constructing low-order response surfaces based upon a small subspace of the original high dimensional parameter space. These methods rely upon the fact that the response tends to vary more prominently in a few dominant directions defined by linear combinations of the original inputs, allowing for a rotation of the coordinate axis and a consequent transformation of the parameters. In this talk, we discuss a gradient free active subspace algorithm that is feasible for high dimensional parameter spaces where finite-difference techniques are impractical. We illustrate an initialized gradient-free active subspace algorithm for a neutronics example implemented with SCALE6.1.
IRJET- Optimal Generation Scheduling for Thermal UnitsIRJET Journal
This document summarizes a research paper that develops an optimal short-term generation scheduling for 10 generating units using particle swarm optimization (PSO). The scheduling problem is formulated to minimize operating costs while satisfying constraints like power balance, unit limits, minimum up/down times, and spinning reserve requirements. PSO is described as an evolutionary algorithm that finds the global best solution by updating particle velocities and positions based on the particle's own experience and the experience of neighboring particles. The steps of applying PSO to the scheduling problem are outlined, with particles initialized randomly within unit limits and then updated iteratively until an optimal schedule is found.
IRJET- Optimal Generation Scheduling for Thermal UnitsIRJET Journal
This document summarizes a research paper that develops an optimal short-term generation scheduling model for 10 generating units using particle swarm optimization (PSO). The objective is to minimize total operating costs including fuel costs and start-up costs while satisfying constraints like power balance, generator limits, minimum up/down times, and reserve requirements. PSO is applied to obtain the optimal scheduling by updating the velocity and position of "particles" representing generator outputs over iterations. Results show PSO efficiently finds near-optimal solutions and provides economic benefits compared to other techniques for solving short-term generation scheduling problems.
This document summarizes a project to reduce shell weight for BH521 6-cavity shells from 27kg to 25kg while maintaining low defect rates. Key actions included measuring current weights, analyzing factors like investment time and temperature, conducting experiments to optimize parameters, implementing controls at 65-70s and 240°C, and verifying the new process achieved an average weight of 25.5kg. Savings of 1.4kg resin per shell and an extra shell per hour were projected to yield annual savings of Rs. 5.88 lakhs. Controls were established for materials, times, and temperatures to sustain the improvement.
- Six Sigma is a quality methodology that aims for near perfection with 3.4 defects per million opportunities. It was developed by Motorola in 1987.
- Key concepts include process capability index (Cp), process variation, and specification limits. A Cp of 2.0 or higher is needed to achieve Six Sigma quality.
- The DMAIC methodology is used for improving existing processes and focuses on defining problems, measuring processes, analyzing causes, improving processes, and controlling future performance. DFSS designs new processes at Six Sigma quality levels using approaches like DMADV.
This document discusses the design of a green building with optimal solar energy generation and human comfort. It outlines the steps to size a solar photovoltaic system to meet the building's power needs. Computational fluid dynamics is used to model air flow and ventilation in the building. The modeling shows stagnation points and improved ventilation with the addition of solar chimneys and strategically placed vents and outlets.
This document outlines the schedule and content for an advanced econometrics and Stata training course taking place from October 17-26, 2019 in Beijing, China. The course will cover topics including single and multi-regression, hypothesis testing, panel data models, time series models, stochastic frontier analysis, data envelopment analysis, and difference-in-differences. Data envelopment analysis will be the focus of sessions 13 and 14, covering concepts such as efficiency measurement, variable returns to scale, and incorporating environmental variables.
Who doesn’t want more glass? The latest version of Ontario’s Building Energy Code, SB-10, limits the amount of glazing to 40%. SB-10 includes 3 prescriptive paths to compliance. All prescriptive solutions permit trade-offs and require them to be determined with approved software.
Gerry Conway will walk you through a project using a simple compliance tool, COMcheck, by which designers may determine effective R or U values and perform simple trade-offs. The trade-offs may be used to increase the allowable glazing above the 40% threshold. COMcheck is free, has the latest version of SB-10 built in, promotes collaboration with others on the project team through a web-based user interface and is well documented. With it the user may prepare a standalone compliance report for the building envelope or a complete building report covering all disciplines. COMcheck compliance reports are acceptable to ASHRAE and most authorities.
Gerry is an Ottawa based Architect whose practice is focused on building science, technology, codes, standards, practice tools, quality assurance and building envelope/energy performance. He fervently believes in a holistic approach to Architecture where beauty, utility and technology are harmoniously synthesized.
Gerry sits on the Engineers, Architects and Building Officials committee, the Coordinating Licensed Professional PEO-OAA Joint Task Group and the Board of BECOR. In 2017 he represented the OAA on the OBC Part 3 Technical Advisory Committee which included discussions regarding changes to Ontario’s Energy Code SB-10.
Gerry is a member of the OAA, past member of the American Institute of Architects and is a fellow of the RAIC.
This document discusses scheduling operations in manufacturing. It begins with the objectives of scheduling, including meeting due dates, maximizing resource utilization, and minimizing lateness and inventory. It then covers loading and sequencing operations, as well as monitoring progress. Advanced scheduling techniques like the Theory of Constraints are also summarized, focusing on identifying and managing bottlenecks. Finally, it briefly discusses employee scheduling and automated scheduling systems.
This document discusses key trade-offs in chip design including time, area, power, reliability, and configurability. It covers topics like cycle time, die area and cost, ideal and practical scaling, power consumption, and how these factors relate to processor design trade-offs between area, time and power. Key considerations in design include optimizing the pipeline for cycle time, minimizing die area and maximizing yield, accounting for the increasing dominance of wire delays over gate delays with scaling, and balancing dynamic and static power sources.
This document discusses production economics concepts including short-run and long-run production functions, marginal product, average product, returns to scale, and cost minimization. It provides examples of production functions, calculates elasticities of output, and discusses estimating production functions from data. Managers must choose production methods to minimize costs while economists use tools like production functions to evaluate efficiency.
Design of Compensators for Speed Control of DC Motor by using Bode Plot Techn...IRJET Journal
This document describes the design of different compensators for speed control of a DC motor using Bode plot techniques. It discusses lead, lag, and lag-lead compensators. The design procedure involves determining the uncompensated system response, specifying desired closed-loop specifications, calculating compensator parameters, and evaluating the compensated system response. As an example, lead, lag, and lag-lead compensators are designed for a sample system to meet different phase margin and velocity error specifications. Simulation results show the lead compensator improves transient response, lag improves steady-state response, and lag-lead improves both responses compared to the uncompensated system.
Similar to Ration-by-Weight of Efficiency and Equity (20)
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
1. Ration-by-Weight of Efficiency and Equity
A new allocation method in ground delay program planning
Rong Wang, David J. Lovell, Michael O. Ball
University of Maryland, College Park
2. Agenda
• Introduction: Background
• Introduction: Method of “Ration-by”
• Example of Three “Ration-by” Allocation Methods
• Practical Results of Three “Ration-by” Allocation Methods
• Ration-by-Weight of Efficiency and Equity (RBW) Method
• Practical Results of RBW Method
• Equity-based RBW (E-RBW)
• New Concept: Efficiency-Equity Ratio
• E-RBW Practical Results
• Solutions based on RBW
• Contribution & Conclusion
3. Introduction: Background
• Benefit of a GDP (Ground Delay Program): safer, costs less
• Scenario: what happens if a GDP is cancelled early
• Goal: a compromise between efficiency and equity
• This is a flight assignment problem
4. Introduction: Method of "Ration-by"
• The idea of the Ration-by method:
1. Set up a priority order for flights by a certain standard
2. Assign slots to flights according to that priority
• RBS: Ration-by-Scheduled Time of Arrival
• RBD: Ration-by-Distance
• Can we try Ration-by-Scheduled Time of Departure?
5. Example of Ration-by Allocation Methods
• Limit: STA ≤ CTA or the assigned slot time

[Diagrams: assignments of f1, f2, f3 to the 8:10, 8:20 and 8:30 slots under RBS, RBD and Ration-by-STD]

Flight | STA  | Length | STD
f1     | 8:00 | 60 min | 7:00
f2     | 8:05 | 80 min | 6:45
f3     | 8:10 | 83 min | 6:47

Priority order under each standard:
By STA (RBS): f1, f2, f3
By Length (RBD): f3, f2, f1
By STD: f2, f3, f1
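The three priority orders in the table above can be reproduced with a small sketch (illustrative field names, not the authors' code; times are minutes after midnight):

```python
# Example flights from this slide; STD is derived as STA - Length.
flights = [
    {"id": "f1", "sta": 8 * 60 + 0,  "length": 60},   # STA 8:00, 60 min -> STD 7:00
    {"id": "f2", "sta": 8 * 60 + 5,  "length": 80},   # STA 8:05, 80 min -> STD 6:45
    {"id": "f3", "sta": 8 * 60 + 10, "length": 83},   # STA 8:10, 83 min -> STD 6:47
]
for f in flights:
    f["std"] = f["sta"] - f["length"]

rbs = sorted(flights, key=lambda f: f["sta"])       # earliest scheduled arrival first
rbd = sorted(flights, key=lambda f: -f["length"])   # longest flight first
rb_std = sorted(flights, key=lambda f: f["std"])    # earliest scheduled departure first

print([f["id"] for f in rbs])     # ['f1', 'f2', 'f3']
print([f["id"] for f in rbd])     # ['f3', 'f2', 'f1']
print([f["id"] for f in rb_std])  # ['f2', 'f3', 'f1']
```

Each method is just a different sort key over the same flight set; the slot assignment then hands out slots in time order following that priority.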
6. Practical Results of Three Allocation Methods
• 4-hour GDP, cancelled 2 hours early
• Efficiency: total expected delay
• Equity: total positive deviation from RBS slot times
• Max deviation: maximum deviation from the RBS slot time of a single flight

Method        | Efficiency   | Equity       | Max deviation
RBD           | 2072 minutes | 2346 minutes | 244 minutes
Ration-by-STD | 2413 minutes | 1688 minutes | 82 minutes
RBS           | 2988 minutes | 0 minutes    | 0 minutes
7. Ration-by-Weight of Efficiency and Equity (RBW) Method
• STD = STA – Length
• w = k * STA – (1 – k) * Length
• Give priority to flights with small values of w
• Ration-by-Weight of efficiency and equity

k   | w         | Method
0   | –Length   | RBD
0.5 | 0.5 * STD | Ration-by-STD
1   | STA       | RBS
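The weight function can be sketched as follows (`rbw_order` and the field names are illustrative; the example reuses the flights from the earlier slide, with times in minutes after midnight). Note that k = 0.5 gives w = 0.5*STA – 0.5*Length = 0.5*STD, which is why it reproduces Ration-by-STD:

```python
# RBW priority: smaller w = k*STA - (1-k)*Length goes first.
def rbw_order(flights, k):
    return sorted(flights, key=lambda f: k * f["sta"] - (1 - k) * f["length"])

flights = [
    {"id": "f1", "sta": 480, "length": 60},   # STA 8:00
    {"id": "f2", "sta": 485, "length": 80},   # STA 8:05
    {"id": "f3", "sta": 490, "length": 83},   # STA 8:10
]

# k = 0 reduces to RBD (w = -Length), k = 1 to RBS (w = STA),
# and k = 0.5 to Ration-by-STD.
print([f["id"] for f in rbw_order(flights, 0.0)])  # ['f3', 'f2', 'f1']
print([f["id"] for f in rbw_order(flights, 0.5)])  # ['f2', 'f3', 'f1']
print([f["id"] for f in rbw_order(flights, 1.0)])  # ['f1', 'f2', 'f3']
```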
8. Practical Results of RBW Method
• With increasing k, total delay increases; equity and max deviation decrease monotonically.
• The earlier the cancellation, the lower the total delay.
• Max deviation can reach 244 minutes.
• When k > 0.7, max deviation ≤ 50 minutes.

[Figure 1: Efficiency — total delay vs. k, for no early cancellation and 1–4 hours early cancellation]
[Figure 2: Equity & max deviation vs. k]
9. Equity-Based RBW (E-RBW)
• Max deviation limit δ: slot time ≤ RBS slot time + δ
• f1, f2, f3 with increasing scheduled time of arrival.
• w2 < w3 < w1 for a certain k, so the priority queue is f2, f3, f1

[Diagram: step-by-step assignment of f1, f2, f3 to Slots 1–4 under E-RBW]
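The speaker notes describe E-RBW as pre-assigning each flight a deadline slot at its RBS + δ time and then running RBW from the first slot. A rough greedy sketch under that reading (my reconstruction, not the authors' implementation; names and tie-handling are my assumptions):

```python
# E-RBW sketch: RBW sets the priority queue, but no flight may be pushed
# more than `delta` minutes past its RBS slot time.
def e_rbw(flights, slots, k, delta):
    slots = sorted(slots)
    # RBS slot time per flight: flights in STA order take slots in time order.
    rbs_slot = {f["id"]: s for f, s in
                zip(sorted(flights, key=lambda f: f["sta"]), slots)}
    # RBW priority queue: smaller w = k*STA - (1-k)*Length goes first.
    queue = sorted(flights, key=lambda f: k * f["sta"] - (1 - k) * f["length"])
    assignment = {}
    for slot in slots:
        # A flight that has reached its RBS + delta deadline must take the slot;
        # otherwise the highest-priority flight able to use it (STA <= slot) does.
        forced = [f for f in queue if rbs_slot[f["id"]] + delta <= slot]
        candidates = forced or [f for f in queue if f["sta"] <= slot]
        if candidates:
            pick = candidates[0]
            assignment[pick["id"]] = slot
            queue.remove(pick)
    return assignment

flights = [
    {"id": "f1", "sta": 480, "length": 60},   # STA 8:00
    {"id": "f2", "sta": 485, "length": 80},   # STA 8:05
    {"id": "f3", "sta": 490, "length": 83},   # STA 8:10
]
slots = [490, 500, 510]  # 8:10, 8:20, 8:30

# A loose limit recovers the plain RBW order; delta = 0 collapses to RBS.
print(e_rbw(flights, slots, k=0.5, delta=999))  # {'f2': 490, 'f3': 500, 'f1': 510}
print(e_rbw(flights, slots, k=0.5, delta=0))    # {'f1': 490, 'f2': 500, 'f3': 510}
```

This sketch omits the "move down" repair step the notes mention for occupied slots; it only captures the deadline-forcing idea.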
10. Efficiency-Equity Ratio
• R = (d_RBS – efficiency) / equity, where d_RBS is the total delay under RBS
• R measures how valuable the slot exchanges are: if the flights in a GDP receive N minutes of additional delay in total, the total delay of the whole system decreases by R * N minutes
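As a worked example (my arithmetic, using the totals from the three-method comparison earlier in the deck, with d_RBS = 2988 minutes):

```python
# R = (d_RBS - efficiency) / equity for the two non-RBS methods.
d_rbs = 2988  # total delay under RBS (minutes)

for method, efficiency, equity in [("RBD", 2072, 2346),
                                   ("Ration-by-STD", 2413, 1688)]:
    r = (d_rbs - efficiency) / equity
    print(f"{method}: R = {r:.3f}")
# RBD: R = 0.390
# Ration-by-STD: R = 0.341
```

So each minute of inequity introduced by RBD buys roughly 0.39 minutes of system-wide delay reduction relative to RBS.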
11. E-RBW Practical Results
• The trends are the same as for RBW, but total delay and equity no longer change monotonically.
• The minimum total delay does not necessarily occur at k = 0.

[Figure 3: Efficiency — total delay vs. k, for no early cancellation and 1–4 hours early cancellation]
[Figure 4: Equity & max deviation (minutes) vs. k]
12. E-RBW Practical Results
• 3 hours early cancellation: max R = 0.565 at k = 0.74
• 2 hours early cancellation: max R = 0.275 at k = 0.85
• When a GDP is cancelled earlier, the efficiency-equity ratio is bigger.
• A higher max deviation limit gives a better efficiency-equity ratio.

[Figure 5: Max ratio (SFO, δ = 30) — efficiency-equity ratio vs. k for 2 and 3 hours early cancellation]
[Figure 6: Ratio at different δ — max deviation limits of 20, 30 and 50 minutes]
13. Which k Can Give Minimum Delay?
• Total delay decreases when a GDP is cancelled early.
• There is no rule for which values of k give the minimum delay at different cancellation times.

[Figure 7: k & minimum delay vs. early cancellation time in minutes (SFO)]
[Figure 8: k & minimum delay vs. early cancellation time in minutes (EWR)]
14. Which k Can Give Max Efficiency-Equity Ratio?
• δ = 30 minutes.
• Max ratios increase with earlier GDP cancellation times.
• For SFO airport, the interval of k is [0.7, 1] if we ignore two jumps.
• For EWR airport, the interval of k is [0.8, 1] if we ignore one jump.
• The interval of k depends on the airport.

[Figure 9: k & max ratio vs. early cancellation time in minutes (SFO)]
[Figure 10: k & max ratio vs. early cancellation time in minutes (EWR)]
15. Solutions based on RBW & E-RBW
• Give the weight of equity (k) or the weight of efficiency (1 – k) directly.
• Give a max deviation limit δ and choose the solution with minimum total delay.
• Give a max deviation limit δ and choose the solution with the maximum efficiency-equity ratio.
• Give a max deviation limit δ and choose a solution whose average delay is no more than a certain value.
16. Contributions
• E-RBW provides a robust framework for designing rationing methods based on a small parameter space
• A new metric (the efficiency-equity ratio) for measuring rationing method performance
• A more efficient implementation of E-RBD
17. Conclusions
• This is an easily implementable framework for rationing methods
• The design framework has been established, but further guidance is needed to understand the consequences of different design options
18. Acknowledgement
• I appreciate my parents' support from China. Dr. Michael O. Ball and Dr. David J. Lovell also contributed a lot to the research.
21. RBW vs. E-RBW
• When k = 0.85, we get the best efficiency-equity ratio for a 2-hour early cancellation time.

k = 0.85 | Efficiency | Equity | Max deviation | Efficiency-equity ratio
E-RBW    | 2667       | 1160   | 30            | 0.2753
RBW      | 2667       | 1170   | 42            | 0.2743
RBW      | 2673       | 1160   | 42            | 0.2716
RBW      | 2672       | 900    | 32            | 0.3511
RBW      | 2672       | 892    | 26            | 0.3543
22. RBW vs. E-RBW
• RBW can give some solutions with small total delay.
• In some regions, E-RBW gives better solutions in both efficiency and equity.

[Figure 11: Efficiency-equity pairs — efficiency 2100–2988 minutes, equity 0–2364 minutes]
[Figure 12: Part of the efficiency-equity pairs — efficiency 2640–2760, equity 950–1300]
25. k in Four Quadrants (RBW)

[Figure: efficiency-equity pairs for RBW divided into four quadrants — efficiency 2000–3500, equity 0–3000]
Editor's Notes
Three allocation methods from which we get the idea of the ration-by-weight of efficiency and equity method.
What is a ground delay program; weather.
Set up priority for flights by a certain standard, sorting flights according to that standard. Satisfy flights from high priority to low. Before this research we had two allocation methods: RBS and RBD.
1. Before this paper there were already two allocation methods; one is ration-by-scheduled time of arrival. 2. RBS gives priority to flights by their scheduled time of arrival, or STA in the table. 3. To simplify the explanation, in this example we assume slot times are later than all the flights' scheduled times. 4. We consider flight assignment in the RBS flight sequence, i.e. consider f1, then f2, then f3.
We define the efficiency of a GDP as the total expected delay of flights. We are not satisfied with a max deviation of 82 minutes; we want 30, 45…
Can we get more? Maybe we can add a parameter into the function… If we change k from 0 to 1…
We are interested in solutions with a small max deviation, for example 30 minutes. We are concerned with the max deviation of a single flight. (Figure legends: 1. total delay in minutes vs. k, for no early cancellation and 1–4 hours early cancellation; 2. equity in minutes, max deviation, vs. k.)
We can look at RBW as a natural way to control max deviation. We can get more solutions by using some tricks. The E-RBW algorithm is based on the idea that we pre-assign each flight to a slot based on its RBS + δ time, then execute the RBW operation from the first slot. If a flight is not assigned by its RBS + δ time, it will be permanently assigned to the slot at the RBS + δ time. For Slot 2, from the priority queue, it should be assigned to f3, but it is occupied by f1, so we need to check whether f1 can move down, since there is an empty slot after f2 is assigned. Here we assume f1 can move down, and Slot 2 becomes empty.
Before we see the practical results of RBW, let's take a look at the efficiency-equity ratio. We define…
(Figure legend: no early cancellation; 1–4 hours early cancellation.)
If some flights get additional delays, for example 200 minutes in total, the system decreases delay by R * 200. Is it worth doing slot exchanges, which bring inequity to some flights? The ratio difference is 0.06%, equity differs by more than 1000 minutes, and the total delay difference is 71 minutes. When k = 0.249, equity = 1444 and efficiency = 2596; when k = 0.851, equity = 1166 and efficiency = 2667.