The document describes an algorithm for solving the single-item profit-maximizing capacitated lot-size problem (PCLSP) with fixed prices and no set-up costs. The algorithm works as follows:
1. Calculate the optimal "chase demand" solution without capacity constraints.
2. If this solution is feasible, it is optimal. Otherwise, identify periods where capacity is exceeded.
3. Produce as close as possible before each violating period, shifting the excess into the nearest earlier periods with spare capacity so that total inventory is minimized, and repeat until all capacity constraints are satisfied.
Simple tests show the algorithm runs significantly faster than a commercial LP solver (CPLEX), making it useful for large problem instances or as a sub-routine in other applications.
The smile calibration problem is a mathematical conundrum in finance that has challenged quantitative analysts for decades. Through his research, Aitor Muguruza has discovered a novel resolution to this classic problem.
Accelerating Pseudo-Marginal MCMC using Gaussian Processes – Matt Moores
The grouped independence Metropolis-Hastings (GIMH) and Markov chain within Metropolis (MCWM) algorithms are pseudo-marginal methods used to perform Bayesian inference in latent variable models. These methods replace intractable likelihood calculations with unbiased estimates within Markov chain Monte Carlo algorithms. The GIMH method has the posterior of interest as its limiting distribution, but suffers from poor mixing if it is too computationally intensive to obtain high-precision likelihood estimates. The MCWM algorithm has better mixing properties, but less theoretical support. In this paper we accelerate the GIMH method by using a Gaussian process (GP) approximation to the log-likelihood and train this GP using a short pilot run of the MCWM algorithm. Our new method, GP-GIMH, is illustrated on simulated data from a stochastic volatility and a gene network model. Our approach produces reasonable estimates of the univariate and bivariate posterior distributions, and the posterior correlation matrix in these examples with at least an order of magnitude improvement in computing time.
Context-Aware Recommender System Based on Boolean Matrix Factorisation – Dmitrii Ignatov
In this work we propose and study an approach for collaborative filtering which is based on Boolean matrix factorisation and exploits additional (context) information about users and items. To avoid loss of similarity information in the Boolean representation, we use an adjusted type of projection of a target user onto the obtained factor space.
We have compared the proposed method with an SVD-based approach on the MovieLens dataset. The experiments demonstrate that the proposed method achieves better MAE and Precision, and comparable Recall and F-measure. We also report an increase in quality when context information is present.
We combined low-rank tensor techniques with the FFT to compute kriging estimates, estimate variance, and compute conditional covariance. We are able to solve 3D problems at very high resolution.
A One-Pass Triclustering Approach: Is There any Room for Big Data? – Dmitrii Ignatov
An efficient one-pass online algorithm for triclustering of binary data (triadic formal contexts) is proposed. This algorithm is a modified version of the basic algorithm for the OAC-triclustering approach, but it has linear time and memory complexity with respect to the cardinality of the underlying ternary relation and can be easily parallelized for the analysis of big datasets. The results of computer experiments show the efficiency of the proposed algorithm.
Information-theoretic clustering with applications – Frank Nielsen
Abstract: Clustering is a fundamental and key primitive to discover structural groups of homogeneous data in data sets, called clusters. The most famous clustering technique is the celebrated k-means clustering that seeks to minimize the sum of intra-cluster variances. k-Means is NP-hard as soon as the dimension and the number of clusters are both greater than 1. In the first part of the talk, we first present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means but also other kinds of clustering algorithms like the k-medoids, the k-medians, the k-centers, etc.
We extend the method to incorporate cluster size constraints and show how to choose the appropriate number of clusters using model selection. We then illustrate and refine the method on two case studies: 1D Bregman clustering and univariate statistical mixture learning maximizing the complete likelihood. In the second part of the talk, we introduce a generalization of k-means to cluster sets of histograms that has become an important ingredient of modern information processing due to the success of the bag-of-words modelling paradigm.
Clustering histograms can be performed using the celebrated k-means centroid-based algorithm. We consider the Jeffreys divergence that symmetrizes the Kullback-Leibler divergence, and investigate the computation of Jeffreys centroids. We prove that the Jeffreys centroid can be expressed analytically using the Lambert W function for positive histograms. We then show how to obtain a fast guaranteed approximation when dealing with frequency histograms and conclude with some remarks on the k-means histogram clustering.
References:
- Optimal interval clustering: Application to Bregman clustering and statistical mixture learning. IEEE ISIT 2014 (recent result poster). http://arxiv.org/abs/1403.2485
- Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Processing Letters 20(7): 657–660 (2013). http://arxiv.org/abs/1303.7286
http://www.i.kyoto-u.ac.jp/informatics-seminar/
Moment Preserving Approximation of Independent Components for the Reconstruct... – rahulmonikasharma
The application of Independent Component Analysis (ICA) has found considerable success in problems where sets of observed time series may be considered as results of linearly mixed instantaneous source signals. The Independent Components (ICs) or features can be used in the reconstruction of observed multivariate time series following an optimal ordering process. For trend discovery and forecasting, the generated ICs can be approximated for the purpose of noise removal and for the lossy compression of the signals. We propose a moment-preserving (MP) methodology for approximating ICs for the reconstruction of multivariate time series. The methodology is based on deriving the approximation in the signal domain while preserving a finite number of geometric moments in its Fourier domain. Experimental results are presented on the approximation of both artificial time series and actual time series of currency exchange rates. Our results show that the moment-preserving (MP) approximations of time series are superior to other usual interpolation approximation methods, particularly when the signals contain significant noise components. The results also indicate that the present MP approximations have significantly higher reconstruction accuracy and can be used successfully for signal denoising while at the same time achieving high packing ratios. Moreover, we find that quite acceptable reconstructions of observed multivariate time series can be obtained with only the first few MP-approximated ICs.
An Inventory Management System for Deteriorating Items with Ramp Type and Qua... – ijsc
The present paper deals with an inventory management system with ramp-type and quadratic demand rates. A constant deterioration rate is incorporated into the model. For the two types of models, the optimum time and total cost are derived when demand is ramp type and when it is quadratic. A structural comparative study is demonstrated by illustrating the model with sensitivity analysis.
HJB Equation and Merton's Portfolio Problem – Ashwin Rao
Deriving the solution to Merton's Portfolio Problem (optimal asset allocation and consumption) using the elegant formulation of the Hamilton-Jacobi-Bellman equation.
My talk at the "15th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing " MCQMC conference at Johannes Kepler Universität Linz, July 20, 2022, about my recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation."
Sequential quasi-Monte Carlo (SQMC) is a quasi-Monte Carlo (QMC) version of sequential Monte Carlo (or particle filtering), a popular class of Monte Carlo techniques used to carry out inference in state space models. In this talk I will first review the SQMC methodology as well as some theoretical results. Although SQMC converges faster than the usual Monte Carlo error rate, its performance deteriorates quickly as the dimension of the hidden variable increases. However, I will show with an example that SQMC may perform well for some "high" dimensional problems. I will conclude this talk with some open problems and potential applications of SQMC in complicated settings.
Algorithm and its Properties
Computational Complexity
Time Complexity
Space Complexity
Complexity Analysis and Asymptotic Notations
Big-O Notation (O)
Omega Notation (Ω)
Theta Notation (Θ)
The Best, Average, and Worst Case Analyses
Complexity Analysis Examples
Comparing Growth Rates
My talk at the International Conference on Monte Carlo Methods and Applications (MCM 2023), related to advances in mathematical aspects of stochastic simulation and Monte Carlo methods, at Sorbonne Université, June 28, 2023, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://doi.org/10.1080/14697688.2022.2135455), and (ii) "Multilevel Monte Carlo with Numerical Smoothing for Robust and Efficient Computation of Probabilities and Densities" (link: https://arxiv.org/abs/2003.05708).
Production decline analysis is a traditional means of identifying well production problems and predicting well performance and life based on real production data. It uses empirical decline models that have little fundamental justification. These models include (see the sketch after this list):
• Exponential decline (constant fractional decline)
• Harmonic decline
• Hyperbolic decline
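As a rough illustration, the three classical Arps decline-curve forms can be written in a few lines of Python. This is only a sketch: the initial rate qi, decline constant Di, hyperbolic exponent b and time grid below are invented values, not data from any particular well.

```python
import numpy as np

def exponential_decline(qi, Di, t):
    # Constant fractional decline: q(t) = qi * exp(-Di * t)
    return qi * np.exp(-Di * t)

def harmonic_decline(qi, Di, t):
    # Harmonic decline (hyperbolic with b = 1): q(t) = qi / (1 + Di * t)
    return qi / (1.0 + Di * t)

def hyperbolic_decline(qi, Di, b, t):
    # Hyperbolic decline: q(t) = qi / (1 + b * Di * t)**(1/b), with 0 < b < 1
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

t = np.linspace(0.0, 10.0, 6)  # illustrative time grid (years)
print(exponential_decline(1000.0, 0.3, t))
print(harmonic_decline(1000.0, 0.3, t))
print(hyperbolic_decline(1000.0, 0.3, 0.5, t))
```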
Scalable inference for a full multivariate stochastic volatility – SYRTO Project
P. Dellaportas, A. Plataniotis and M. Titsias – UCL (London), AUEB (Athens), AUEB (Athens)
Final SYRTO Conference – Université Paris 1 Panthéon-Sorbonne, February 19, 2016
Research internship on optimal stochastic theory with financial application u... – Asma Ben Slimene
This is a presentation of my second-year internship on optimal stochastic theory, how it can be applied to some financial applications, and how such problems can be solved using finite difference methods!
Enjoy it!
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
1. The Single Item profit maximizing capacitated lot-size (PCLSP) problem with fixed prices and no set-up

by Kjetil K. Haugen 1),*), Åsmund Olstad 1), Krystsina Bakhrankova 1) and Erik Van Eikenhorst 2)

1) Molde University College, Norway
2) University of Edinburgh, UK
*) E-mail: Kjetil.Haugen@hiMolde.no

16th International Scientific Conference on Mathematical Methods in Economy and Industry
České Budějovice, June 15-18, 2009
2. Idea – Abstract

• Even though modern LP-solvers (and computers) are extremely efficient, fast specialized sub-problem solvers may be of interest.

• Here we focus on an LP arising as a typical sub-problem in Dynamic Pricing problems.

• We demonstrate the algorithmic development and conclude with some simple speed tests, demonstrating computational efficiency.
3. Background

Haugen, Olstad and Pettersen defined the PCLSP problem in:

1) K. K. Haugen, A. Olstad, and B. I. Pettersen. The profit maximizing capacitated lot-size (PCLSP) problem. European Journal of Operational Research, 176:165–176, 2007.

2) K. K. Haugen, A. Olstad, and B. I. Pettersen. Solving large-scale profit maximization capacitated lot-size problems by heuristic methods. Journal of Mathematical Modelling and Algorithms, 6(1):135–149, 2007.
5. PCLSP – variables and constants

Variables:
d_jt = demand for item j in period t
p_jt = price of item j in period t
x_jt = amount of item j produced in t
I_jt = inventory of item j between t and t + 1
δ_jt = 1 if item j is produced in period t, 0 otherwise

Constants:
α_jt = demand constant, for item j at t
β_jt = demand slope, for item j at t
T = number of time periods
J = number of items
s_jt = setup cost for item j in period t
h_jt = storage cost, item j between t and t + 1
c_jt = unit production cost, item j at t
a_jt = resource used, item j at t
R_t = capacity resource available at t
M_jt = Σ_{s=t}^{T} d_js
6. Single item – negligible set-up costs

Many modern production settings (JIT) involve negligible set-up costs (and times). In the previous model we hence focus on a version with J = 1 (single item) and s_jt ≈ 0 (negligible set-up costs).

Hence, removal of demand variables (d_jt) by substitution gives:

Max Z = Σ_{t=1}^{T} [(α_t − β_t·p_t)·p_t − h_t·I_t − c_t·x_t]   (9)

s.t.
a_t·x_t ≤ R_t   ∀t   (10)
x_t + I_{t−1} − I_t = α_t − β_t·p_t   ∀t   (11)
x_t ≥ 0   ∀t   (12)
I_t ≥ 0   ∀t   (13)
α_t/β_t ≥ p_t ≥ 0   ∀t   (14)
7. Simplifying assumptions

• Capacity constraint: Without loss of generality, equation (10) can be substituted with x_t ≤ R̂_t, where R̂_t = R_t / a_t.

• Given prices: If we assume that all prices p_1, ..., p_T are given, let us say by p̂_1, ..., p̂_T, the objective (9) can be rewritten as:

Max Z = Σ_{t=1}^{T} (α_t − β_t·p̂_t)·p̂_t − Σ_{t=1}^{T} [h_t·I_t + c_t·x_t] = C − Σ_{t=1}^{T} [h_t·I_t + c_t·x_t]   (15)

or

Min Ẑ = Σ_{t=1}^{T} [h_t·I_t + c_t·x_t]   (16)
8. The reformulated LP

Additionally, defining

D̂_t = α_t − β_t·p̂_t   (17)

problem (9) – (14) may be redefined as the following LP problem:

Min Ẑ = Σ_{t=1}^{T} [h_t·I_t + c_t·x_t]   (18)

s.t.
x_t ≤ R̂_t   ∀t   (19)
x_t + I_{t−1} − I_t = D̂_t   ∀t   (20)
x_t ≥ 0   ∀t   (21)
I_t ≥ 0   ∀t   (22)
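As a concrete cross-check, the LP (18) – (22) can be stated directly with scipy.optimize.linprog. This is a minimal sketch: the instance data (T, h_t, c_t, D̂_t, R̂_t, I_0) are invented for illustration, and the variable ordering is my own.

```python
import numpy as np
from scipy.optimize import linprog

# Invented illustrative instance.
T = 4
h = np.array([1.0, 1.0, 1.0, 1.0])   # storage costs h_t
c = np.array([2.0, 2.0, 2.0, 2.0])   # unit production costs c_t
D = np.array([3.0, 6.0, 2.0, 6.0])   # fixed-price demands D_hat_t
R = np.array([5.0, 5.0, 5.0, 5.0])   # normalized capacities R_hat_t
I0 = 0.0                             # given initial inventory

# Decision vector z = (x_1..x_T, I_1..I_T); objective (18): min c'x + h'I.
cost = np.concatenate([c, h])

# Flow balance (20): x_t + I_{t-1} - I_t = D_hat_t for all t.
A_eq = np.zeros((T, 2 * T))
for t in range(T):
    A_eq[t, t] = 1.0               # x_t
    A_eq[t, T + t] = -1.0          # -I_t
    if t > 0:
        A_eq[t, T + t - 1] = 1.0   # +I_{t-1}
b_eq = D.copy()
b_eq[0] -= I0                      # move the known I_0 to the right-hand side

# (19), (21), (22): 0 <= x_t <= R_hat_t and I_t >= 0.
bounds = [(0.0, float(R[t])) for t in range(T)] + [(0.0, None)] * T

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("x* =", res.x[:T])   # optimal production plan
print("I* =", res.x[T:])   # optimal inventories
```

For this instance the minimum-inventory plan is x* = (4, 5, 3, 5), which the greedy algorithm sketched further below reproduces.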
9. Assumptions on c and h

Logistics problems of this type ("lot-sizing") will typically not have a very long time horizon. Consequently, making assumptions on the stability of production and storage costs seems reasonable. We assume the following:

c_1 = c_2 = ... = c_T = c   (23)

and

h_1 = h_2 = ... = h_T = h   (24)
10. Minimization of total inventory

Utilizing assumptions (23), (24), the objective (16) may be expressed as:

Σ_{t=1}^{T} [h_t·I_t + c_t·x_t] = h·Σ_{t=1}^{T} I_t + c·Σ_{t=1}^{T} x_t   (25)

Next, it is straightforward to realize, by summing the left- and right-hand sides of equation (20) over all t, that:

Σ_{t=1}^{T} x_t = I_T − I_0 + Σ_{t=1}^{T} D̂_t   (26)

The right-hand side of equation (26) is a constant (I_0 is given, and an optimal solution carries no terminal inventory, so I_T = 0), as are h and c, giving:

Min Z̄ = Σ_{t=1}^{T} I_t   (27)
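Spelled out, (26) is just the telescoping sum of the flow-balance constraints (20):

```latex
\sum_{t=1}^{T}\left(x_t + I_{t-1} - I_t\right)
  = \sum_{t=1}^{T} x_t + I_0 - I_T
  = \sum_{t=1}^{T} \hat{D}_t
\quad\Longrightarrow\quad
\sum_{t=1}^{T} x_t = I_T - I_0 + \sum_{t=1}^{T} \hat{D}_t .
```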
11. The algorithmic logic

• Now, suppose we relax the capacity constraints (19). Then the optimal solution to the LP (18) – (22) is obvious (a "Chase Demand" or "JIT" strategy):

x*_t = D̂_t and I*_t = 0, ∀t   (28)

• Taking the capacity constraints back into consideration, it is likewise obvious that any period where (19) binds must lead to production as close to this period as possible, in order to minimize total inventory.
12. The algorithm

• Summing up, the algorithm can be described verbally as follows: Start out with the JIT solution. If it is feasible, it is also optimal. If infeasible, run through all infeasible points (i.e. all periods where x*_t > R̂_t) and utilize the "closest" possible available production capacity to remove the infeasibilities.

• A formal version:

0. LET x*_t = D̂_t, ∀t
1. IF x*_t ≤ R̂_t, ∀t: STOP (x* is optimal)
2. IF the next period is T + 1: STOP
3. ELSE find the next period τ where x*_τ > R̂_τ and produce a total of x*_τ − R̂_τ in previous periods τ−1, τ−2, ..., as close as possible to τ. (If impossible, the problem is infeasible: STOP)
4. SET x*_τ = R̂_τ and update x*_{τ−1}, x*_{τ−2}, ... correspondingly
5. GOTO 2.
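The slides mention a Fortran 95 implementation; purely as an illustration, here is a minimal Python sketch of steps 0 – 5 above (the function name and the greedy bookkeeping are mine, not the authors'):

```python
def pclsp_single_item(D_hat, R_hat):
    """Greedy solver sketch for LP (18)-(22) with constant h and c.

    D_hat[t]: fixed-price demand in period t (0-indexed)
    R_hat[t]: normalized capacity R_t / a_t in period t
    Returns a production plan x*, or None if the problem is infeasible.
    """
    T = len(D_hat)
    x = list(D_hat)                    # step 0: JIT / chase-demand start
    for tau in range(T):               # steps 1-2: scan the periods in order
        excess = x[tau] - R_hat[tau]
        if excess <= 0:
            continue                   # constraint (19) holds in this period
        x[tau] = R_hat[tau]            # step 4: produce at capacity in tau
        # Step 3: push the excess into the closest earlier periods with slack.
        t = tau - 1
        while excess > 0 and t >= 0:
            moved = min(R_hat[t] - x[t], excess)
            x[t] += moved
            excess -= moved
            t -= 1
        if excess > 0:
            return None                # no earlier capacity left: infeasible
    return x
```

On the invented instance from the LP sketch above, pclsp_single_item([3, 6, 2, 6], [5, 5, 5, 5]) returns [4, 5, 3, 5], matching the linprog solution.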
13. Relaxing cost assumptions further

i) c_1 = c_2 = ... = c_T = c, while h_1, h_2, ..., h_T may differ

The previous arguments hold, with a similar mathematical reformulation. However, the final objective changes from total inventory to total inventory costs:

Min Ẑ = Σ_{t=1}^{T} h_t·I_t   (29)

Obviously, the algorithm still holds: there is no point in moving production to an earlier period than the closest possible one, as total inventory costs must increase under such a strategy.
14. Relaxing cost assumptions further

ii) c_1 > c_2 > ... > c_T and h_1 = h_2 = ... = h_T

In this case, which should be quite natural – productivity should increase over time – the algorithm must also hold. Again, as production costs are larger if we move back in time, it must be optimal to produce as close to the capacity violation as possible. Both total inventory and production costs are then minimized.
15. Relaxing cost assumptions further

iii) c_t / h_t = constant = c  ⇒  c_t = c·h_t

In most reasonably competitive markets, the value of a product is proportional to its production cost. Of course, in a perfectly competitive market, price equals marginal cost, and the above assumption is "correct" if the main contribution to inventory costs is due to storage value – as most inventory experts assume.

Surely, such an assumption also opens up for increasing production costs, which in certain situations may be predictable – wage increases, economic growth, etc.
16. Algorithmic consequences if c_t = c·h_t

Rewriting (20) as:

x_t = D̂_t + I_t − I_{t−1}   (30)

and substituting c_t = c·h_t into the objective (18) yields:

Ẑ = Σ_{t=1}^{T} [h_t·I_t + c·h_t·(D̂_t + I_t − I_{t−1})]   (31)

Now, assuming a given initial inventory I_0 and eliminating constant terms, the objective Ẑ above may be replaced by the following:
17.

Z = Σ_{t=1}^{T} ĥ_t·I_t   (32)

where

ĥ_t = (c + 1)·h_t − c·h_{t+1}, with h_{T+1} = 0   (33)

Finally, comparing the objective Z of equation (32) with Ẑ of equation (29), we observe structural equality, and our algorithm would also work for the case with a constant ratio between production and inventory costs.
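For completeness, here is the index-shift step behind (32) – (33), starting from (31), with I_0 given and all constant terms collected into "const":

```latex
\hat{Z} = \sum_{t=1}^{T}\left[h_t I_t + c\,h_t\left(\hat{D}_t + I_t - I_{t-1}\right)\right]
        = \mathrm{const} + (c+1)\sum_{t=1}^{T} h_t I_t - c\sum_{t=1}^{T} h_t I_{t-1}
% Shift the index in the last sum (h_1 I_0 is a constant):
\sum_{t=1}^{T} h_t I_{t-1} = h_1 I_0 + \sum_{t=1}^{T-1} h_{t+1} I_t
% Hence, defining h_{T+1} = 0,
\hat{Z} = \mathrm{const} + \sum_{t=1}^{T}\left[(c+1)\,h_t - c\,h_{t+1}\right] I_t
        = \mathrm{const} + \sum_{t=1}^{T} \hat{h}_t\, I_t
```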
18. Some simple numerical experiments

The previously defined algorithm was implemented in Fortran 95 and executed and compared with state-of-the-art commercial LP software (CPLEX) on a modern PC. The table below shows the results (CPU secs.).

              T = 10k    T = 100k    T = 1m
CPLEX          0.219      1.766      31.156
Algorithm      0.031      0.093       0.672
Change (%)     700 %     1893 %      4637 %