Stock Decomposition Heuristic for Scheduling: A Priority Dispatch Rule Approach
 




Jeffrey Dean Kelly
Industrial Algorithms LLC., 15 St. Andrews Road, Toronto, Ontario, Canada, M1P 4C3
E-mail: jdkelly@industrialgorithms.ca

Keywords: priority dispatch rules, primal heuristics, topological sorting, recycles, transshipment, network compression and mixed integer linear programming.
Abstract

Highlighted in this article is a closed-shop scheduling heuristic which makes use of the traditional priority dispatch rule approach found in open-shop scheduling such as job-shop scheduling. Instead of prioritizing and scheduling one job or project (or stock-order) at a time, we schedule one stock or stock-group at a time, where a stock-group is a collection of individual stocks and their one or more stock-orders. These stocks can be feed-stocks, intermediate-stocks or product-stocks, of which we focus on product-stocks given that most production is demand-driven. A key feature of this heuristic is our ability to compress the production network or superstructure so that only those unit-operations necessary to produce the stocks in question are included in the model, thus reducing the size of the problem considerably at each iteration of the heuristic. The stock-specific network compression technique uses what we call a unit-capacity transshipment linear program to successively determine which unit-operations are redundant when making a particular stock. This heuristic is also particularly useful for those process industries that can potentially produce many product-stocks but only a fraction of these are produced within the scheduling horizon, whereby the model is significantly reduced at solve time to include only those stocks that are demanded and redundant unit-operations are removed. An illustrative example is provided with recycle loops (i.e., stock flow-reversals) and shared units or equipment (i.e., unit flow-reversals) that demonstrates the effectiveness and efficiency of the technique.

Introduction

Heuristics or rules-of-thumb are a valuable tool when solving large combinatorial problems, of which production scheduling in the process industries is an example due to its mix of both continuous and discrete decision-making.
In the process industries there are three key dimensions defining production or manufacturing: quantity, logic and quality (Kelly (2003a) and Kelly and Mann (2003)). The quantity and logic dimensions form a production-logistics sub-problem (or production-physics) and the quantity and quality dimensions form a production-quality sub-problem (or production-chemistry), where it is the logistics sub-problem that is the focus of this study. The quality sub-problem can, however, benefit directly from the approach taken here. In the logistics sub-problem the continuous variables are the flows and holdups, and the discrete decision variables are the mode, material and move logic (binary) decision variables. Mode logic variables are identical to the unit-task assignment binary variables found in the state-task network (STN) of Kondili et al. (1993). Material logic variables are similar except that they are for storage-vessels such as tanks, and move logic variables are the explicit binary variables modeling the flow or no-flow decisions between unit-tasks via states, or in our syntax from one unit-operation to another unit-operation (see Kelly (2004a) for details). Several successful heuristics and decomposition techniques applied specifically to production scheduling problems found in the process industries can be found in Kudva et al. (1994), Herrmann et al. (1995), Basset et al. (1996), Blomer and Gunther (2000), Jain and Grossmann (2001), Kelly (2002), Wu and Ierapetritou (2003), Kelly (2003b) and Kelly (2004b). All heuristics, especially
those known as primal heuristics, try to use in some way the solutions obtained from one or more linear programs (LP) or mixed integer linear programs (MILP). Primal heuristics also try to divide the original problem into smaller, more tractable sub-problems in order to reduce the number of binary variables which must be searched over. The approach taken by Kudva et al. (1994) is the closest to our heuristic, although their approach did not use a MILP but a homegrown search technique with unit-operation and stock linked-lists based on the STN formulation. In their useful heuristic they took each individual order for a product-stock and scheduled the orders greedily one at a time to build up reasonably good solutions (i.e., a constructive search). An improved solution was then explored by splitting and merging orders, batches or charges (i.e., an improvement search) where suitable. Their heuristic started from the product-stock demand-orders and moved backwards successively through the material-flow-path (upstream into the flowsheet), determining intermediate-stock production-orders until only external feed-stock supply-orders are satisfied. The idea here is similar except that we employ MILP and we take one product-stock (or one group of product-stocks) at a time, which may have one or more orders attached over the scheduling horizon. The selection of which stock or stock-group (note that stock-groups will be referred to throughout the remainder of the article) to schedule next in succession is determined using one of eight priority dispatch rules. Panwalker and Iskander (1977) provide a useful survey of published scheduling rules found up to 1977 and it is still a relevant and comprehensive list today. Details on the specific rules related to this work are found in the second section to follow.
Before we describe the stock decomposition heuristic we need to elaborate on our method to remove or eliminate from the production network all but those unit-operations that are required to produce the stock-group in question; this is not to say that these removed unit-operations are not required for other product-stocks. The technique to follow is also valuable for production networks which have many stages in their production process. In other words, when the degree-of-separation or arity is large between feed-stocks and product-stocks, it becomes more difficult to identify which unit-operations are redundant for any particular product-stock production; our technique below handles this issue.

Network Compression Technique Described using UCTLP

When we schedule one stock-group at a time, the unit-operations that are not required to produce any of the stocks in the stock-group can have their binary variables temporarily removed or set to zero, thus significantly reducing the size of the MILP sub-problem. We do this by focusing exclusively on the connectivity of the production as opposed to the capacity of the production. The product of connectivity and capacity is what we call the production's capability. By connectivity we mean that if a unit-operation is connected in some way to the stocks in the group then the unit-operation is retained, else it is removed. When determining this connectivity, all unit-operations are modeled as simple flow-in minus flow-out equal to zero nodes (i.e., they obey the law of conservation of matter with no accumulation). Thus, batch-units and storage-vessels are assumed to have zero capacity and are modeled as simple flow-through units with no holdup. From a graph theory perspective, our technique is really an LP
implementation of determining the reachability (Bang-Jensen and Gutin (2000)) of the production network with respect to each externally demanded product-stock and documenting the unit-operations that are in some way connected or can be reached via what is known as a path. At the heart of the method is our unit-capacity transshipment linear program (UCTLP), similar to the transportation or network flow problems found in Dolan and Aldous (1993) with only one time-period defined, where transshipment problems can be easily converted into transportation problems. To be specific, this is a static network as opposed to a dynamic or time-expanded network required to perform the actual scheduling of production (Fleischer (2001) and Langkau (2003)). All internal flows between unit-operations and other unit-operations have a lower bound of zero and an upper bound of unity or one, hence the term unit-capacity, where transshipment means a node with at least one inlet and one outlet flow. All flows directly connected to an external demand-point or sink (i.e., not a transshipment node because there is only in-flow and no out-flow) also have a lower bound of zero and an upper bound of one if there is at least one non-zero quantity demanded for the product-stock sometime over the scheduling horizon. The externally supplied feed-stocks or sources have a lower bound of zero and an upper bound equal to the number of product-stocks demanded. We supply a larger upper bound on the supply of feed-stocks in order not to make these limiting in the UCTLP. We maximize all of the internal flows from supply and demand-points through transshipment nodes. At any LP solution, found using the very efficient dual simplex method for transportation problems, if any flow becomes non-zero then we can safely assume that there is a definite connection from the supply to the demand for that particular set of product-stocks in question, and these flows are removed (ignored) from the objective function.
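Since the UCTLP is, from the graph theory perspective above, a reachability computation, its retain-or-remove outcome for an acyclic network can be sketched with plain forward and backward graph searches. The sketch below is a stand-in for the LP, not the paper's implementation; the arc list, node names and the `retained_units` helper are illustrative assumptions.

```python
from collections import defaultdict, deque

def reachable(arcs, starts, reverse=False):
    """Breadth-first set of nodes reachable from `starts`; with
    reverse=True it returns the nodes that can reach `starts`."""
    adj = defaultdict(list)
    for u, v in arcs:
        if reverse:
            adj[v].append(u)
        else:
            adj[u].append(v)
    seen, queue = set(starts), deque(starts)
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen

def retained_units(arcs, sources, demanded_sink):
    """A unit-operation is retained for one demanded product-stock sink
    exactly when it lies on some source-to-sink path, i.e. it is both
    reachable from a feed-stock source and can itself reach the sink."""
    return reachable(arcs, sources) & reachable(arcs, [demanded_sink], reverse=True)
```

For a toy network where reactor R1 feeds both sinks P1 and P2, asking for P1 retains the units upstream of R1 but correctly drops the P2 sink, mirroring the compression the UCTLP performs.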
This, however, does not account for recycles or stock flow-reversals in the network, which we deal with rigorously below. We then perform a series or sequence of LPs with different objective functions (i.e., the constraints and variables stay the same but previously found non-zero flows are no longer maximized) until the objective function value equals exactly zero, and the method terminates even with recycles. Based on the internal flows, unit-operations are determined to be either retained, or removed if none of their directly connected internal flows are connected to the stock in question. A retained unit-operation has a quantity and a logic (binary) variable generated for all time-periods within the scheduling horizon, whereas all removed unit-operations have their quantity and logic variables set to zero or not generated. Recycles cause a special problem for the UCTLP, although there is an effective way to overcome their effect. Topological sorting or acyclic ordering (Knuth (1973) and Bang-Jensen and Gutin (2000)) also suffers from the same problem when cycles appear in the network. For instance, in a directed graph or digraph such as our production network, a topological order is a linear ordering of all of its nodes so that if there is an arc or flow from unit-operation or node u to node v, then u will always appear before v in the linear ordering. Unfortunately, if the network contains even one cycle the topological ordering cannot be achieved. It is possible to locate and identify cycles by choosing any node and walking backwards, keeping track of whether the same node has
been encountered. Here we offer less of a computer programming approach and more of a mathematical programming approach, again using the UCTLP. If we fix all of the external supply and demand-orders to zero and again maximize all of the internal flows, then if any internal flow is non-zero we have at least one recycle loop (i.e., a detection technique). We can then find individual recycle loops in the network by setting one of the internal flows found to be non-zero to zero and solving the UCTLP again. All internal flows that are driven to zero because they are part of the same cycle as the internal flow set to zero can be recorded as the recycle loop, and so on. The UCTLP to compress the network can then be revisited by breaking or tearing the recycle loop, for example by adding a temporary storage-vessel connected to only one of the internal flows included in the recycle; this storage-vessel must be removed after the stock network compression is completed. This will properly determine which of the unit-operations potentially involved in a recycle can be removed based on connectivity reasoning only. It should also be pointed out that the UCTLP method includes all flows along paths and around cycles (i.e., a specific collection of arcs in a directed graph) that make it possible to produce a particular set of product-stocks. If there is more than one path or cycle involved in producing stock A, which is very common in the process industries, then all of these connected arcs or internal flows will be properly identified. This is an important feature given that we require all of the production's capability when scheduling.
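The zero-supply, zero-demand detection pass above amounts to asking whether the internal-flow digraph contains a directed cycle. A graph-search stand-in for that LP test is sketched below; the depth-first `find_cycle` helper and its arc list are illustrative assumptions, not the paper's UCTLP.

```python
def find_cycle(arcs):
    """Return one directed cycle as a list of nodes, or None if the
    internal-flow digraph is acyclic (a graph-search stand-in for the
    zero-supply/zero-demand UCTLP recycle-detection pass)."""
    adj = {}
    for u, v in arcs:
        adj.setdefault(u, []).append(v)
    state = {}  # 1 = on the current search path, 2 = fully explored

    def dfs(n, path):
        state[n] = 1
        path.append(n)
        for m in adj.get(n, []):
            if state.get(m) == 1:
                return path[path.index(m):]  # back-arc closes a cycle
            if m not in state:
                cyc = dfs(m, path)
                if cyc:
                    return cyc
        state[n] = 2
        path.pop()
        return None

    for node in list(adj):
        if node not in state:
            cyc = dfs(node, [])
            if cyc:
                return cyc
    return None
```

On a small recycle such as S1 -> T6 -> R2 -> S1 the search reports the loop, which can then be torn with a temporary storage-vessel exactly as described above.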
Stock Decomposition Heuristic Described with Priority Dispatch Rules

Our stock decomposition heuristic (SDH) is actually relatively intuitive in light of how discrete-part manufacturers or software development houses schedule their businesses, except that we aggregate individual stock-orders (or jobs or projects) into stock-groups. As mentioned, it is also similar to the heuristic provided by Kudva et al. (1994) except that again we focus on the stock-group and not on an individual stock-order, in addition to using several MILP sub-problems to solve for solutions of the production-logistics.

Priority Dispatch Rules

The eight priority dispatch rules plus one are as follows:

1. Choose the stock-group with the largest individual stock-order quantity required (over the scheduling horizon). If there are lower and upper bounds on the quantities then choose the one with the largest lower quantity, given that these quantities must be met at a minimum else infeasibilities will occur.
2. Choose the stock-group with the largest total stock-order quantity required. If there are lower and upper bounds on the quantities then choose the one with the largest total sum of the lower quantity bounds.
3. Choose the stock-group with the largest number of stock-orders attached.
4. Choose the stock-group with the earliest due-date for any stock-order. There can also be a delivery, distribution, sales or shipping-date, which is a hard or final date before the product-stock is shipped or distributed to customers. The due-date can be considered a softer deadline, as was done in Kudva et al. (1994). The duration between the due-date and the sales-date is usually defined by policy.
5. Choose the stock-group with the shortest stock-order release to due-date duration.
6. Choose the stock-group with the shortest duration outside of the release and due-dates, i.e., the time between the due-date of the previous stock-order and the release-date of the next stock-order.
7. Choose the stock-group with the highest value using its price times its quantity.
8. Choose the stock-group with the lowest random number assigned.
9. Choose the stock-group with the highest user-specified priority.

We have added a ninth rule which is simply a user override so that if a particular stock-group must be scheduled first, for example, it can be specified by the user. The eighth rule is usually used as the benchmark for comparing priority dispatch rules because it is the easiest to implement. Obviously other priority rules can be used, structured around some of the particulars found in Panwalker and Iskander (1977) and Pinedo (1995), where we have chosen the ones that are the most evident. Combinations of the eight rules can of course be used to generate other schedules even with the same model and data. A different sequencing, sorting or ordering of individual stock-orders was also used by Kudva et al. (1994) to create many different schedules from the same model and data, and was used to provide a distribution of solutions to estimate the schedule quality against the root LP relaxation of the overall MILP logistics problem. These rules should also be somewhat instinctive given that a larger stock-order, or one that must be produced earlier, constitutes some level of production priority. From the perspective of infeasibility handling, we recommend using artificial, elastic or penalty variables attached to several of the key quantity and logic constraints of the model.
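Several of these rules reduce to sort keys over the stock-groups. The sketch below covers only rules 1-4, 7 and 8 and assumes a hypothetical order record with 'qty', 'due' and 'price' fields; neither the record layout nor the function name comes from the paper.

```python
import random

def rank_stock_groups(groups, rule, seed=0):
    """Order stock-groups by one dispatch rule (sketch of rules 1-4, 7, 8).
    `groups` maps a group name to its list of orders; each order is a dict
    with hypothetical 'qty', 'due' and 'price' keys."""
    rng = random.Random(seed)
    keys = {
        1: lambda g: -max(o["qty"] for o in groups[g]),               # largest single order
        2: lambda g: -sum(o["qty"] for o in groups[g]),               # largest total quantity
        3: lambda g: -len(groups[g]),                                 # most orders attached
        4: lambda g: min(o["due"] for o in groups[g]),                # earliest due-date
        7: lambda g: -sum(o["qty"] * o["price"] for o in groups[g]),  # highest value
        8: lambda g: rng.random(),                                    # random benchmark
    }
    return sorted(groups, key=keys[rule])
```

Rules 5, 6 and 9 follow the same pattern with release-dates and a user-priority field; negating a key turns "largest first" into an ascending sort.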
When using heuristics there is no guarantee that the decisions made along the path to a solution are consistent with all of the constraints that must be satisfied for the overall problem. Kudva et al. (1994) handle infeasibilities by removing stock-orders when they cannot be scheduled. This is essentially setting the lower and upper bound on the individual stock-order to zero. A less restrictive approach that can be used when MILP is employed as a component of the heuristic search method is to set the lower bound to zero but leave the upper bound as is. This allows for partial fulfillment of a stock-order, which may be better than simply ignoring it completely. Partial fulfillment implies that back-orders exist and must be shipped to the customer at a later date, usually at a higher manufacturing and distribution cost given that the original order needs to be split and fewer economies-of-scale can be afforded.

Unit-Operation to Stock Dependency (UOSD) Matrix

The basic algorithm of the SDH is to first populate an array or table called the unit-operation to stock dependency (UOSD) matrix, which identifies for each unit-operation in the production network whether an individual product-stock requires it from a connectivity
perspective. The matrix has as its rows the unit-operation indices and as its columns the various product-stock indices. The elements of the array are simply zero for not required (i.e., it is not connected in any way to a product-stock demand-point) and one for required (i.e., it is connected to a product-stock demand-point). The matrix can be generated off-line by setting to zero all other product-stocks except for the one in question, which has a lower bound of zero and an upper bound of unity. The UCTLP is run and the unit-operations required are recorded and entered into the UOSD matrix. The UCTLP must be run as many times as there are product-stocks. The UOSD matrix can also be determined by inspection of the production network, although it is easier to automate its population; the same is true for the information matrix described below.

Stock to Stock Dependency (SSD) Matrix

The second step of the SDH is to determine the organization of product-stocks into stock-groups. In general, a product-stock should be grouped with other product-stocks when they are found to be dependent, integrated or inter-related. This means that in order to make product-stock P1, product-stock P2 also needs to be produced, which is not an uncommon situation for divergent-flow processes (i.e., one or more feed-stocks can produce two or more co-products and by-products). This is evident in the scheduling example shown in Figure 1 from Kondili et al. (1993), whereby no capacity details are necessary given that our arguments are based solely on the connectivity dimension.

[Figure 1 flowsheet omitted; it shows feed tanks T1, T2 and T3 for stocks A, B and C, heater H1 (Heating), reactors R1 and R2 (Reaction-1, Reaction-2 and Reaction-3), intermediate tanks T4-T7 for stocks HA, BC, AB and E, still S1 (Distilling), product tanks T8 and T9, and demand-points for product-stocks P1 and P2.]

Figure 1. Kondili et al. (1993) batch-process scheduling example; triangles are tanks, black rectangles are batch-processors, diamonds are demand-points and the circles are stocks, similar to states.
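Populating the UOSD matrix amounts to one connectivity pass per product-stock. The sketch below substitutes plain graph reachability for the per-stock UCTLP runs; the arc list, dict encoding and helper names are illustrative assumptions.

```python
def _reach(adj, starts):
    """Plain depth-first reachability over an adjacency dict."""
    seen, stack = set(starts), list(starts)
    while stack:
        n = stack.pop()
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def build_uosd(arcs, sources, product_stocks):
    """UOSD matrix as a dict of dicts: uosd[stock][unit] is 1 when the
    unit-operation lies on a source-to-sink path for that product-stock
    (one connectivity pass per stock, mirroring one UCTLP run each)."""
    fwd, bwd = {}, {}
    for u, v in arcs:
        fwd.setdefault(u, []).append(v)
        bwd.setdefault(v, []).append(u)
    nodes = {n for arc in arcs for n in arc}
    units = nodes - set(sources) - set(product_stocks)
    from_sources = _reach(fwd, sources)
    uosd = {}
    for p in product_stocks:
        on_path = from_sources & _reach(bwd, [p])
        uosd[p] = {u: int(u in on_path) for u in sorted(units)}
    return uosd
```

On a divergent toy network where heater H1 feeds reactors R1 (making P1) and R2 (making P2), the matrix marks H1 for both stocks but each reactor for only one, which is exactly the zero/one pattern described above.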
There are three feed-stocks A, B and C, four intermediate-stocks HA, BC, AB and E, and two product-stocks P1 and P2. Because Reaction-2 (which can be supported by reactors R1 and R2) is divergent, when P1 is made then AB must be produced as well. If the holdup in tank T6 is at capacity then AB cannot be made, and therefore Reaction-2 cannot start and P1 is not producible; hence P1 is dependent on P2. Recall that, based on connectivity only, whereby tank T6 is modeled in the UCTLP as a node with no accumulation, the UCTLP method will correctly detect that P1 is dependent on P2. On the other hand, when P2 is demanded and P1 is not, Reaction-2 can also not start given that there is no outlet for P1. If some AB is available in tank T6 then one can argue that P2 can be produced without P1, though this is only a short-term gain when there is a finite capacity or availability of AB. The same holds true for P1. If tank T6 is empty then P1 can be produced for a limited amount of time without any P2 being made. It should also be mentioned that the reverse stock flow from the Distilling operation using S1 to tank T6 was broken and a hypothetical tank was inserted to break the recycle loop, as discussed previously. A relatively straightforward and effective method to algorithmically identify stock dependencies is to fix the sink capacity to one for the product-stock in question, while all of the other product-stocks are configured to have a lower bound of zero and an upper bound of one on their flow into their sink nodes. If, after running the UCTLP, which is now modified to minimize the internal or arc flows in the network, one of the other product-stocks has a non-zero flow, then the product-stock in question is declared to be dependent on the product-stock with the non-zero detectable activity.
This is correct because the dependent product-stock in question has no choice but to be produced in sympathy with the other dependent product-stocks, in light of the fact that all of the internal flows are minimized instead of being maximized. A second matrix called the stock to stock dependency (SSD) matrix is configured given the results from the modified UCTLP. Alternatively, the matrix can be populated manually by inspection of the production flowsheet, and this provides the details on which product-stocks to group together in a stock-group. In the limiting case with one stock-group we have the original problem. The other extreme is to have as many stock-groups as there are individual product-stock items, where a further extreme is to define as many product-stocks as there are stock-order instances, which in spirit is identical to the Kudva et al. (1994) heuristic. It is important to remark on sequence-dependent or spatial switch-over details such as the existence of product-wheels or cycles. Product-wheels force a rigid succession from one product-stock to another over time. Product-wheels do not necessarily pose any special problem except that it may prove beneficial to collect the product-stocks involved in a wheel into the same group. Step three is to take the details of the product-stock orders and calculate, using the nine dispatch rules, the priority, sequence or queue of each stock-group in the heuristic so that there is a unique in-series index for each stock-group, with the first stock-group specified as number one. If any of the product-stocks have zero stock-orders attached then we remove this stock from the problem along with any unit-operations only required for that stock. Stock-order prioritization was also a primary feature in the Kudva et al. (1994) heuristic, where an "order list" was used with the first stock-order to be scheduled positioned at the top of the list.
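Once the SSD matrix is known, forming stock-groups can be read as taking connected components of the dependency relation, since dependency pulls stocks into the same group transitively. The union-find sketch below assumes the SSD is given as a dict of dependency sets, which is our illustrative encoding, not the paper's.

```python
def stock_groups(ssd):
    """Form stock-groups as connected components of the SSD relation.
    `ssd` maps each product-stock to the set of stocks it depends on."""
    parent = {s: s for s in ssd}

    def find(s):
        # path-halving find
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    for s, deps in ssd.items():
        for d in deps:
            parent[find(s)] = find(d)  # union the two components
    groups = {}
    for s in ssd:
        groups.setdefault(find(s), set()).add(s)
    return sorted(map(sorted, groups.values()))
```

With P1 and P2 mutually dependent (as in Figure 1) and P4 dependent on P3, this yields the two stock-groups {P1, P2} and {P3, P4}; an independent stock would form a singleton group.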
Unit to Stock-Group Dependency (USGD) Matrix

Step four is to populate what we call the unit to stock-group dependency (USGD) matrix, which can be easily generated from the UOSD and SSD matrices and does not require the UCTLP. This matrix identifies the physical units with operations involved in multiple stock-groups. These are units that are renewable resources shared amongst one or more stock-groups, and they can make the production difficult to schedule due to the extra coordination they demand. The USGD matrix has as its rows a list of all of the units in the production superstructure and as its columns a list of all of its stock-groups. For instance, if product-stocks P1 and P3 are in separate stock-groups but they share some of the same units, then the USGD matrix will have more than one column with a one (1) populated for the same unit (or row). This information is required in the SDH in order to be "conservative" when attempting to fix unit-operation logic variables if they are found to be one for a particular stock-group MILP solution. Conservative means not fixing them to one if they are found to be one at a partial or intermediate solution. If we are "aggressive", this means that we fix them to one if they are found to be one at an intermediate solution. For tightly equipment-constrained problems, conservative fixing will in general take longer to solve because fewer variables can be fixed to one, although better solutions will potentially result given that the heuristic is less myopic or greedy and prolongs the commitment to fix unit-operation logic variables. These identified shared units and their operations are relaxed to between zero and one for all stock-groups except for the last one, where they are made explicitly binary for the final stock-group MILP. This relax-and-fix technique can be found in Wolsey (1998) and is used successfully in Kelly and Mann (2004b).
Moreover, all of the units that are only required for one stock-group will be properly removed from the problem when another stock-group is solved, hence reducing the size of the MILP problem, which as mentioned is the primary objective of the SDH.

Stock Decomposition Heuristic Algorithm

The fifth step is to select the lowest-numbered stock-group in the queue that hasn't already been scheduled and to set up and solve a MILP with only those unit-operations that are required as found in the UOSD matrix. If the MILP takes too long to solve then the problem is most likely tightly resource-constrained and the problem data may need to be reviewed and vetted for correctness. If the first MILP for the first stock-group solved in the queue is infeasible, then the entire problem is infeasible and the heuristic terminates. Interestingly, the SDH can be used as an MILP infeasibility diagnostic tool even if the schedules are not generated using it. Passing through each stock-group one at a time, as if it were the first stock-group to be scheduled, can be used to help locate or identify the potential cause of the infeasibility. If the MILP is feasible then keep the best solution found given the available amount of computer time allotted. Fix any logic variable to one if it is one in the best solution. If it is zero then leave the logic variable as a degree-of-freedom for the next stock-group in the queue to use if required. We do not fix the quantity variables to the values
found in the current MILP solution. These are also left as degrees-of-freedom in order that other stock-groups can be satisfied using the same unit-operation but by increasing the lot, batch or charge-size, for example. Step six is to proceed to the next stock-group in the queue and to set up and solve another MILP. If the problem is infeasible, or a feasible solution cannot be found in reasonable time, then terminate the SDH. This is an indication that the previous MILP sub-problems for the previous stock-groups have positioned the search into a region that is infeasible or very tight for the other stock-groups to follow. The recourse for the user is to generate an alternate queue sequence for the stock-groups. If a feasible solution has been found then fix to one the logic variables that are found to be one at the solution; again leave the logic variables that are zero, and the quantity variables, for further adjustment. The seventh step is to repeat steps five and six until all of the stock-groups have been scheduled except for the last stock-group in the queue. If a feasible solution has not been found so far then it is possible to use another priority dispatch rule to re-sequence the stock-group queue. An alternative is to use a simple local search technique to randomly interchange or swap the first stock-group with any other stock-group, provided the original first stock-group is able to find a feasible solution; else, as mentioned, the overall problem is truly infeasible and requires scrutiny or analysis. A different approach to the stock-group prioritization is to use a breadth-first search notion (as opposed to the implicit depth-first search used in the above strategy) to solve each stock-group individually and record the objective functions. Rank each stock-group's objective function from best to worst and use this ranking as the prioritization of the stock-groups.
If any of the stock-groups are infeasible as the first group then the entire problem is infeasible and that stock-group should be investigated. There can also be situations where all of the stock-groups have feasible solutions as the first stock-group but the entire problem is infeasible. When shared resources such as equipment, materials and labor are required, it is not until all stock-orders or demands are included that a limiting resource becomes apparent; this can easily occur in the illustrative example below. The eighth and final step is to take the last stock-group in the queue, explicitly declare as binary all of the unit-operation logic variables related to the units in the USGD matrix, and solve the MILP with all of the stock-group stock-orders included back in the problem. All integer-feasible solutions generated from this problem are valid for the entire or original problem. It is also possible to modify the SDH so that only a subset of the stock-groups is solved sequentially. The remaining stock-groups are combined into the final group and one more MILP is solved, completing the heuristic (i.e., no different than simply increasing the number of stocks in the last stock-group). The idea of truncating the SDH, so to speak, is to increase the chance of finding good feasible solutions given that more degrees-of-freedom are available to the branch-and-bound search in the MILP when all remaining stock-groups are included. That is, there is less chance of incorrectly fixing a logic variable to one in a wrong time-period. The section below should help to clarify the SDH through a simple but representative illustrative example.
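Steps five through eight amount to a relax-and-fix loop over the queued stock-groups. A skeleton of that control flow is sketched below, with a caller-supplied `solve_milp` stub standing in for the stock-group MILPs; the function names and signatures are illustrative assumptions, not the paper's implementation.

```python
def sdh(queue, solve_milp):
    """Skeleton of SDH steps 5-8. `solve_milp(group, fixed, relax_shared)`
    stands in for one stock-group MILP: it receives the logic variables
    already fixed to one and returns a dict of logic-variable values,
    or None if infeasible."""
    fixed = {}
    sol = None
    for i, group in enumerate(queue):
        last = (i == len(queue) - 1)
        # shared-unit logic variables stay relaxed until the final group
        sol = solve_milp(group, dict(fixed), relax_shared=not last)
        if sol is None:
            if i == 0:
                raise ValueError("first stock-group infeasible: whole problem infeasible")
            return None  # recourse: re-sequence the queue and retry
        if not last:
            # relax-and-fix: commit only the ones; zeros and quantities stay free
            fixed.update({v: 1 for v, val in sol.items() if val == 1})
    return {**fixed, **sol}
```

Returning None signals the caller to try another dispatch rule or swap stock-groups in the queue, exactly the recourse described in step seven.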
Illustrative Example

This example, taken from Kondili et al. (1993), is used in many studies on production scheduling. It is modified here in several ways in order to exaggerate some of the main points presented above. First, we neglect the cost of inventory and, consistent with the original problem description, there is no opening holdup for any of the intermediate-stocks. Second, we copy the network shown in Figure 1 two more times to increase the number of product-stocks, with Table 1 detailing the unit-operations for the larger problem as well as showing the UOSD matrix. Third, we increase the time horizon to 50-hours instead of 10-hours, where the time-period duration is uniform at 1-hour. Fourth, we have shared the heater and still units across the three sub-networks to model in more depth the situation of unit flow-reversal; Table 1 details the units and operations inside the UOSD matrix with respect to the product-stocks. This unit flow-reversal expresses the operating or production logistics detail that a piece of equipment must hypothetically flow, move or transfer between the three sub-networks, and is similar to a cycle found in a job-shop scheduling disjunctive-graph (Pinedo (1995)). Unit flow-reversals are similar to the way stock or material flows from one place to another in a recycle loop, for example, except that for unit flow-reversals the units are typically immobile or stationary and the piping is re-routed to the unit, as opposed to the stock being mobile or transportable, flowing inside the piping. Fifth, we have added two 100-kg capacity tanks, with opening inventory of zero, before the lifting of all of the product-stocks. And sixth, we add three product-stock orders to each of the six product-stocks as shown in Table 2, where the SSD matrix is shown in Table 3.
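Although the UCTLP itself is a linear program, its net effect on network compression for this example can be illustrated with a much simpler graph-reachability stand-in: a unit-operation is kept for a given product-stock only if it lies on some directed path from a feed-stock to that product-stock. The node names and the `op:` prefix below are invented for illustration, and the sketch ignores the capacity and lot-sizing detail that the real UCTLP handles.

```python
def required_unit_operations(arcs, feeds, product):
    """Simplified stand-in for the UCTLP network compression: keep a
    unit-operation node for `product` only if it is reachable from some
    feed-stock AND can itself reach the product-stock.

    arcs    : directed edges (node, node) over stock and "op:" nodes
    feeds   : set of feed-stock nodes
    product : the product-stock being compressed for
    """
    succ, pred = {}, {}
    for u, v in arcs:
        succ.setdefault(u, set()).add(v)
        pred.setdefault(v, set()).add(u)

    def reach(starts, nbrs):
        # plain depth-first reachability; cycles (recycle loops) are fine
        seen, stack = set(starts), list(starts)
        while stack:
            for w in nbrs.get(stack.pop(), ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    forward = reach(feeds, succ)        # reachable from the feed-stocks
    backward = reach({product}, pred)   # can reach the product-stock
    return {n for n in forward & backward if n.startswith("op:")}
```

On a toy version of one Kondili sub-network, the still is (correctly) dropped when compressing for the reaction-made product but kept for the distilled product, even with the recycle arc present.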
Table 1. Unit-operation to stock dependency (UOSD) matrix; rows marked with * denote a unit shared across the three sub-networks (bolded in the original). It should be mentioned that we still only have three shared feed-stocks A, B and C, with the appropriate opening inventory to satisfy all of the product-stock demand-orders over the horizon.

                     Stocks
Unit  Operation      P1  P2  P3  P4  P5  P6
Sub-Network 1
T1*   A               1   1
T2*   B               1   1
T3*   C               1   1
H1*   Heating1        1   1
T41   HA1             1   1
R11   Reaction-11     1   1
R21   Reaction-11     1   1
R11   Reaction-21     1   1
R21   Reaction-21     1   1
T51   BC1             1   1
T61   AB1             1   1
R11   Reaction-31     1   1
R21   Reaction-31     1   1
T71   E1              1   1
S1*   Distilling1     1   1
T81   P1              1   1
P1    P1              1   1
T91   P2              1   1
P2    P2              1   1
Sub-Network 2
H1*   Heating2                1   1
T42   HA2                     1   1
R12   Reaction-12             1   1
R22   Reaction-12             1   1
R12   Reaction-22             1   1
R22   Reaction-22             1   1
T52   BC2                     1   1
T62   AB2                     1   1
R12   Reaction-32             1   1
R22   Reaction-32             1   1
T72   E2                      1   1
S1*   Distilling2             1   1
T82   P3                      1   1
P3    P3                      1   1
T92   P4                      1   1
P4    P4                      1   1
Sub-Network 3
H1*   Heating3                        1   1
T43   HA3                             1   1
R13   Reaction-13                     1   1
R23   Reaction-13                     1   1
R13   Reaction-23                     1   1
R23   Reaction-23                     1   1
T53   BC3                             1   1
T63   AB3                             1   1
R13   Reaction-33                     1   1
R23   Reaction-33                     1   1
T73   E3                              1   1
S1*   Distilling3                     1   1
T83   P5                              1   1
P5    P5                              1   1
T93   P6                              1   1
P6    P6                              1   1
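The stock-groups (Table 3) follow mechanically from the UOSD matrix of Table 1: two stocks belong to the same group whenever they depend on a common unit-operation, closed transitively. A minimal union-find sketch of this grouping step; the dictionary encoding of the UOSD matrix is an assumption, not the paper's data structure:

```python
def stock_groups(uosd):
    """Derive the stock-to-stock dependency (SSD) groups from a UOSD
    matrix given as {unit_operation: set_of_dependent_stocks}: stocks
    sharing any unit-operation are merged, transitively, into one group.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for stocks in uosd.values():
        stocks = sorted(stocks)            # deterministic order
        for s in stocks:
            find(s)                        # register every stock seen
        for s in stocks[1:]:
            parent[find(s)] = find(stocks[0])  # merge with the first stock

    groups = {}
    for s in parent:
        groups.setdefault(find(s), set()).add(s)
    return sorted(sorted(g) for g in groups.values())
```

Note that grouping is by shared unit-operations, not shared physical units: the shared heater and still carry distinct operations (Heating1, Heating2, Heating3, ...), so they do not merge the three sub-networks here; those shared units surface instead in the USGD matrix of Table 4.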
Table 2. Product-stock order details; time in hours and quantity rates in kg/hour.

Order #  Stock  Start-Time  End-Time  Lower Rate  Upper Rate
1        P1     6           8         50          50
2        P1     18          20        75          75
3        P1     30          32        60          100
4        P2     8           10        50          50
5        P2     20          22        75          75
6        P2     32          34        85          100
7        P3     10          12        50          50
8        P3     22          24        75          75
9        P3     34          36        60          100
10       P4     12          14        50          50
11       P4     24          26        75          75
12       P4     36          38        85          100
13       P5     14          16        50          50
14       P5     26          28        75          75
15       P5     38          40        60          100
16       P6     16          18        50          50
17       P6     28          30        75          75
18       P6     40          42        85          100

Table 3. Stock to stock dependency (symmetric) matrix.

         Stocks
Stocks   P1  P2  P3  P4  P5  P6
P1        1   1
P2        1   1
P3                1   1
P4                1   1
P5                        1   1
P6                        1   1

Table 4. Unit to stock-group dependency matrix. Sub-networks 2 and 3 are similar but with the proper stock-group dependencies.

        Stock-Groups
Unit    P1,P2  P3,P4  P5,P6
T1        1      1      1
T2        1      1      1
T3        1      1      1
H1        1      1      1
S1        1      1      1
Sub-Network 1
T41       1
R11       1
R21       1
R11       1
R21       1
T51       1
T61       1
R11       1
R21       1
T71       1
T81       1
T91       1

In this example we maximize the profit, which is simply the revenue or sales of the product-stocks, each with a value of $10/kg. All MILP runs are performed using XPRESS-MILP from Dash Optimization Inc. (Gueret et al. (2002)) release 2004B and executed on
a 1.7 GHz Pentium IV laptop. The original MILP, with no SDH and after presolve or preprocessing, has 21,962 rows, 10,553 columns, 77,307 non-zero coefficients and 1,200 binary variables. The root LP relaxation objective function is $27,060. After one and a half hours of running time (5,400-seconds) using default MILP settings, the best integer-feasible solution found is $25,862.66 with the best bound still at the root LP value, indicating that further optimization can in theory be achieved. The accumulated time to run the UCTLP for the six product-stocks is under 1-second (i.e., to populate Table 1) and the time to determine the stock-groups using the modified UCTLP is also under 1-second (i.e., Table 3). Given that there is no priority distinction between the order details for the three stock-groups except for the earliest start-times, stock-group one is chosen first, stock-group two second and so on. A conservative (relax-and-fix) strategy is used to handle the operations on the physical units found in Table 4, except for the tanks because these are in a dedicated service. This implies that only when scheduling the final stock-group are the binary variables associated with the unit-operations in Table 4 explicitly declared to be binary. When we choose the first stock-group (i.e., P1 and P2) as the first in the queue, which means that we can remove all of the unit-operations that are redundant, the MILP sub-problem after presolve has 7,377 rows, 3,431 columns, 24,045 non-zeros and 300 binary variables. It takes 299-seconds to find the optimal objective function of $9,020. This is a relatively long solution time, which is a result of the first stock-order for stock P1 having a start-time of 6-hours; finding good solutions requires some level of backtracking in the MILP.
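The conservative relax-and-fix typing of the logic variables in each sub-problem can be sketched as follows; the function name, the label strings and the set-based interface are illustrative assumptions, not the actual solver interface:

```python
def variable_types(group_ops, shared_ops, fixed_to_one, final):
    """Conservative relax-and-fix typing for one SDH sub-problem: logic
    variables fixed in earlier sub-problems stay at one; shared-unit
    operations (the Table 4 / USGD set) are kept LP-relaxed until the
    final stock-group, where they are explicitly declared binary.
    """
    types = {}
    for op in group_ops:
        if op in fixed_to_one:
            types[op] = "fixed@1"          # carried over from earlier groups
        elif op in shared_ops and not final:
            types[op] = "relaxed"          # continuous in [0, 1] for now
        else:
            types[op] = "binary"
    return types
```

In the runs above the tanks are excluded from this treatment because they are in dedicated service, so their operations are always declared binary within their own stock-group's sub-problem.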
It should be mentioned that when we solve for stocks P1 and P2 alone, as if the other four stocks were not present, and we declare the units in Table 4 to be explicitly binary, it takes the MILP over 3,600-seconds to find the optimal solution with a profit of $9,020, implying a tightly constrained problem. We then fix any logic variable that is found to be one at this partial solution (and not in Table 4) and proceed to stock-group two. The second MILP sub-problem solved, which includes only those orders and unit-operations that are required for stock-group two, has identical problem statistics to the first but takes 4-seconds to solve instead of 299-seconds. We proceed to stock-group three, whereby we now include as explicit binary variables those found in Table 4 and we include all of the stock-group orders. This MILP problem has, after presolve, 12,266 rows, 6,273 columns, 43,149 non-zeros and 636 binary variables and takes 47-seconds to find a good integer-feasible solution of $27,014.08, which is better than the original MILP running over 5,400-seconds. After 185-seconds it finds a provably optimal value of $27,060, identical to the root LP relaxation, indicating that our SDH did not lose any production capability during the solution. Adding up the total time to find the optimal solution gives 299 + 4 + 185 + 2 (for the UCTLP network compression procedure) = 490-seconds, considerably better than solving the original MILP without the SDH.
Conclusion

In conclusion, we have articulated and successfully demonstrated a new heuristic that can be used to help in the scheduling of large and complex production systems for the process industries. These production systems are known as closed-shops and involve, at the core, a lot-sizing problem as well as the assignment, sequencing and timing activities found in open-shop scheduling systems (Graves (1981)). Admittedly, only one problem instance was highlighted. However, all production scheduling problems have stocks and unit-operations and therefore have a structure that can be exploited in the manner described in this study. Additionally, the SDH provides an analytical framework to understand the fundamental details of production, especially around the dependency of unit-operations with respect to stocks. A novel unit-capacity transshipment LP (UCTLP) technique was described which can quickly and accurately determine which unit-operations are required for each stock (i.e., the UOSD matrix) when recycle loops exist. This information is used to implement the SDH and is required to recognize which stocks are inter-dependent in order that they may be grouped together for the purpose of potentially finding better schedules (i.e., the SSD matrix). The UOSD and SSD matrices are used to generate a third matrix (i.e., the USGD matrix) which exposes physical units that are required by more than one stock-group. These units are shared resources and, if not properly managed during the search, may cause solution error manifested as apparent infeasibilities when the problem is truly feasible. Finally, this heuristic should be considered as another tool when solving difficult production scheduling problems, although it can also provide valuable insight into the organization and arrangement of the production system itself.

Acknowledgement

The author would like to thank John L. Mann, also of Honeywell Process Solutions, for his assistance and technical support of this paper.
References

Bang-Jensen, J. and Gutin, G., Digraphs: Theory, Algorithms and Applications, Springer-Verlag, London, (2000).

Bassett, M.H., Pekny, J.F. and Reklaitis, G.V., "Decomposition techniques for the solution of large-scale scheduling problems", AIChE Journal, 42, 12, 3373-3387, (1996).

Blomer, F. and Gunther, H.-O., "LP-based heuristics for scheduling chemical batch processes", International Journal of Production Research, 38, 5, 1029-1051, (2000).

Dolan, A. and Aldous, J., Networks and Algorithms: An Introductory Approach, John Wiley & Sons, New York, (1993).

Fleischer, L.K., "Faster algorithms for the quickest transshipment problem", SIAM Journal on Optimization, 12, 18-35, (2001).

Graves, S.C., "A review of production scheduling", Operations Research, 29, 4, 646-675, (1981).

Gueret, C., Prins, C., Sevaux, M. and Heipcke, S. (revisor and translator), Applications of Optimization with Xpress-MP, Dash Optimization, Blisworth, Northants, UK, (2002).

Herrmann, J.W., Ioannou, G., Minis, I., Nagi, R. and Proth, J.M., "Design of material flow networks in manufacturing facilities", Journal of Manufacturing Systems, 14, 277-289, (1995).
Jain, V. and Grossmann, I.E., "Algorithms for hybrid MILP/CP models for a class of optimization problems", INFORMS Journal on Computing, 13, 258-276, (2001).

Kelly, J.D., "Chronological decomposition heuristic for scheduling: a divide & conquer method", AIChE Journal, 48, 2995-2999, (2002).

Kelly, J.D. and Mann, J.L., "Crude-oil blend scheduling optimization: an application with multi-million dollar benefits – parts I and II", Hydrocarbon Processing, June/July, (2003).

Kelly, J.D., "Next generation refinery scheduling technology", NPRA Plant Automation and Decision Support Conference, September, San Antonio, Texas, (2003a).

Kelly, J.D., "Smooth-and-dive accelerator: a pre-MILP primal heuristic applied to scheduling", Computers & Chemical Engineering, 27, 827-832, (2003b).

Kelly, J.D., "Production modeling for multimodal operations", Chemical Engineering Progress, February, (2004a).

Kelly, J.D. and Mann, J.L., "Flowsheet decomposition heuristic for scheduling: a relax-and-fix method", Computers & Chemical Engineering, 28, 2193-2200, (2004b).

Knuth, D.E., Fundamental Algorithms, The Art of Computer Programming, Volume 1, Second Edition, Addison-Wesley Longman Publishing, Redwood City, (1973).

Kondili, E., Pantelides, C.C. and Sargent, R.W.H., "A general algorithm for short-term scheduling of batch operations – I. MILP formulation", Computers & Chemical Engineering, 17, 211-227, (1993).

Kudva, G., Elkamel, A., Pekny, J.F. and Reklaitis, G.V., "Heuristic algorithm for scheduling batch and semi-continuous plants with production deadlines, intermediate storage limitations and equipment change-over costs", Computers & Chemical Engineering, 18, 9, 859-875, (1994).

Langkau, K., Flows Over Time with Flow-Dependent Transit Times, Ph.D. Dissertation, Technical University of Berlin, Germany, (2003).

Panwalkar, S.S. and Iskander, W., "A survey of scheduling rules", Operations Research, 25, 45-61, (1977).
Pinedo, M., Scheduling: Theory, Algorithms and Systems, Prentice Hall, New Jersey, (1995).

Wolsey, L.A., Integer Programming, John Wiley & Sons, New York, (1998).

Wu, D. and Ierapetritou, M.G., "Decomposition approaches for efficient solution of short-term scheduling problems", Computers & Chemical Engineering, 27, 1261-1276, (2003).