A Comparison of Solution Procedures for the Flow Shop Scheduling Problem with Late Work Criterion

Jacek Błażewicz *), Erwin Pesch 1), Małgorzata Sterna 2), Frank Werner 3)

*) Institute of Computing Science, Poznań University of Technology, Piotrowo 3A, 60-965 Poznań, Poland; phone: +48 (61) 8790 790; fax: +48 (61) 8771 525; blazewic@sol.put.poznan.pl

1) Institute of Information Systems, FB 5 - Faculty of Economics, University of Siegen, Hölderlinstrasse 3, 57068 Siegen, Germany; pesch@fb5.uni-siegen.de

2) Institute of Computing Science, Poznań University of Technology, Piotrowo 3A, 60-965 Poznań, Poland; Malgorzata.Sterna@cs.put.poznan.pl

3) Faculty of Mathematics, Otto-von-Guericke-University, PSF 4120, 39016 Magdeburg, Germany; Frank.Werner@mathematik.uni-magdeburg.de


Abstract

In this paper, we analyze different solution procedures for the two-machine flow shop scheduling problem with a common due date and the weighted late work criterion, i.e. for problem F2 | dj = d | Yw, which is known to be binary NP-hard. In computational experiments, we compare the practical efficiency of a dynamic programming approach, an enumerative method and a heuristic list scheduling procedure. Test results show that each solution method has its advantages and none of them can be rejected from consideration a priori.

Keywords: flow shop, late work, dynamic programming, enumerative method, list scheduling


1. Introduction

The process of investigating every new optimization problem follows the same well-known scheme (cf. e.g. (Błażewicz, Ecker, Pesch, Schmidt & Węglarz, 2001), (Brucker, 1998)). If researchers have failed in constructing an optimal polynomial-time method solving the problem under consideration, they attempt to prove the hardness of the problem. For NP-hard problems, exact but exponential time methods can be proposed (including pseudo-polynomial time approaches for binary NP-hard problems) or efficient but approximate ones. We have followed the same route in our research on the two-machine flow shop problem with a common due date and the weighted late work criterion, F2 | dj = d | Yw, which appeared to be binary NP-hard (Błażewicz, Pesch, Sterna & Werner, 2004b).

In the problem considered in this paper, we have to find an optimal schedule for a set of jobs J = {J1, …, Jj, …, Jn} on two dedicated machines M1, M2. Each job Jj ∈ J consists of two tasks T1j and T2j, executed on machines M1, M2 for p1j, p2j time units, respectively. Moreover, to each job Jj a weight wj is assigned. Each job has to be performed, without preemption, first on machine M1 and then on M2. Furthermore, each job can be processed on at most one machine at a time and each machine can perform at most one task at a time. Solving the problem, we look for a schedule minimizing the total weighted late work in the system (Yw). The late work (Yj) for job Jj ∈ J is determined as the sum of the late parts of its tasks, i.e. the task parts executed after the common due date d. Denoting by Cij the completion time of task Tij, the late work of job Jj equals

Yj = Σi=1,2 min{max{0, Cij − d}, pij}.

The criterion value is determined as

Yw = Σj=1,…,n wj Yj

(cf. Figure 1).

Figure 1. An illustrative instance of problem F2 | dj = d | Yw
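As an illustration, the criterion above can be evaluated directly from a job permutation. The sketch below (function and parameter names are our own, not from the paper) computes the task completion times of a two-machine permutation schedule and the resulting weighted late work:

```python
def weighted_late_work(seq, p1, p2, w, d):
    """Evaluate the criterion Yw for a permutation schedule of a two-machine
    flow shop with common due date d.

    seq : job indices in processing order (same order on both machines)
    p1, p2 : task processing times on machines M1 and M2
    w : job weights
    """
    c1 = c2 = 0      # completion times of the last tasks scheduled on M1, M2
    yw = 0
    for j in seq:
        c1 += p1[j]                 # T1j starts as early as possible on M1
        c2 = max(c1, c2) + p2[j]    # T2j waits for T1j and for M2 to be free
        # late work of Jj: the parts of its tasks executed after d
        yj = min(max(0, c1 - d), p1[j]) + min(max(0, c2 - d), p2[j])
        yw += w[j] * yj
    return yw
```

For the two-job instance p1 = [2, 3], p2 = [2, 1], w = [1, 2] with d = 4, the sequence [0, 1] yields Yw = 4: job J1 has one late time unit on M1 and one on M2, weighted by w1 = 2.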


The late work objective function is a due date involving criterion, which was proposed in the context of parallel machines ((Błażewicz, 1984), (Błażewicz & Finke, 1987)) and then applied to the one-machine scheduling problem (Potts & Van Wassenhove, 1991a, 1991b). Recently, this performance measure has been analyzed in the shop environment ((Błażewicz, Pesch, Sterna & Werner, 2000, 2003, 2004a, 2004b), (Sterna, 2000)). Moreover, the late work criterion can also be considered as a special case of the imprecise computation model (cf. e.g. (Ho, Leung & Wei, 1994), (Shih, Liu & Chung, 1991)) applied in real time scheduling. Minimizing the amount of late work in a system finds many practical applications, e.g. in data collecting in control systems (Błażewicz, 1984), in agriculture technologies, e.g. when perishable goods are considered (Potts & Van Wassenhove, 1991), or in designing production plans for predefined time periods in flexible manufacturing systems (Sterna, 2000).

Within the earlier research (Błażewicz et al., 2004b), binary NP-hardness of problem F2 | dj = d | Yw has been proved by presenting a transformation from the partition problem (Garey & Johnson, 1979) and proposing an extensive pseudo-polynomial time dynamic programming method (DP) of complexity O(n^2 d^4). Usually, pseudo-polynomial time approaches are considered only theoretically. We have decided to implement the dynamic programming method in order to validate the correctness proof by computational experiments and, first of all, to compare in practice some basic approaches to solving hard scheduling problems. A dynamic programming method with a pseudo-polynomial time complexity is intuitively considered as more efficient than other enumerative methods. The computational analysis presented in this paper shows that even a simple enumerative approach cannot be rejected from consideration a priori. On the other hand, a heuristic procedure based on a list scheduling framework appeared to be a really effective and efficient approximate method. This frequently used scheduling technique (e.g. (Błażewicz et al., 2001), (Brucker, 1998), (Haupt, 1989)) confirms once more its advantages.

The paper is organized as follows. In Section 2, we sketch a dynamic programming method, while in Sections 3 and 4, an enumerative approach and a heuristic procedure are presented, respectively. Computational experiment results are provided and discussed in Section 5. The paper is concluded in Section 6.

2. Dynamic Programming Method

Problem F2 | dj = d | Yw, as a binary NP-hard case, can be solved optimally by a dynamic programming approach (Błażewicz et al., 2004b) in pseudo-polynomial time O(n^2 d^4). This approach is based on the property (Błażewicz et al., 2004b) that in an optimal solution, all early jobs have to be scheduled before the common due date in Johnson's order (Johnson, 1954). Johnson's order, which is optimal from the schedule length viewpoint, requires that jobs whose first task is shorter than the second one (p1j < p2j) are scheduled in non-decreasing order of p1j, while the remaining ones, whose task on M1 requires at least as many time units as the task on M2 (p1j ≥ p2j), follow in non-increasing order of p2j. As a consequence of this property of the problem under consideration, the proposed DP method determines the first late job in the system and, then, it divides the set of the remaining jobs into two subsets containing activities that are totally early and partially or totally late. All early jobs have to be scheduled in Johnson's order, while the sequence of totally late activities can be arbitrary. Because the sequence of analyzing jobs is determined by Johnson's order, the method does not need to investigate all possible permutations of jobs on machines.

Denoting by Ĵn the job selected as the first late job and numbering the remaining jobs J \ {Ĵn} from Ĵ1 to Ĵn-1 in Johnson's order, the dynamic programming algorithm makes decisions based on an initial condition fn(A, B, t, a) and a recurrence function fk(A, B, t, a). The value of these functions denotes the maximum amount of early work of jobs Ĵk, Ĵk+1, ..., Ĵn, assuming that Ĵn is the first late job and that the first job among Ĵk, Ĵk+1, ..., Ĵn starts on machine M1 exactly at time A and not earlier than at time B on M2. Moreover, exactly t time units are reserved for executing jobs Ĵ1 to Ĵk-1 after Ĵn on M1 before the common due date d, and it is assumed that no job (a = 0) or exactly one job (a = 1) among Ĵ1 to Ĵk-1 is partially executed on machine M1 after Ĵn before d. The criterion value corresponding to a certain selection of the first late job is represented by f1(0, 0, 0, 0). Obviously, maximizing the weighted early work is equivalent to minimizing the weighted late work in the system. To find an optimal solution of the problem, each job is considered as the first late job in order to determine the best first late job, J*, ensuring the optimal criterion value F* = f1(0, 0, 0, 0). The general scheme of the dynamic programming approach (Błażewicz et al., 2004b), running in O(n^2 d^4) time, is presented below.

J = {J1, ..., Jn};
for j = 1 to n do
    set Ĵ = J \ {Jj} and Jj as the first late job Ĵn;
    renumber the jobs from Ĵ in Johnson's order as Ĵ1, …, Ĵn-1;
    calculate the initial conditions fn(A, B, t, a) for Ĵn, for 0 ≤ A, B, t ≤ d and a ∈ {0, 1};
    for k = n-1 to 1 do
        calculate the recurrence relations fk(A, B, t, a) for Ĵk, for 0 ≤ A, B, t ≤ d and a ∈ {0, 1};
    set Fj = f1(0, 0, 0, 0) as the total weighted early work subject to the first late job Ĵn;
set F* = max{Fj : j = 1, ..., n} as the optimal total weighted early work and set J* to be a job with Fj = F*;
based on the dynamic programming results for the first late job J*, determine:
    J^E - the set of early jobs,
    J^P - the set of jobs performed between J* and d on M1,
    J^L - the set of late jobs;
construct an optimal schedule by executing the jobs from J^E in Johnson's order, followed by J*, performing the first tasks of the jobs from J^P after J* before d on M1 in an arbitrary order, and executing the jobs from J^L after d in an arbitrary order.
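Johnson's order used in the scheme above can be sketched as a small sorting routine (an illustrative helper, not the paper's implementation):

```python
def johnson_order(jobs, p1, p2):
    """Johnson's order (Johnson, 1954): jobs with p1j < p2j first, sorted by
    non-decreasing p1j, followed by the remaining jobs (p1j >= p2j) sorted by
    non-increasing p2j."""
    first = sorted((j for j in jobs if p1[j] < p2[j]), key=lambda j: p1[j])
    rest = sorted((j for j in jobs if p1[j] >= p2[j]), key=lambda j: -p2[j])
    return first + rest
```

For example, with p1 = [1, 4, 2, 5] and p2 = [3, 2, 5, 1], jobs 0 and 2 form the first group (sorted by p1) and jobs 1 and 3 the second (sorted by decreasing p2), giving the sequence [0, 2, 1, 3].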


3. Enumerative Method

To check the correctness of the rather complicated dynamic programming method sketched in the previous section, we have also designed an enumerative approach, which finds an optimal solution by systematically exploring the solution space. This algorithm checks all possible subsets of early jobs (E), executed in Johnson's order. Then, it considers each job among the remaining ones as the first late job Jx (if possible) and it completes a partial schedule with the other jobs from J \ E sequenced according to non-increasing weights. Obviously, particular sets of early jobs (and, in consequence, sequences of late ones) are considered only once, in the order of non-decreasing cardinalities. Thus, not all possible permutations of n jobs are explicitly checked by the method, which, despite this fact, is obviously an exponential one with complexity O(n 2^n). The outline of the enumerative method mentioned is presented below.

for each set E ⊆ J such that a partial schedule obtained by sequencing the jobs from E in Johnson's order does not exceed d, and Johnson's schedule for E ∪ {Jx}, where Jx ∈ J \ E, exceeds d do
    for each job Jx ∈ J \ E do
        construct a schedule by executing the jobs from E in Johnson's order, followed by Jx and the jobs from J \ (E ∪ {Jx}) sequenced according to non-increasing weights;
        store the best solution constructed for set E, if it is better than the one already found.
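As a rough illustration, a brute-force variant of this scheme — with the pruning of early-job sets dropped for brevity, so it inspects every subset E — might look as follows (all names are illustrative; usable only for very small n):

```python
from itertools import combinations

def enumerative_optimum(p1, p2, w, d):
    """Brute-force sketch of the enumerative method: each subset E is
    scheduled early in Johnson's order, each remaining job Jx is tried as
    the first late job, and the other late jobs follow in non-increasing
    weight order. Returns the minimal weighted late work found."""
    jobs = list(range(len(p1)))

    def johnson(js):
        a = sorted((j for j in js if p1[j] < p2[j]), key=lambda j: p1[j])
        b = sorted((j for j in js if p1[j] >= p2[j]), key=lambda j: -p2[j])
        return a + b

    def late_work(seq):
        c1 = c2 = yw = 0
        for j in seq:
            c1 += p1[j]
            c2 = max(c1, c2) + p2[j]
            yw += w[j] * (min(max(0, c1 - d), p1[j]) + min(max(0, c2 - d), p2[j]))
        return yw

    best = late_work(johnson(jobs))        # the schedule with all jobs early
    for r in range(len(jobs)):
        for E in combinations(jobs, r):
            rest = [j for j in jobs if j not in E]
            for x in rest:                 # x plays the role of the first late job Jx
                tail = sorted((j for j in rest if j != x), key=lambda j: -w[j])
                best = min(best, late_work(johnson(list(E)) + [x] + tail))
    return best
```

For instance, on the two-job instance p1 = [3, 3], p2 = [3, 1], w = [1, 10] with d = 4, the sketch finds the optimum Yw = 5, attained by scheduling the heavy job first.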

4. List Scheduling Method

In practice, the exact methods presented in the previous sections can be applied only to small problem instances, because of their exponential complexities. To obtain good solutions for instances of larger size in acceptable time, heuristic algorithms have to be applied. Within this research, we compare the exact methods with Johnson's algorithm and a list scheduling method.


Johnson's algorithm (JA), designed for problem F2 | | Cmax (Johnson, 1954), can be used as a fast heuristic for the problem under consideration. It runs in the O(n log n) time necessary for sorting the two subsets of jobs with p1j < p2j and p1j ≥ p2j according to non-decreasing p1j values and non-increasing p2j values, respectively. Since all early jobs are sequenced in Johnson's order in an optimal solution for F2 | dj = d | Yw, Johnson's algorithm may construct feasible schedules of rather good quality for this problem.

The list approach is a technique commonly used in the scheduling area, especially for practical applications (Haupt, 1989). It constructs a feasible solution by scheduling jobs on available machines, one by one, in a certain order determined by a given priority dispatching rule. The constructive procedure presented in this paper adds particular jobs, one by one, to a set of executed jobs (E). All jobs from this set are scheduled on the machines in Johnson's order. At each step of the method, a new job is selected from the set of the remaining (available) jobs A = J \ E according to a certain priority dispatching rule and it is added to E. Then, set E is rescheduled in Johnson's order. In consequence of adding a new job, the set of early jobs may change and the criterion value may be improved. Some jobs, which were early so far, can be shifted to the right by a newly added job that precedes them in Johnson's order. The solution returned by the heuristic is the best solution obtained over the particular sets of early jobs E for which the partial schedule length exceeds the common due date d.

In order to improve the efficiency of the method, an additional optimization step is performed when the algorithm has constructed the first feasible schedule. When Johnson's schedule for a certain set of executed jobs E exceeds the common due date d for the first time, the newly added job is removed. Then, similar steps are performed for the remaining jobs from the set of available jobs A, i.e. they are introduced one by one into the set of executed activities E, in order to create a new solution, and then removed before the next job is taken into consideration.


Table 1. Definitions of static (S) and dynamic (D) priority dispatching rules

We have proposed 15 rules for selecting jobs from the set of available jobs A (cf. Table 1). Some of them determine the sequence of adding the jobs to a schedule once, at the beginning of the algorithm (static rules), while others arrange the jobs from set A at each step of the method with regard to the length of the partial schedule obtained so far (dynamic rules). Because a selection rule (i.e. a priority dispatching rule) influences the execution order of jobs, applying different strategies makes it possible to construct different feasible solutions of the problem under consideration. In consequence, the list scheduling algorithm can be used for constructing a set of feasible solutions of different quality in a reasonable amount of time. The general framework of the list scheduling approach can be formulated as follows.

set E = ∅ and A = J, where E and A denote the sets of executed and available jobs, respectively;
set R = ∅, where R denotes the set of feasible schedules constructed;
set S = ∅, where S denotes the partial schedule obtained so far;
for a static selection rule, sort A according to this rule, obtaining the sequence in which the jobs are analyzed;
while A ≠ ∅ do
    for a dynamic selection rule, update the job parameters (xj or zj) with regard to S;
    take the job Ĵ1 from A according to the selection rule;
    E = E ∪ {Ĵ1}; A = A \ {Ĵ1};
    schedule E by Johnson's algorithm, obtaining solution S;
    if the schedule length of S exceeds d, then R = R ∪ {S};
    if S is the first feasible solution, then
        E = E \ {Ĵ1};
        for each Jx ∈ A \ {Ĵ1} do
            E = E ∪ {Jx};
            schedule E by Johnson's algorithm, obtaining solution S;
            if the schedule length of S exceeds d, then R = R ∪ {S};
            E = E \ {Jx};
        E = E ∪ {Ĵ1};
select the best solution from R.

The presented method performs n iterations, adding every job to a partial schedule. Moreover, for each set of executed jobs E, the algorithm constructs Johnson's schedule in O(n log n) time. If a static job selection rule is applied, then all activities are sorted once, at the beginning of the algorithm, in O(n log n) time. In the case of a dynamic selection rule, the selection order is determined at each iteration in O(n) time, necessary for updating the job parameters (zj or xj) and selecting the job with the minimal/maximal value of the parameter being the subject of a certain priority dispatching rule (zj or xj). Consequently, the algorithm runs in O(n^2 log n) time. The additional optimization step executed after obtaining the first feasible solution requires O(n^2 log n) time too, but because it is performed only once, it does not increase the overall complexity of the whole approach.
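Under two simplifying assumptions — a single static weight-based rule (heaviest jobs first, a stand-in for the weight-oriented rules of Table 1) and no extra optimization step after the first feasible schedule — the framework might be sketched as:

```python
def list_schedule(p1, p2, w, d):
    """List scheduling sketch for F2 | dj = d | Yw: add jobs one by one to
    the executed set E, reschedule E in Johnson's order, and whenever the
    partial schedule exceeds d, complete it with the remaining jobs and
    record the criterion value. Simplified; helper names are illustrative."""
    order = sorted(range(len(p1)), key=lambda j: -w[j])   # static priority rule

    def johnson(js):
        a = sorted((j for j in js if p1[j] < p2[j]), key=lambda j: p1[j])
        b = sorted((j for j in js if p1[j] >= p2[j]), key=lambda j: -p2[j])
        return a + b

    def evaluate(seq):
        """Return (schedule length, weighted late work) of permutation seq."""
        c1 = c2 = yw = 0
        for j in seq:
            c1 += p1[j]
            c2 = max(c1, c2) + p2[j]
            yw += w[j] * (min(max(0, c1 - d), p1[j]) + min(max(0, c2 - d), p2[j]))
        return c2, yw

    E, best = [], None
    for j in order:
        E.append(j)
        part = johnson(E)
        if evaluate(part)[0] > d:                 # E no longer fits before d
            full = part + [k for k in order if k not in E]
            yw = evaluate(full)[1]
            best = yw if best is None else min(best, yw)
    # if every job fits before d, the Johnson schedule of all jobs has Yw = 0
    return best if best is not None else 0
```

On the instance p1 = [2, 3], p2 = [2, 1], w = [1, 2] with d = 4, the sketch returns Yw = 4.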

5. Computational Results

The computational experiments are devoted to checking the time efficiency of the particular methods described in the paper for solving problem F2 | dj = d | Yw and to comparing the quality of the solutions generated by them. Intuitively, the dynamic programming approach with its pseudo-polynomial time complexity should be more time efficient than an enumerative method of exponential time complexity (obviously, the quality of the solutions determined by these two exact methods must be the same). On the other hand, the performance of the list scheduling approach and of Johnson's algorithm applied as heuristics for F2 | dj = d | Yw is difficult to predict. Similarly, the best selection rule (priority dispatching rule) cannot be determined in advance based only on a theoretical analysis.

All proposed algorithms have been implemented in ANSI C++ and run on an AMD Duron Morgan 1 GHz PC (Kowalski, 2003). The test results obtained for the exact algorithms support the correctness proof for the dynamic programming method (Błażewicz et al., 2004b) with numerical examples. DP and the enumerative method can be applied only to small problem instances. During the experiments, we analyzed 25 small instances of 5 different sizes with the number of jobs n = 10, 12, 14, 16, 18. The task processing times were generated from the range [1, 10], while job weights were taken from the range [1, 5]. All instances contain an equal number of jobs with p1j < p2j and p1j ≥ p2j. The common due date value was set to 30% of the mean machine load (i.e. to 30% of half of the total processing time of all jobs).

Table 2. Average running times [μs] and standard deviation of the dynamic programming (DP) and enumerative (EM) methods, Johnson's algorithm (JA) and the list algorithm with the S1 rule for different numbers of jobs n

The running times of all methods, i.e. the dynamic programming approach (DP), the enumerative method (EM), Johnson's algorithm (JA) applied as a heuristic for F2 | dj = d | Yw and, finally, the list scheduling algorithm with the static selection rule S1 (the time differences between particular selection rules are negligibly small), are compared in Table 2 and in Figure 2. Surprisingly, the pseudo-polynomial time method appeared to be less efficient than the enumerative one. Obviously, DP is important from a theoretical viewpoint, because it made it possible to classify problem F2 | dj = d | Yw as NP-hard in the ordinary sense, but the enumerative method seems to be a better approach for constructing exact solutions for the problem. The time complexity of DP depends mostly on the due date value (O(n^2 d^4)), while the efficiency of the enumerative method is mainly influenced by the number of jobs (O(n 2^n)). The DP method is more time-oriented because, looking for an optimal solution, it


considers job sequences with regard to particular time moments between zero and the common due date. In contrast, the EM method operates with sets of early jobs and sequences of late ones. Because the sequence of early jobs is determined by Johnson's order, the enumerative method does not need to generate permutations of all jobs. The special structure of the problem under consideration makes this approach more efficient than the DP one. Obviously, both exact algorithms required much more computational time than the heuristic ones.

Figure 2. Average running times [μs] of the dynamic programming (DP) and enumerative (EM) methods, Johnson's algorithm (JA) and the list algorithm with the S1 rule for different numbers of jobs n

The test results show that enumerative methods cannot be rejected from consideration a priori, even if pseudo-polynomial approaches are available. Taking into account the specific structure of the problem analyzed, we constructed an exact method which was relatively easy to implement and which delivers reference solutions for heuristic approaches in a shorter time than DP. However, the EM algorithm appeared to be less stable than the latter one. The running time of EM depends strongly on the particular problem instance, not only on its size. The longest running time was detected for an instance with 16 jobs, although instances with 18 jobs were considered as well. The structure of the dynamic programming method makes it less sensitive to the instance parameters in comparison to the enumerative one.

In order to investigate the influence of the problem parameters on the exact methods more carefully, we performed additional experiments for a selected instance with 10 jobs, changing the due date value from 10% to 90% of half of the total processing time of all jobs, cf. Table 3. As one can expect, the due date value does not influence the running times of the heuristic approaches, Johnson's method and the list scheduling algorithm, whose time


complexity and real efficiency depend only on the number of jobs analyzed. In contrast, the influence of the due date on the efficiency of the exact approaches is very strong.

Table 3. Running times [μs] of the dynamic programming (DP) and enumerative (EM) methods, Johnson's algorithm (JA) and the list algorithm with the S1 rule for different due date values d (as a percentage of half of the total processing time of all jobs)

The running times of the dynamic programming method increase significantly with the due date value (cf. Table 3 and Figure 3). This algorithm explores the solution space in O(n^2 d^4) time by considering particular time moments between zero and d. Thus, the due date value is the main factor in the complexity function, responsible for its pseudo-polynomial status.

Figure 3. Average running times [μs] of the dynamic programming method (DP) for different due date values d (as a percentage of half of the total processing time of all jobs)

Although the complexity function of the enumerative approach depends on the number of jobs (O(n 2^n)), its running time depends strongly on the problem parameters. Looking for an optimal solution, the enumerative method explicitly generates only a small number of all possible schedules for n jobs. Moreover, the part of the solution space explicitly explored, and, in consequence, the EM running time, decreases for very small and very large due date values (with regard to the total processing time of all jobs, cf. Table 3 and Figure 4). If the due date value is small, then only a few jobs can be executed before d and the number of possible feasible schedules is small. These feasible schedules are generated by the enumerative method in a short time (the sequence of activities executed after d is irrelevant). Similarly, if the due date value is large, then most jobs are executed early in Johnson's order, and only a few jobs can be late.

Figure 4. Average running times [μs] of the enumerative method (EM) for different due date values d (as a percentage of half of the total processing time of all jobs)


The running times required by the exact approaches are usually unacceptable from the practical point of view. To solve a hard problem efficiently, one has to forgo optimal solutions and apply heuristic methods. The test results show a significant difference between the running times of exact and approximate approaches (cf. Tables 2 and 3). However, heuristic methods are evaluated not only from the running time point of view. The quality of the solutions constructed by such algorithms is a very important factor in their analysis. Because the exact methods can be applied only to small instances, the comparison of the heuristic solutions obtained for different priority dispatching rules with optimal schedules was possible only for a small set of test examples (cf. Table 4, Columns 1-3). For bigger instances, only the relative efficiency of the list algorithm for particular priority dispatching rules could be estimated with regard to the best heuristic solution (cf. Table 4, Columns 4-6). We analyzed 25 instances of 5 different sizes for n = 50, 100, 150, 200, 250, with the processing time and weight ranges set to [1, 20], and the due date value determined as 50% of the mean machine load. In both cases (i.e. small and big instances), the priority dispatching rule performance is expressed as a percentage of the optimal or maximal weighted early work obtained by the considered rule with regard to the exact or best heuristic solution, respectively.

Table 4. Ranking of priority dispatching rules based on the solution quality compared to the optimal criterion value for small instances (Columns 1-3) and to the best heuristic solution for big instances (Columns 4-6) – the average rule efficiency and the standard deviation

The computational experiments show that the list scheduling algorithm constructs heuristic solutions of very high quality (cf. Figure 5). The best priority dispatching rule (S1), selecting jobs according to non-decreasing weights, constructed solutions with almost 98% of the optimal criterion value. Moreover, the standard deviation was the lowest for this selection strategy among all implemented ones. It has to be underlined that such a high performance is


obtained with incomparably small computational effort with regard to the exact algorithms (cf. Table 2).

Figure 5. Ranking of priority dispatching rules for small instances based on the solution quality compared to the optimal criterion value

As we expected, the most effective priority dispatching rules selected jobs with regard to their weights (S1) and, additionally, to their processing times on both machines (S2) or on only one machine (S6, S7). These parameters – weights and processing times – are the most important ones because, in an optimal solution, the weighted early work has to be maximized, and they are involved in the process of determining the criterion value. All the static rules mentioned above ensure a solution quality close to 90% of the optimum. Surprisingly, the dynamic selection rules taking into account the job weights and the partial schedule structure appeared to be less efficient (i.e. rule D4, which determines the current late work of particular jobs, as well as rule D2, which calculates the ratio between the job processing time and the gap between the partial schedule length and the common due date). The test results show that simple static rules are more efficient than the more time-consuming dynamic ones.
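The advantage of weight-driven rules over a purely Cmax-driven ordering can be seen on a tiny hand-made instance (all data and names below are illustrative, not taken from the experiments): Johnson's sequence finishes the heavy job late, while a heaviest-first sequence keeps it early.

```python
def evaluate_yw(seq, p1, p2, w, d):
    """Weighted late work of a permutation schedule (illustrative helper)."""
    c1 = c2 = yw = 0
    for j in seq:
        c1 += p1[j]
        c2 = max(c1, c2) + p2[j]
        yw += w[j] * (min(max(0, c1 - d), p1[j]) + min(max(0, c2 - d), p2[j]))
    return yw

# J0 is light (w = 1), J1 is heavy (w = 10); common due date d = 4
p1, p2, w, d = [3, 3], [3, 1], [1, 10], 4

yw_johnson = evaluate_yw([0, 1], p1, p2, w, d)  # Johnson's order: J0 before J1
yw_weight = evaluate_yw([1, 0], p1, p2, w, d)   # heaviest job first
```

Here yw_johnson = 32 while yw_weight = 5: ordering by weight saves the heavy job's late work at the cost of the light one, even though Johnson's order is optimal for the schedule length.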

Nevertheless, these dynamic rules involving job weights constructed better schedules than any other rule not taking job weights into account. When job weights are ignored, dynamic rules made it possible to construct solutions of higher quality than static ones (D1, D3, D5, D6 dominated S3, S4, S8 and S9).

Johnson's algorithm appeared to be a less efficient heuristic for the problem under consideration. This algorithm constructs schedules within an extremely short time (cf. Table 2), but their quality is almost 30% worse than that of the schedules determined by the list scheduling approach (for the best selection rule). The test results confirmed that a strategy devoted to schedule length minimization cannot be efficient for minimizing the weighted late work. However, taking into account the extremely short


running time, the performance of about 70% of the optimum might be enough for some specific applications.

As we have mentioned, heuristic solutions were compared to the optimal ones only for small problem instances. In the computational experiments performed for a larger number of jobs, only the relative performance of particular priority dispatching rules could be validated with regard to the most efficient selection rule. However, the rule ranking obtained based on the comparison with the optimal and with the best heuristic solution is almost identical (cf. Figure 5 and Figure 6). The best solutions were always generated by selecting jobs according to their weights (S1). The static selection rules involving weights appeared to be more efficient than the dynamic rules. In addition, the latter ones were less stable, which is reflected in the higher values of the standard deviation. The test results confirmed once more the observation that designing more complicated rules, requiring more computational effort, may not be rewarded with better solution quality.

Figure 6. Ranking of priority dispatching rules for big instances based on the solution quality compared to the best heuristic solution

Similarly as for the small instances, Johnson's algorithm generated schedules of the poorest quality; however, for a larger number of jobs its performance was closer to that of the worst priority dispatching rule. Taking into account the huge difference in running times (cf. Table 5), this algorithm could be competitive with the list scheduling one (if the selection rule is not properly chosen).

Table 5. Average running times [μs] and standard deviation of the list scheduling algorithm with the S1 rule and of Johnson's algorithm (JA), and the minimal and maximal running times of the list algorithm over all selection rules and the difference between them and S1

The running times of the list scheduling algorithm do not vary much between particular priority dispatching rules (see Table 5). We compared the running time of the best selection


rule (S1) to the minimal and maximal running times for particular instance sizes (the average value over 5 instances with the same number of jobs was calculated). The S1 rule was about 7% slower on average than the fastest one, and it was 11% faster on average than the slowest one (the dynamic selection rules). Since a priority dispatching rule only determines the sequence of analyzing jobs and does not affect the number of steps performed by the algorithm, its influence on the running time is rather inconspicuous.

Figure 7. Average running times [μs] of the list scheduling algorithm with the S1 rule and the minimal (MIN) and maximal (MAX) running times of the list algorithm for different numbers of jobs n

The relation between the running times and the number of jobs perfectly illustrates the complexity functions of both heuristic methods, the list scheduling algorithm and Johnson's one. In the first case, the parabolic function reflects O(n^2 log n), cf. Figure 7; in the latter, the nearly linear function corresponds to O(n log n), cf. Figure 8.

Figure 8. Average running times [μs] of Johnson's algorithm for different numbers of jobs n

Investigating the features of the list scheduling algorithm applied to problem F2 | dj = d | Yw, we have also studied the influence of different problem parameters on its behavior (a similar analysis has been done for Johnson's method as well). The running times of the heuristics depend mostly on the number of jobs n. As one could predict from the structure of the algorithms, their time efficiency is influenced neither by the common due date value (cf. Table 3), nor by the maximum weight value (cf. Table 7), nor by the percentage of jobs with the first task shorter than the second one (cf. Table 9). However, the quality of the solutions obtained does depend on the values of the problem parameters.

Table 6. Solution quality for Johnson's algorithm (JA), the worst (S8) and the best (S1) priority dispatching rule, and the average rule efficiency for different due date values d (as a percentage of half of the total processing time of all jobs)


For large due date values, all heuristic methods determine solutions of a similar quality (cf. Table 6 and Figure 9). If the value of d increases (getting closer to half of the total processing time of all jobs), then almost all jobs can be executed early and only a few of them are executed late. In consequence, the difference between Johnson's algorithm and the list scheduling one becomes less visible. Moreover, since almost all jobs are early, the sequence of their analysis (i.e. the sequence of adding jobs to the set of executed jobs) does not influence the final solution too much. Therefore, the selection rule hardly affects the performance of the list algorithm. For the largest due date values, the schedules generated by the worst rule (S8), the best rule (S1) and Johnson's algorithm are almost identical. For tight due date values, the differences in the solution quality are much bigger, because only a few jobs can be processed early and their selection affects the criterion value significantly.

Figure 9. Solution quality for Johnson's algorithm (JA), the worst (S8) and the best (S1) priority dispatching rule, and the average rule efficiency for different due date values d (as a percentage of half of the total processing time of all jobs)

Similar computational experiments were performed in order to investigate the influence of the maximum job weight on the heuristic methods. 25 instances with 5 different values of the maximum weight wmax = 2, 5, 10, 20, 50 were analyzed for 100 jobs; the due date value was set to 50% of the mean machine load and the processing times were taken from the range [1, 10].

Table 7. Running times for Johnson's algorithm (JA), the best rule (S1), the second rule in the ranking (S2) and the worst rule (S8) for different values of the maximum job weight wmax

As we have already mentioned, the maximum job weight does not influence the running times of the heuristic methods (cf. Table 7), but it affects the solution quality obtained for particular priority dispatching rules (cf. Table 8 and Figure 10).

Table 8. Solution quality for Johnson's algorithm (JA), the worst (S8) and the second best (S2) priority dispatching rule, and the average rule efficiency for different values of the maximum job weight wmax

Increasing the value of wmax does not affect the efficiency of the selection rules involving job weights, but it widens the gap between the rules selecting jobs with regard to wj and the rules not taking wj into account. In all tests, the priority dispatching rule analyzing jobs in non-increasing order of wj (S1) generated the best results. The second rule in the ranking, S2, based on the weighted job processing time, constructed schedules that are about 5% worse, and its performance did not depend on wmax. However, the solution quality of the worst rule, which applies Johnson's order (S8), deteriorates as wmax increases. A similar effect was observed for Johnson's algorithm and for the average performance of the list algorithm (only 6 rules among 15 involve job weights). In the case of a weighted performance measure, the larger the weights, the more important it is to take them into account when scheduling jobs on the machines.

Figure 10. Solution quality for Johnson's algorithm (JA), the worst (S8) and the second best (S2) priority dispatching rule, and the average rule efficiency for different values of the maximum job weight wmax

Finally, we checked the influence of the structure of the job set on the heuristic methods, changing the percentage of jobs whose first task is shorter than the second one. The experiments were performed for 15 different instances with 10%, 50% and 90% of such jobs, for 100 jobs with the due date set to 50% of the mean machine load and the processing times and weights taken from the range [1, 10].

Table 9. Running times for Johnson's algorithm (JA), the best rule (S1), the second rule in the ranking (S2) and the worst rule (S8) for different percentages of jobs with the first task shorter than the second one

The structure of the job set does not affect the running time of Johnson's algorithm, and its influence on the time requirements of the list scheduling method is rather small: a slight increase of the running time was observed for instances consisting mostly of jobs with the first task shorter than the second one (cf. Table 9). This effect is caused by an additional optimization done at the implementation level; theoretically, the job set structure should not influence the method's efficiency. In the implementation, when a job newly added to the set of executed jobs succeeds all early jobs, the list method does not run Johnson's algorithm to construct a new partial schedule for the new set of selected jobs. When an instance contains many jobs with a shorter first task, a newly added job often has to be executed before the common due date, preceding jobs which were early in the previous solution. In consequence, Johnson's algorithm has to be executed for the set of selected jobs more frequently, which increases the running time of the list method.

Table 10. Solution quality for Johnson's algorithm (JA), the worst (S8) and the second best (S2) priority dispatching rule, and the average rule efficiency for different percentages of jobs with the first task shorter than the second one

Comparing the solution quality obtained for different structures of the job set (cf. Table 10 and Figure 11), one could conclude that the heuristic methods slightly prefer instances containing jobs of the same type. But bearing in mind that the solution quality was compared to the best heuristic schedule, the test results showed, in fact, that the analyzed heuristics construct similar schedules when the job set is dominated by one type of jobs. In this case, the solution quality is comparable for the particular priority dispatching rules, and they achieve a performance closer to that of the best selection rule. For a more heterogeneous job set, there exist more distinct schedules with regard to the criterion value, and the distance between good and bad priority dispatching rules increases.

Figure 11. Solution quality for Johnson's algorithm (JA), the worst (S8) and the second best (S2) priority dispatching rule, and the average rule efficiency for different percentages of jobs with the first task shorter than the second one
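The implementation-level shortcut described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: it assumes the selected set is kept sorted by a Johnson sort key, and the function names are illustrative.

```python
def johnson_key(p1, p2):
    """Sort key realizing Johnson's order for two machines: jobs with p1 <= p2
    come first in non-decreasing p1, then jobs with p1 > p2 in non-increasing p2."""
    return (0, p1) if p1 <= p2 else (1, -p2)

def add_job(schedule, job):
    """schedule: list of (p1, p2) pairs already in Johnson's order.
    Append the newly selected job; re-sequence only when the cheap check fails."""
    if not schedule or johnson_key(*schedule[-1]) <= johnson_key(*job):
        schedule.append(job)   # cheap path: the new job succeeds all early jobs
    else:
        schedule.append(job)   # expensive path: the new job precedes early jobs,
        schedule.sort(key=lambda j: johnson_key(*j))  # so re-run Johnson's ordering
    return schedule
```

Jobs with a short first task tend to receive small `johnson_key` values and therefore trigger the expensive branch more often, which is consistent with the running time increase reported above.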


4. Conclusions

In this paper, we have compared three different solution methods for problem F2 | dj = d | Yw, which is known to be binary NP-hard: a dynamic programming approach, an enumerative method and a list scheduling algorithm. Moreover, we tested Johnson's algorithm, devoted to problem F2 | | Cmax, as a heuristic for the problem under consideration.

The computational experiments showed that intuitive assumptions on the time efficiency and solution quality of different methods are not always confirmed in practice. Based on the complexity functions, one could suppose that the pseudopolynomial time dynamic programming method would be more efficient than the enumerative algorithm. The test results disclosed that the enumerative method is more efficient than the DP one, owing to some special features of the problem under consideration. Looking for an optimal solution by analyzing job sequences, as in the enumerative approach, appeared to be more efficient than searching for the best schedule from a time-oriented point of view, as in the DP algorithm, which investigates time moments between zero and the common due date. Taking into account the computational experiments performed within this research, when a binary NP-hard problem is investigated, an enumerative method should not be rejected from consideration a priori as being less efficient than a pseudopolynomial one. Our study confirms that the simplest solutions are not always the worst ones. Of course, the dynamic programming method remains very important from a theoretical point of view, because its existence made it possible to classify problem F2 | dj = d | Yw as binary NP-hard. But, for determining optimal solutions, the enumerative method is the better choice in this case.

Computational experiments confirmed the well-known fact that exact approaches can only be applied to problem instances of small size. In order to find good feasible solutions in reasonable time, we proposed a list scheduling algorithm. This technique is commonly used in scheduling theory, especially in practical applications, first of all because of its easy


implementation. The test results supported the general opinion on the high utility of the list scheduling method: it constructed solutions of good quality in a reasonable running time. Since we proposed a number of static and dynamic priority dispatching rules, it was possible to perform extensive computational experiments in order to determine the influence of the selection rule on the solution quality and to detect the influence of the problem parameters on the method's behavior.

Moreover, we checked the efficiency of Johnson's algorithm, originally dedicated to schedule length minimization, as a heuristic approach to weighted late work minimization. This idea was inspired by a special feature of the problem under consideration, according to which all early jobs have to be sequenced in Johnson's order. The quite high performance of Johnson's algorithm for problem F2 | dj = d | Yw showed that searching for similarities between scheduling problems may be advantageous: existing methods can act as heuristics for similar problems.

Within future research, we are designing metaheuristic approaches for problem F2 | dj = d | Yw. The exact methods presented in this paper will be used for validating the solution quality on small problem instances, while the list algorithm will be applied as a method of generating initial schedules for the metaheuristics. Since we have proposed a set of different priority dispatching rules, the list method can generate initial solutions of different quality, which makes it possible to start the solution space exploration from distinct points. Moreover, one can obtain a set of various feasible solutions, which may be used as an initial population for a population-based metaheuristic. Additionally, the schedules constructed by the list scheduling algorithm can serve as reference solutions for validating the metaheuristics' performance on large problem instances.
Acknowledgment

We would like to express our appreciation to Krzysztof Kowalski for implementing the methods proposed and performing the computational experiments designed within this research.


References

Błażewicz, J. (1984). Scheduling preemptible tasks on parallel processors with information loss. Recherche Technique et Science Informatiques, 3(6), 415-420.

Błażewicz, J., Ecker, K., Pesch, E., Schmidt, G., & Węglarz, J. (2001). Scheduling computer and manufacturing processes. Berlin, Heidelberg, New York: Springer-Verlag.

Błażewicz, J., & Finke, G. (1987). Minimizing mean weighted execution time loss on identical and uniform processors. Information Processing Letters, 24, 259-263.

Błażewicz, J., Pesch, E., Sterna, M., & Werner, F. (2000). Total late work criteria for shop scheduling problems. In: K. Inderfurth, G. Schwödiauer, W. Domschke, F. Juhnke, P. Kleinschmidt, & G. Wäscher (Eds.), Operations Research Proceedings 1999 (pp. 354-359). Berlin: Springer-Verlag.

Błażewicz, J., Pesch, E., Sterna, M., & Werner, F. (2003). Revenue management in a job shop: a dynamic programming approach (pp. 1-21). Preprint Nr. 40/03, Magdeburg: Otto-von-Guericke-University.

Błażewicz, J., Pesch, E., Sterna, M., & Werner, F. (2004a). Open shop scheduling problems with late work criteria. Discrete Applied Mathematics, 134, 1-24.

Błażewicz, J., Pesch, E., Sterna, M., & Werner, F. (2004b). The two-machine flow-shop problem with weighted late work criterion and common due date. European Journal of Operational Research, to appear.

Brucker, P. (1998). Scheduling algorithms. Berlin, Heidelberg, New York: Springer-Verlag.


Garey, M.R., & Johnson, D.S. (1979). Computers and intractability. San Francisco: W.H. Freeman and Co.

Haupt, R. (1989). A survey of priority rule based scheduling. OR Spectrum, 11, 3-16.

Ho, K., Leung, J.Y.T., & Wei, W.D. (1994). Minimizing maximum weighted error for imprecise computation tasks. Journal of Algorithms, 16, 431-452.

Johnson, S.M. (1954). Optimal two- and three-stage production schedules. Naval Research Logistics Quarterly, 1, 61-68.

Kowalski, K. (2003). Scheduling algorithms for the shop scheduling problems with the late work criteria. Master thesis. Poznań University of Technology.

Potts, C.N., & Van Wassenhove, L.N. (1991a). Single machine scheduling to minimize total late work. Operations Research, 40(3), 586-595.

Potts, C.N., & Van Wassenhove, L.N. (1991b). Approximation algorithms for scheduling a single machine to minimize total late work. Operations Research Letters, 11, 261-266.

Shih, W.K., Liu, J.W.S., & Chung, J.Y. (1991). Algorithms for scheduling imprecise computations with timing constraints. SIAM Journal on Computing, 20, 537-552.

Sterna, M. (2000). Problems and algorithms in non-classical shop scheduling. Ph.D. thesis. Poznań: Scientific Publishers of the Polish Academy of Sciences.


Figure 1. An illustrative instance of problem F2 | dj = d | Yw (a Gantt chart of four jobs J1-J4 on machines M1 and M2; job J1 is fully early, Y1 = 0, while the late parts Y2, Y3 and Y4 are executed after the common due date d)
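To make the criterion concrete, the following sketch computes the weighted late work of a fixed job sequence on two machines. The job data are hypothetical and not taken from the figure; the function name is illustrative.

```python
def weighted_late_work(jobs, sequence, d):
    """jobs: dict name -> (p1, p2, w); sequence: processing order; d: common due date.
    The late work of task Tij is the part of the task executed after d,
    i.e. min(p_ij, max(0, C_ij - d)); the criterion Yw sums wj * Yj over all jobs."""
    t1 = t2 = 0  # completion times of the last scheduled task on M1 and M2
    total = 0
    for name in sequence:
        p1, p2, w = jobs[name]
        t1 = t1 + p1            # C1j: M1 processes its tasks back to back
        t2 = max(t1, t2) + p2   # C2j: M2 waits for M1 and for its previous task
        yj = min(p1, max(0, t1 - d)) + min(p2, max(0, t2 - d))
        total += w * yj
    return total

# Hypothetical instance: three jobs (p1j, p2j, wj) and a common due date d = 6.
jobs = {"J1": (2, 3, 1), "J2": (4, 1, 2), "J3": (1, 2, 3)}
print(weighted_late_work(jobs, ["J1", "J2", "J3"], d=6))  # -> 11
```

Here J1 is fully early, one unit of T22 and the whole of J3's late parts fall after d = 6, giving Yw = 2·1 + 3·3 = 11.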


Notation  Priority Dispatching Rule (PDR) - selection rule
S1        max{wj} for Jj ∈ A
S2        max{wj (p1j + p2j)} for Jj ∈ A
S3        min{p1j} for Jj ∈ A
S4        max{p1j} for Jj ∈ A
S5        min{p1j + p2j} for Jj ∈ A
S6        max{wj p1j} for Jj ∈ A
S7        max{wj p2j} for Jj ∈ A
S8        job selection in Johnson's order
S9        random job selection
D1        min{xj} for Jj ∈ A, where xj = p1j/(d - T1) + p2j/(d - T2) and Ti denotes the partial schedule length on machine Mi
D2        min{wj xj} for Jj ∈ A
D3        max{xj} for Jj ∈ A
D4        max{wj xj} for Jj ∈ A
D5        max{zj} for Jj ∈ A, where zj = max{0, d - T1 - p1j} + max{0, d - T2 - p2j}
D6        min{zj} for Jj ∈ A

Table 1. Definitions of static (S) and dynamic (D) priority dispatching rules
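Both the stand-alone heuristic JA and rule S8 rely on Johnson's order, so a short sketch of Johnson's algorithm for F2 | | Cmax may help; representing jobs as (p1j, p2j) pairs is an illustrative assumption.

```python
def johnson(jobs):
    """Johnson's algorithm for F2 || Cmax.
    jobs: list of (p1, p2) processing-time pairs.
    Jobs with p1 <= p2 are scheduled first in non-decreasing order of p1,
    followed by the remaining jobs in non-increasing order of p2."""
    first = sorted([j for j in jobs if j[0] <= j[1]], key=lambda j: j[0])
    last = sorted([j for j in jobs if j[0] > j[1]], key=lambda j: j[1], reverse=True)
    return first + last

def makespan(sequence):
    """Cmax of a permutation schedule on two machines."""
    t1 = t2 = 0
    for p1, p2 in sequence:
        t1 += p1               # M1 works without idle time
        t2 = max(t1, t2) + p2  # M2 waits for M1 if necessary
    return t2
```

For the three hypothetical jobs (3, 4), (5, 2) and (1, 6), `johnson` yields the sequence (1, 6), (3, 4), (5, 2) with a makespan of 13.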


n     DP time [μs]               EM time [μs]               JA time [μs]     S1 time [μs]
      avg.        st. dev.       avg.       st. dev.        avg.  st. dev.   avg.   st. dev.
10    451 782     191 576        1 939      492             11    0.50       323    20
12    1 081 864   326 380        20 357     22 592          11    0.50       448    24
14    2 821 465   844 267        127 221    100 223         13    0.00       614    26
16    4 101 604   1 666 507     7 928 479   14 959 867      14    0.50       758    51
18    12 445 751  3 489 505     6 205 041   5 564 582       15    0.58       1 039  39

Table 2. Average running times [μs] and standard deviations of the dynamic programming (DP) and enumerative (EM) methods, Johnson's algorithm (JA) and the list algorithm with the S1 rule for different numbers of jobs n



Figure 2. Average running times [μs] of the dynamic programming (DP) and enumerative (EM) methods, Johnson's algorithm (JA) and the list algorithm with the S1 rule for different numbers of jobs n


d [%]   DP [μs]      EM [μs]   JA [μs]   S1 [μs]
10      31 580       63        12        340
20      238 335      138       11        343
30      648 829      547       12        344
40      1 791 391    3 654     12        347
50      3 490 018    7 289     12        359
60      5 512 625    8 845     11        358
70      8 790 002    9 311     12        341
80      14 079 739   4 841     11        333
90      20 049 948   1 991     11        336

Table 3. Running times [μs] of the dynamic programming (DP) and enumerative (EM) methods, Johnson's algorithm (JA) and the list algorithm with the S1 rule for different due date values d (as a percentage of the half of the total processing time of all jobs)



Figure 3. Average running times [μs] of the dynamic programming method (DP) for different due date values d (as a percentage of the half of the total processing time of all jobs)



Figure 4. Average running times [μs] of the enumerative method (EM) for different due date values d (as a percentage of the half of the total processing time of all jobs)


Ranking I (vs. optimal value)              Ranking II (vs. best heuristic)
PDR   average       standard               PDR   average       standard
      perform. [%]  deviation [%]                perform. [%]  deviation [%]
S1    97.66         2.80                   S1    100.00        0.00
S2    92.40         5.40                   S2    93.98         1.85
S6    91.55         5.80                   S6    92.19         2.10
S7    91.03         5.10                   S7    88.36         2.66
D4    85.15         6.70                   D2    75.23         4.27
D2    83.25         8.10                   D6    73.89         4.36
D1    83.23         7.70                   D1    73.64         4.01
D6    81.85         8.10                   D4    73.45         4.28
S5    81.64         8.10                   D3    73.37         4.28
D5    81.25         8.80                   S5    72.84         3.73
D3    80.44         6.80                   D5    72.76         3.24
S4    78.68         9.10                   S4    72.63         5.05
S9    78.41         6.80                   S3    71.44         3.65
S3    76.17         8.30                   S9    71.18         4.26
S8    73.87         6.90                   S8    70.43         3.95
JA    69.57         8.40                   JA    70.06         3.86

Table 4. Ranking of priority dispatching rules based on the solution quality compared to the optimal criterion value for small instances (Ranking I) and to the best heuristic solution for big instances (Ranking II) - the average rule efficiency and the standard deviation


Figure 5. Ranking of priority dispatching rules for small instances based on the solution quality compared to the optimal criterion value


Figure 6. Ranking of priority dispatching rules for big instances based on the solution quality compared to the best heuristic solution


n     MIN time [μs]         S1 time [μs]           MAX time [μs]          JA time [μs]
      avg.      vs. S1      avg.      st. dev.     avg.      vs. S1       avg.  st. dev.
50    6 990     4.87%       7 331     207.22       8 254     12.59%       36    3.71
100   27 366    6.98%       29 275    409.64       32 848    12.20%       72    0.55
150   93 531    7.37%       100 422   3 186.13     109 346   8.89%        100   3.81
200   168 984   6.89%       180 624   2 921.68     199 472   10.44%       132   3.56
250   269 500   7.59%       289 967   4 734.13     321 586   10.90%       165   4.67

Table 5. Average running times [μs] and standard deviation of the list scheduling algorithm with the S1 rule and of Johnson's algorithm (JA), together with the minimal (MIN) and maximal (MAX) running times of the list algorithm over all selection rules and their relative difference to S1



Figure 7. Average running times [μs] of the list scheduling algorithm with the S1 rule and the minimal (MIN) and maximal (MAX) running times of the list algorithm for different numbers of jobs n



Figure 8. Average running times [μs] of Johnson's algorithm for different numbers of jobs n


d [%]     JA [%]    S8 [%]    avg. [%]   S1 [%]
10        70.00     70.00     83.13      100.00
20        75.76     75.76     83.08      95.96
30        73.61     81.94     85.42      95.14
40        73.71     77.84     83.02      93.30
50        76.50     76.92     84.27      92.31
60        81.85     81.85     86.53      89.96
70        86.62     86.62     90.76      94.01
80        91.48     91.48     94.80      97.05
90        96.28     96.28     96.40      96.28
avg.      80.65     82.08     87.49      94.89
st.dev.   9.02      8.23      5.21       2.92

Table 6. Solution quality for Johnson's algorithm (JA), the worst (S8) and the best (S1) priority dispatching rule, and the average rule efficiency for different due date values d (as a percentage of the half of the total processing time of all jobs)



Figure 9. Solution quality for Johnson's algorithm (JA), the worst (S8) and the best (S1) priority dispatching rule, and the average rule efficiency for different due date values d (as a percentage of the half of the total processing time of all jobs)


wmax   JA time [μs]       S1 time [μs]         S2 time [μs]         S8 time [μs]
       avg.  st. dev.     avg.    st. dev.     avg.    st. dev.     avg.    st. dev.
2      69    4            29 467  727          29 741  529          29 934  572
5      71    1            28 483  636          29 136  343          29 377  290
10     72    1            29 033  288          29 420  218          29 777  212
20     72    1            28 529  284          29 201  171          29 747  369
50     68    4            29 111  772          29 534  752          29 857  677

Table 7. Running times [μs] for Johnson's algorithm (JA), the best rule (S1), the second rule in the ranking (S2) and the worst rule (S8) for different values of the maximum job weight wmax


wmax   JA [%]            S8 [%]            avg. [%]          S2 [%]
       avg.   st. dev.   avg.   st. dev.   avg.   st. dev.   avg.   st. dev.
2      79.11  2.73       80.03  2.75       82.79  4.12       94.07  0.72
5      71.02  2.72       72.34  2.34       78.76  7.13       94.13  1.60
10     72.93  4.05       74.67  4.22       79.84  6.64       94.10  1.18
20     67.48  2.95       68.09  3.26       75.23  7.84       92.40  2.24
50     68.29  2.90       68.90  2.93       75.31  9.12       94.97  1.88

Table 8. Solution quality for Johnson's algorithm (JA), the worst (S8) and the second best (S2) priority dispatching rule, and the average rule efficiency for different values of the maximum job weight wmax



Figure 10. Solution quality for Johnson’s algorithm (JA), the worst (S8), the second best (S2) priority dispatching rule and the average rule efficiency for different values of the maximum job weight wmax


p1i

Similar Documents

Free Essay

Zara

...and scheduling problems. This article has been cited by an increasing number of researchers in their work on algorithmic aspects of sequencing and scheduling. Chen (1999) surveys research on efficient on-line scheduling algorithms and their competitiveness with off-line algorithms, where on-line algorithms differ fundamentally from off-line ones: In on-line problems, partial solutions are required before all problem data are available and these data are made gradually available over time. Decisions based on partial solutions are irrevocable due to the passage of time. In another article (Chen, 2000a) published in the Encyclopaedia of Mathematics, both experience-based approach and sciencebased approach have been described and commented. One of the algorithms strikes an excellent balance between efficiency and effectiveness and has been used as a touchstone for more involved approaches. At a more basic level, several smaller articles (Chen, 2000b) to be published in The Informed Student Guide to the Management Sciences provide beginners in management sciences with basic knowledge of quantitative methods and analysis. All of the aforementioned articles have served as a knowledge base for the project. All scheduling problems this project studied are inherently intractable, i.e., it is extremely unlikely that these problems can be solved to optimality in their general form in practice. An efficient algorithm is given in (Du, Han & Chen, 1997) for solving a scheduling problem with...

Words: 1013 - Pages: 5

Premium Essay

Chapter 15 Questions

...1. Scheduling involves the timing of operations to achieve the efficient movement of units through a system. The overall objective of scheduling is faster movement of goods and services through a facility, better customer service and dependable delivery. 2. The four criteria for determining the effectiveness of a scheduling decision are Minimize completion time Maximize utilization Minimize work in process Minimize customer waiting time 3. Loading means the assignment of jobs to work or procession centers. Operations managers assign jobs to work centers so that costs, idle time, or completion times are kept to a minimum. Loading work centers takes two forms. One is oriented to capacity; the second is related to assignment specific jobs to work centers. The two techniques used to loading jobs are: Gannt charts and the assignment method of linear programming. The four priority sequencing rules are: FCFS: first come, first served. The first job to arrive at a work center is processed first. SPT: shortest processing time. The shortet jobs are handled first and completed EDD: earliest due date. The job with the earliest due date is selected first. LPT. Longest processing time. The longer, bigger job are often very important and are selected first. 5) What are the advantages and disadvantages of the shortest processing time rule? First and foremost, priority rules provide a groundwork for how jobs should be worked. The rules are most commonly used for process...

Words: 1400 - Pages: 6

Free Essay

Custom Gear Case

...will be increase more in the next few years. 1.1 Organizational chart of the company [pic] 1.2 Duties and responsibilities The President of Custom Gear is Mr. Roger Rhodes, the founder of Custom Gear. His responsibilities are: a) Contacts with some of the large customers b) Arranges financing needed by company c) Sits in weekly production meeting d) Discussing on problems like scheduling, employee and also production. The Engineer is responsible on: a) Design company’s products b) Procurement and maintenance of equipment c) Oversee the supervisor/foreman d) Attends weekly production meetings e) Spend most time on factory floor The Expediter: a) Review work in progress in the shop b) Select orders which are behind schedules and ensure they are treated on rush basis c) Looking for past due raw materials and lost orders The work force consists of 50 highly skilled or semiskilled employees which are managed through family type approach. 2.0 Problem Being Faced By Custom Gear i. Accepting...

Words: 2012 - Pages: 9

Free Essay

J1Wbsjdxb

...Planning Master Production Scheduling Production-Planning and Control Systems 1 HOGWARTS SCHOOL OF IMPROVEMENT INITIATIVES PPC Capacity Planning, Aggregate Planning, Master Schedule, and ShortTerm Scheduling Capacity Planning 1. Facility Size 2. Equipment Procurement Long-term Aggregate Planning 1. Facility Utilization 2. Personnel needs 3. Subcontracting Master Schedule 1. MRP 2. Disaggregation of master plan Short-term Scheduling 1. Work center loading 2. Job sequencing Intermediate-term Intermediate-term Short-term 2 HOGWARTS SCHOOL OF IMPROVEMENT INITIATIVES PPC PRODUCTION PLANNING HIERARCHY Long-Range Capacity Planning Aggregate Planning Master Production Scheduling Production Planning and Control Systems Pond Draining Systems Push Systems Pull Systems Focusing on Bottlenecks 3 HOGWARTS SCHOOL OF IMPROVEMENT INITIATIVES PPC PRODUCTION PLANNING HORIZONS Long-Range Capacity Planning Long-Range (years) Aggregate Planning Medium-Range (6-18 months) Master Production Scheduling Short-Range (weeks) Very-Short-Range (hours - days) Production Planning and Control Systems Pond Draining Systems Push Systems Pull Systems Focusing on Bottlenecks 4 HOGWARTS SCHOOL OF IMPROVEMENT INITIATIVES PPC Production Planning: Units of Measure Long-Range Capacity Planning Entire Product Line Aggregate Planning Product Family Master Production Scheduling Specific Product Model...

Words: 4529 - Pages: 19

Free Essay

Sequence the Work and Execute

...for the certification examinations. The CPIM certification is the recognized standard for individual assessment in the field of production and inventory management. The certification is designed to validate the candidate’s in-depth knowledge of a variety of subjects specific to the field. APICS has ensured that CPIM exams are consistently reliable and that the highest professional standards are used in developing and administering the program. The program consists of five examinations and the candidate must pass all five examinations to earn the CPIM designation. The examinations that make up the program are: • Basics of Supply Chain Management (BSCM)   • Master Planning of Resources (MPR)   • Detailed Scheduling and Planning (DSP)   • Execution and Control of Operations (ECO)   • Strategic Management of Resources (SMR) A CPIM Exam Content Manual is published annually by APICS. It is a key resource for anyone preparing for the APICS certification examinations. The manual addresses all five of the examinations by documenting the scope of the module, the content outline, the key terms, and primary and secondary references. The CPIM Exam Content Manual can be ordered directly from APICS. Click here for APICS contact information. The APICS Web site is www.apics.org. APICS is located in Alexandria, Virginia, USA, and the telephone numbers are: 1.800.444.2743 and 1.703.354.8851. The fax number is 703.354.8106.   Course Components ...

Words: 4514 - Pages: 19

Premium Essay

Finance and It Consultant

...1 PRODUCTION AND OPERATIONS MANAGEMENT Introduction Product. Production. Management. Production and Operations Management an Overview. Definition of Production Operations Management. Objectives of Production Management. Scope of Production Management. Benefits derived from efficient Production Management Department. Functions of Production Management. Types of Production Systems. Characteristics of production systems and Production cycle. INTRODUCTION The Subject of Production Management is studied under different Headings-such as Production Planning and control, Production and Inventory control, production and operations control and many more. What ever may be the title of the subject, the contents of the subject are more or less one and the same. Before we discuss about production management, let us discuss about product, production and management. This will give us a rough idea about production Management and with what a production manager has to deal with. 1.1. PRODUCT Though many authors define the product with Consumer orientation, it is better for us to deal with different angles, because it will be helpful for us to understand the subject of production and Operation Management. (i) For a Consumer: The product is a combination of or optimal mix of potential utilities. This is because every consumer expects some use or uses from the product. Hence he/she always identifies the product in terms of the uses. Say for example-Soap can be identified by complexion, cleanliness...

Words: 6419 - Pages: 26

Premium Essay

Process Mangemenr

...are obviously desirable, it is usually not possible for an operation to perform significantly better than the competition in more than one or two. The five key decisions in process management are: I. Process Choice II. Vertical Integration III. Resource Flexibility IV. Customer Involvement V. Capital Intensity These decisions are critical to the success of any organization and must be based on determining the best was to support the competitive priorities of the enterprise. PROCESS CHOICE The first choice typically faced in process management is that of process choice. Manufacturing and service operations can be characterized as one of the following: 1. Project 2. Job Shop 3. Batch Flow 4. Line Flow 5. Continuous Flow The nature of these processes are discussed below and summarized in the manufacturing product-process matrix on page 8. Project Process. Examples of a project process are building a shopping center, planning a major event, running a political campaign, putting together a comprehensive training program, constructing a new hospital, doing management consulting work, or developing a new technology or product. A project process is characterized by a high degree of job customization, the large scope of each project, and the release of substantial resources, once a project is completed. A project process lies at the...

Words: 3683 - Pages: 15

Premium Essay

Service Marketing

...are obviously desirable, it is usually not possible for an operation to perform significantly better than the competition in more than one or two. The five key decisions in process management are: I. Process Choice II. Vertical Integration III. Resource Flexibility IV. Customer Involvement V. Capital Intensity These decisions are critical to the success of any organization and must be based on determining the best was to support the competitive priorities of the enterprise. PROCESS CHOICE The first choice typically faced in process management is that of process choice. Manufacturing and service operations can be characterized as one of the following: 1. Project 2. Job Shop 3. Batch Flow 4. Line Flow 5. Continuous Flow The nature of these processes are discussed below and summarized in the manufacturing product-process matrix on page 8. Project Process. Examples of a project process are building a shopping center, planning a major event, running a political campaign, putting together a comprehensive training program, constructing a new hospital, doing management consulting work, or developing a new technology or product. A project process is characterized by a high degree of job customization, the large scope of each project, and the release of substantial resources, once a project is completed. A project process lies at the...

Words: 3683 - Pages: 15

Premium Essay

The Goal

...Name: Thien Duong The Goal Assignment Question 1 Based on the definition of each type, we can realize that Alex Rogo’s plant does not follow the repetitive process, because his plant receives different kinds of orders to product different outputs. Furthermore, all orders do not follow the same sequence of tasks. For instance, there are only 80% of materials that come through the NCX-10 machine. In fact, they only use the repetitive process as the final process to assemble their finished products. In contrast, Alex Rogo ‘s plant is a mix of job shop and batch process. The orders of his plant are similar to job shops, in that have a high variety of inputs and tasks. Each job should be scheduled scientifically or the whole system will be inefficient. However, in my point of view, Alex Rojo still manages his plant more like a batch process, since his plant take moderate volume orders. Each batch moves from one work center to another work center. The employment and the equipment used are flexbile. In particularly, when Alex Rogo recognizes the bottleneck from the NCX-10 machine, he assigns more mechanics from other work centers to operate that NCX-10. He also takes advantage of old machines with the same function of NCX-10 in order increase the capacity at the bottleneck. In my opinion, Alex Rogo’s plant has two competitive advantages, which are quality and innovation. Alex Rogo’s plant focuses on the quality of the products they deliver. In particular, they operate the quality...

Words: 1927 - Pages: 8

Free Essay

Cellular Production

...waste, while providing a smooth workflow with minimal transport or delay. Additional benefits of cellular production include reduced work in progress, reduced space requirements, and improvements in quality and productivity (Stevenson, 2015, p. 256). The article, "Integrating Cell Formation with Cellular Layout and Operation Scheduling," investigates the design of a cellular system through two proposed mathematical models. The first model integrates the cellular layout (CL) problem with the cell formation (CF) problem to determine the configuration of machines and cells that minimizes movement cost. The second model integrates the CL and CF problems with cellular scheduling (CS) to minimize completion time in addition to movement cost (Arkat, Farahani, & Hosseini, 2011). The first model uses a sequential approach with two phases: first, the CF and CL problems are solved simultaneously; then the CS problem is solved as a job shop scheduling problem. The second model solves the CF, CL, and CS problems concurrently. The conclusion of these...

Words: 788 - Pages: 4
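The movement cost that the cell formation and layout models above minimize can be illustrated with a small sketch. The machine-to-cell assignment and part routings below are hypothetical, not taken from the article; the sketch only shows what "intercellular movement" counts.

```python
# Count intercellular moves: each time a part's routing sends it from a
# machine in one cell to a machine in another cell, a movement cost is
# incurred. Data below is illustrative only.

cell_of = {"M1": 0, "M2": 0, "M3": 1, "M4": 1}   # machine -> cell assignment
routings = {                                     # part -> machine sequence
    "P1": ["M1", "M2", "M3"],
    "P2": ["M3", "M4"],
    "P3": ["M1", "M4", "M2"],
}

def intercell_moves(routings, cell_of):
    """Count consecutive-machine transfers that cross a cell boundary."""
    moves = 0
    for seq in routings.values():
        for a, b in zip(seq, seq[1:]):
            if cell_of[a] != cell_of[b]:
                moves += 1
    return moves

print(intercell_moves(routings, cell_of))  # 3 boundary crossings
```

A cell formation model searches over alternative `cell_of` assignments to make this count (weighted by per-move cost) as small as possible.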

Free Essay

Operation Research & Methods

...Operational Research Models and Methods in CIM Abstract: Many models and methods of Operational Research can be adapted for industrial applications. In this chapter, we show, on the one hand, the main problems of a manufacturing system and, on the other hand, how they can be arranged in a hierarchical order derived from a CIM architecture (from strategic decisions down to production constraints). Then, we present an Operational Research tool for solving each of these problems. 1 Introduction Flexible Manufacturing Systems (FMS) are nowadays installed in the mechanical industry, especially in car factories. However, market constraints demand continuous improvement of the production system and of the whole production organization. The concepts developed by Taylor and first applied by Ford are progressively being abandoned and replaced by the Just-In-Time concept and the Computer Integrated Manufacturing (CIM) philosophy. One of the aims of the CIM philosophy is to provide an integrated information system that avoids rigid separations between the different functions of a complete production system. With such integrated information systems, the time lost between the customer order and the part delivery on the one hand, and between the product design and its manufacture on the other, will be drastically reduced. To understand the complete production system, it is relatively easy to find in the scientific literature excellent general books explaining the different aspects...

Words: 5165 - Pages: 21

Premium Essay

Sparkling Glass Limited

...Experience, socializing, and many other factors have been taken into consideration. I hope the analysis I have done addresses your concerns. Regards. Committee Member Executive Summary: Mr. Ashok, who is in charge of the General Shift at Sparkling Glass Limited, is going to retire in a few months. He has been responsible not only for the basic routine work but also for the Production Planning, Scheduling, and Costing work. Therefore, the company needs to appoint someone in his place. Four people have been taken into consideration for the post: 1. Khanna, 2. Panjabi, 3. Gupta, 4. Gulati. The analysis discusses which person fits the job best, evaluated on educational qualification, experience, knowledge of planning and management, and social qualities and relations with the unions. After the analysis, Mr. Gupta appears fit to be promoted because he is qualified as a glass technologist, manages the Production Planning, Costing, and Scheduling work, and has maintained an adequate relationship with the workers. TABLE OF CONTENT: 1. Situation Analysis (p. 5), 2. Problem Statement (p. 6), 3. Options Available (p. 6), 4. Criteria for Evaluation (p. 6), 5. Evaluation of Options (pp. 7-8), 6. Recommendation (p. 8), 7. Action Plan (p. 8). 1. Situation Analysis: The committee has examined all four candidates' resumes. While evaluating them, the members...

Words: 961 - Pages: 4

Premium Essay

Manufacturing Execution System

...(MES). Nowadays, MES is mainly integrated with Enterprise Resource Planning (ERP) within a Computer Integrated Manufacturing System (CIMS) to reduce the complexities encountered in the production environment. This paper analyses the various functions of an MES as used in present-day industry and how it addresses the responsibilities of the shop floor. Keywords: Manufacturing Execution System (MES), Enterprise Resource Planning (ERP), Computer Integrated Manufacturing System (CIMS), Shop floor 1. Introduction Manufacturing is best defined as the transformation of materials into items of greater value by means of one or more processing and/or assembly operations. A system that integrates all the operations of a manufacturing process, from processing the raw material to the completion of the final product, is called a Manufacturing System. A Manufacturing Execution System (MES) is a computerized control system used to monitor the real-time processes of a shop floor. It provides great flexibility in process planning and also helps reduce the complexities encountered on the shop floor. MES is used on a global level, connecting multiple factories of a particular industry located in...

Words: 3485 - Pages: 14

Premium Essay

Flashy Flashers

...Although the MRP system was working, the decision was made to find a system that would exceed expectations. Analysis: The analysis indicates that the MRP report shows a shortage. The shortages resulted from poor timing of the scheduled releases. By acting promptly, the scheduled receipts can be documented so that the imbalances are corrected; there is no indication whether the scheduled receipts could be expedited. FFI is also concerned that no formal plan is in place to ensure that the MRP is being used efficiently, and the shop supervisor complains that there is no certainty about scheduled receipts. The MRP system was put in place to alleviate these problems, in the knowledge that the system offers further advantages: for example, it allows scheduling the proper amount of staff according to demand. The analysis shows that FFI worked together in gathering information about these concerns to develop a strategy, and acknowledged that the company needs to take the MRP system a step further. Appendix 1 provides an overall view of the planned order release schedule and the MRP records for both the headlamps and the sidelamp. Conclusion: After further analysis, it is...

Words: 466 - Pages: 2
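The timing problem behind the shortages described above comes from how an MRP record offsets planned order releases by the lead time. A minimal sketch of that netting logic, using lot-for-lot ordering and entirely hypothetical data (the real FFI headlamp/sidelamp records are not reproduced in the excerpt):

```python
# Minimal MRP record calculation (lot-for-lot, hypothetical data).
# Net requirement = gross requirement not covered by projected on-hand
# inventory plus scheduled receipts; each planned receipt is released
# lead_time periods earlier.

def mrp_record(gross, scheduled_receipts, on_hand, lead_time):
    """Return (net_requirements, planned_receipts, planned_releases) per period."""
    periods = len(gross)
    net = [0] * periods
    planned_receipt = [0] * periods
    planned_release = [0] * periods
    projected = on_hand
    for t in range(periods):
        available = projected + scheduled_receipts[t]
        net[t] = max(0, gross[t] - available)
        if net[t] > 0:
            planned_receipt[t] = net[t]      # lot-for-lot: order exactly the shortfall
            release_period = t - lead_time   # offset by lead time
            if release_period >= 0:
                planned_release[release_period] = net[t]
        projected = available + planned_receipt[t] - gross[t]
    return net, planned_receipt, planned_release

# Hypothetical item: 4 periods, lead time of 1 period
net, receipts, releases = mrp_record(
    gross=[30, 0, 50, 20], scheduled_receipts=[10, 0, 0, 0],
    on_hand=25, lead_time=1)
print(net)       # shortfall per period
print(releases)  # orders released one period ahead of need
```

If a release period would fall before the current period (negative index), the order cannot be launched in time with the stated lead time, which is exactly the kind of imbalance the excerpt describes.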

Premium Essay

Applying Toc to Services

...Abstract: The Theory of Constraints (TOC) is about thinking in logical, systematic, structured processes similar to the PDCA learning loop, and about analyzing cause and effect, verifying basic assumptions, exploring alternatives, and improving processes. The goal of TOC is to maximize the efficiency of a process selectively at its most critical points (constraints) and thereby maximize profitability. The purpose of this paper is to provide a comprehensive descriptive study of applying Theory of Constraints principles to improve the effectiveness of the service process that limits the entire service system. Our study is based on Schmenner's classification of service organizations into the four quadrants of the service process matrix. Clear explanations supplemented by examples for each quadrant define how the theory works, why it works, what issues are resolved, and what benefits accrue. Introduction: Operations management of services is about how to plan, execute, and improve service delivery so that more services are delivered faster and more reliably with the same or similar resources, without compromising on quality. Services, in general, include public services, professional services, IT services, healthcare services, banking/insurance, etc. The Theory of Constraints, as proposed by Eli Goldratt, works well in a manufacturing setup, but replicating it in a service model requires adaptation more than adoption. It should be accepted that trying to force a service organization...

Words: 2659 - Pages: 11
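TOC's core observation, that system throughput is capped by its slowest station, can be sketched in a few lines. The station names and rates below are illustrative only, not drawn from the paper.

```python
# Sketch of TOC's first two focusing steps on a hypothetical serial line:
# (1) identify the constraint, (2) note that system throughput cannot
# exceed the constraint's rate, however fast the other stations run.

stations = {"cut": 120, "drill": 45, "assemble": 90, "pack": 150}  # units/hour

# Step 1: the constraint is the station with the lowest capacity.
bottleneck = min(stations, key=stations.get)

# Step 2: throughput of the whole line equals the bottleneck rate.
throughput = stations[bottleneck]

print(bottleneck, throughput)  # drill 45
```

Improving any non-bottleneck station leaves `throughput` unchanged, which is why TOC directs improvement effort selectively at the constraint.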