AN ABSTRACT OF THE DISSERTATION OF Nasser Salmasi for the degree of Doctor of Philosophy in Industrial Engineering presented on September 15, 2005. Title: Multi-Stage Group Scheduling Problems with Sequence Dependent Setups. Abstract approved: Redacted for privacy Rasaratnam Lodndran The challenges faced by manufacturing companies have forced them to become more efficient. Cellular manufacturing is a concept that has been accepted as a technique for increasing manufacturing productivity in batch type production by efficient grouping of parts (jobs) with some similarities in processing operations into groups and sequentially matching machine cell capabilities for performing these operations. In each cell, finding the best sequence of processing the assigned groups to the cell and the jobs in each group by considering some measure of effectiveness, improves the efficiency of production. In this research, it is assumed that n groups are assigned to a cell that has m machines. Each group includes bL jobs (i = 1, 2, ..., n). The set-up time of a group for each machine depends on the immediately preceding group that is processed on that machine (i.e., sequence dependent set-up time). The goal is to find the best sequence of processing jobs and groups by considering minimization of makespan or minimization of sum of the completion times. The proposed problems are proven to be NP-hard. Thus, three heuristic algorithms based on tabu search are developed to solve problems. Also, two different initial solution generators are developed to aid in the application of the tabu search-based algorithms. The lower bounding techniques are developed to evaluate the quality of solutions of the heuristic algorithms. For minimizing makespan, a lower bounding technique based on relaxing a few constraints of the mathematical model is developed. For minimizing sum of the completion times, a lower bounding approach based on Branch-and-Price (B&P) technique is developed. Because several versions of tabu search are used to solve the problem, to find the best heuristic algorithm, random test problems, ranging in size from small, medium, to large are created and solved by the heuristic algorithms. A detailed statistical experiment, based on nested split-plot design, is performed to find the best heuristic algorithm and the best initial solution generator. The results of the experiment show that the tabu search-based algorithms can provide good quality solutions for the problems with an average percentage error of 8.15%. ©Copyright by Nasser Salmasi September 15, 2005 All Rights Reserved Multi-Stage Group Scheduling Problems with Sequence Dependent Setups by Nasser Salmasi A DISSERTATION submitted to Oregon State University In part fulfillment of the requirement for the degree of Doctor of Philosophy Presented September 15, 2005 Commencement June 2006 Doctor of Philosophy dissertation of Nasser Salmasi Presented on September 15, 2005 Approved: Redacted for privacy Major Professor, representing j{ndustrial Engineering Redacted for privacy Head of the Department of Industrial and Mifacturing Engineering Redacted for privacy Dean of th'eJGduate School I understand that my dissertation will become part of the permanent collection of Oregon State University libraries. My signature below authorizes release of my dissertation to any reader upon request. Redacted for privacy Nasser Salmasi, Author ACKNOWLEDGEMENTS This dissertation is the result of my work as a PhD student at Oregon State University (OSU). It could not have been completed without the significant contribution of many people and organizations that have supported and encouraged me continuously during this period. I would like to acknowledge their contributions for helping me complete my PhD at OSU. First of all, I thank my major professor, Dr. Rasaratnam Logendran, for his excellent guidance and financial support while I was his research assistant on the NSF grant (Grant No. DMI-0010118). His mentorship was of great value while he patiently and generously spent time with me in our meetings throughout these years. I am grateful for his comments, inspiration, and encouragement, all of which led me to a deeper understanding of the topic of this dissertation. I specially thank the other members of my PhD committee. I am thankful to Dr. Jeff Arthur, my minor professor in the area of operations research, for his help with my research and his wonderful classes. Many thanks also to Dr. David Porter, my Minor professor in the area of information systems, for his advice and providing me with a laptop and sharing his research lab with me as a place to perform my experiments. I must also express my sincere gratitude to Dr. David S. Kim, the next member of my PhD committee, for his help with my research and for having me as his teaching assistant. I learned many things by working with him. Additionally, I am thankful to Dr. Saurabh Sethia for being my Graduate Council Representative (GCR) during the first two years of this research. I also thank Dr. Mark Pagell for being my GCR for the final defense. I wish to thank Dr. Dave Birkes and Mr. Raghavendran Dharmapuri Nagarajan (Rags) for helping me with the experimental design process. Many thanks to the department of Industrial and Manufacturing Engineering (IME) at Oregon State University (OSU), and all of its faculty and staff members for financially supporting me as a Graduate Teaching Assistant during these years. Many thanks to Bill Layton, for providing such a stable computer network that allowed me to worry less about our computer system. I also appreciate the help of my friends from all over the world. I am grateful for their support, prayers, encouragement, and advice during these challenging years. I am especially grateful for the friendship of Dr. Shakib Shaken, who always generously shared his valuable experience and inforniation with me on every possible matter. During my PhD program, I had a wonderful colleague, Cumhur Alper Gelogullari, whose help always led to identifying a short cut for solving problems. I must also step back and thank the faculty and staff of Sharif University of Technology in my home country, Iran, for raising my interest in operations research, while I earned my Bachelor's and Master's degrees in Industrial Engineering. It is clear that I would not have been able to travel down this path without the help, support, and encouragement of my lovely parents. Their encouragement has always been my greatest motivational force. There is no way to thank them enough for their efforts in helping me to earn my PhD degree. TABLE OF CONTENTS Page CHAPTER 1: INTRODUCTION ......................................................... 1 CHAPTER 2: LITERATUPE REVIEW ................................................. 4 2.1 Sequence Independent Job Scheduling (SIJS) ........................... 4 2.2. Sequence Dependent Job Scheduling (SDJS) ........................... 5 2.3. Sequence Independent Group Scheduling (SIGS) ......................... 6 2.4. Sequence Dependent Group Scheduling (SDGS) ........................ 8 CHAPTER 3: MOTIVATION AND PROBLEM STATEMENT ..................... 9 3.1 Motivation .................................................................... 9 3.2 Problem Statement ............................................................ 10 CHAPTER 4: MATHEMATICAL MODELS ........................................... 13 4.1 Models ........................................................................... 13 4.2 Complexity of Problems ...................................................... 16 4.3 Example .......................................................................... 17 CHAPTER 5: HEURISTIC ALGORITHM (TABU SEARCH) ........................ 19 TABLE OF CONTENTS (Continued) 5.1 Overview of Tabu Search . 19 5.2 Tabu Search Mechanism .................................................... 20 5.2.1 Forbidden Strategy ................................................... 20 5.2.2 Freeing Strategy ...................................................... 21 5.2.3 Short-Term and Long-Term Strategies ............................. 21 5.3 Initial Solution ............................................................... 24 5.3.1 Initial Solution Techniques for Minimization of Makespan Criterion................................................................ 24 5.3.1.1 Rank Order .................................................... 24 5.3.1.2 Applying the Result of Schaller et al. 'S (2000) Algorithm as an Initial Solution ........................... 25 5.3.1.2.1 Step 1. Applying CDS (Campbell-Dudek-Smith, 1970) Based Procedure to Find the Best Job Sequence for Groups ...................................... 25 5.3.1.2.2 Step 2. Applying NEH Based Procedure to Find the Best Group Sequence .................................. 26 5.3.2 Initial Solution Techniques for Minimization of the Sum of the Completion Times Criterion .................................. 27 5.3.2.1 Rank Order .................................................... 27 5.3.2.2 Relaxing the Problem to a Single Machine, SIGS Problem......................................................... 28 5.4 Generation of Neighborhood Solutions .................................... 28 5.5 Steps of Tabu Search ......................................................... 29 5.5.1 Step 1: Initial Solution ................................................ 29 TABLE OF CONTENTS (Continued) 5.5.2 Step 2: Evaluate the Objective Function Value of the Seed 30 5.5.3 Step 3: Inside Search ................................................ 30 5.5.3.1 Step 3.1: Find Inside Neighborhood Solutions ............ 30 5.5.3.2 Step 3.2: Evaluate the Inside Neighborhoods ............ 30 5.5.3.3 Step 3.3: Stopping Criteria .................................. 32 5.5.4 Step 4: Outside Search ............................................... 34 5.5.4.1 Step 4.1: Find outside Neighborhood Solutions .......... 34 5.5.4.2 Step 4.2: Evaluate the Objective Function Value of Outside Neighborhoods ..................................... 34 5.5.4.3 Step 4.3: Stopping Criteria .................................. 36 5.6 Two-Machine SDGS Problem with Minimization of Makespan Criterion...................................................................... 39 5.7 Applied Parameters for Proposed Research Problems.................... 40 5.7.1 Empirical Formulae for Two Machine Problems by Considering Minimization of Makespan Criterion ............... 40 5.7.2 Empirical formulae for Three Machine and Six Machine Problems by Considering Minimization of Makespan Criterion 40 5.7.3 Empirical Formulae for Two, Three and Six Machine Problems by Considering Minimization of Sum of the Completion Times Criterion................................................................. 41 5.8 Application of Tabu Search to an Example Problem by Considering Minimization of Makespan Criterion ..................................... 42 5.8.1 Step 1: Initial Solution ............................................... 43 TABLE OF CONTENTS (Continued) 5.8.2 Step 2: Evaluate the Objective Function Value of the Initial Solution................................................................ 43 5.8.3 Step 3: Perform Inside Search ....................................... 43 5.8.3.1 Step 3.1: Evaluate Inside Neighborhoods ................. 44 5.8.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside Search.............................................................. 45 5.8.3.3 Repeat the Cycle .............................................. 46 5.8.4 Step 4: Perform Outside Search ..................................... 46 5.8.4.1 Step 4.1: Evaluate Outside Neighborhoods ................ 46 5.8.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside Search.............................................................. 48 5.8.4.3: Repeat the Cycle ............................................. 48 5.9 Application of Tabu Search to an Example Problem by Considering Minimization of Sum of the Completion Times Criterion ............. 49 5.9.1 Step 1: Initial Solution .............................................. 49 5.9.2 Step 2: Evaluate the Objective Function Value of the Initial Solution................................................................... 49 5.9.3 Step 3: Perform Inside Search ...................................... 50 5.9.3.1 Step 3.1: Evaluate Inside Neighborhoods .................. 50 5.9.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside Search.............................................................. 52 5.9.3.3: Repeat the Cycle ............................................. 52 5.9.4 Step 4: Perform Outside Search...................................... 52 5.9.4.1 Step 4.1: Evaluate Outside Neighborhoods ................ 53 TABLE OF CONTENTS (Continued) 5.9.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside Search.............................................................. 55 5.9.4.3: Repeat the Cycle ................................................ 55 CHAPTER 6: LOWER BOUNDS ....................................................... 56 6.1 Lower Bounding Techniques for Minimization of Makespan ........... 56 6.1.1 Application of the Lower Bounding Technique to a Problem Instance.................................................................. 60 6.2 Lower Bounding Technique for Minimization of Sum of the Completion Times .......................................................... 60 6.2.1 Simplifying the Two-Machine Problem........................... 65 6.2.1.1 The Relaxing Rule for SP1 in the Two Machine Problem ......................................................... 67 6.2.1.2 The Relaxing Rule for SP2 in the Two-Machine Problem ........................................................ 69 6.2.2 Simplifying the Three-Machine Problem .......................... 70 6.2.2.1 The Relaxing Rule for SP1 in the Three-Machine Problem ........................................................... 73 6.2.2.2 The Relaxing Rule for SP2 in the Three-Machine Problem ........................................................... 73 6.2.2.3 The Relaxing Rule for SP3 in the Three-Machine Problem.......................................................... 75 6.2.3 A Generalized Model for Simplifying the Multiple-Machine Problems.............................................................. 75 TABLE OF CONTENTS (Continued) 6.2.3.1 The Relaxing Rule for SP1 in the Multiple-Machine Problem............................................................. 77 6.2.3.2 The Relaxing Rule for SP2 through SPmi in the Multiple Machine Problem ................................................ 77 6.2.3.3 The Relaxing Rule for SPm in the Multiple Machine Problem............................................................. 78 6.2.4 Adding an Auxiliary Constraint to Simplify Finding the Sequence of Dummy Jobs ................................................ 79 6.2.5 Solving Sub-Problems .................................................. 79 6.2.6 Branching ............................................................... 80 6.2.7 Stopping Criteria ...................................................... 82 6.2.8 The Software Application ............................................. 82 6.2.9 The Lower Bound for the Original Problem ........................ 82 6.2.lOExample ................................................................ 84 CHAPTER 7: EXPERIMENTAL DESIGN ........................................... 87 7.1 Steps of the Experiment ..................................................... 87 7.2 Test Problems Specifications ............................................... 93 7.3 Two Machine Test Problems ................................................ 95 7.4 Three Machine Test Problems ............................................... 96 7.5 Six Machine Test Problems .................................................. 101 TABLE OF CONTENTS (Continued) CHAPTER 8: RESULTS . 103 8.1 The Results for the Makespan Criterion .................................... 103 8.1.1 The Results of Two-Machine Problems by Considering Minimization of Makespan Criterion ................................ 103 8.1.1.1 Comparison among Heuristic Algorithms and Lower Bound............................................................... 104 8.1.1.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms for Two Machine Problems by Considering Minimization of Makespan Criterion ............ 109 8.1.1.3 The Comparison between the Best Tabu Search and the Results of Schaller et al. (2000) Algorithm .................... 113 8.1.2 The Results of Three-Machine Makespan Criterion ................ 113 8.1.2.1 Comparison among Heuristic Algorithms and Lower Bound for Three Machine Problems by Considering Minimization of Makespan ....................................... 114 8.1.2.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms for Three Machine Problems by Considering Minimization of Makespan ....................... 120 8.1.2.3 The Comparison Between the Best Tabu Search and the Results of Schaller et al. (2000) Algorithm ..................... 126 8.1.3 The Results of Six-Machine Makespan Criterion ................... 127 8.1.3.1 Comparison among Heuristic Algorithms and the Lower Bound............................................................... 127 8.1.3.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms for Six Machine Problems by Considering Minimization of Makespan ...................... 133 TABLE OF CONTENTS (Continued) 8.1.3.3 The Comparison Between the Best Tabu Search and the Results of Schaller et al. (2000) Algorithm for Six-Machine Problems by Considering Minimization of Makespan Criterion............................................................. 136 8.2 The Results for Minimization of Sum of the Completion Times Criterion ........................................................................ 137 8.2.1 The Results of Two-Machine Problems by Considering Minimization of Sum of the Completion Times Criterion ......... 137 8.2.1.1 Comparison among Heuristic Algorithms for Two Machine Problems by Considering Minimization of Sum of the Completion Times ........................................... 138 8.2.1.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms ............................................. 141 8.2.1.3 Evaluating the Quality of Solutions ........................... 144 8.2.2 The Results of Three-Machine Problems by Considering Minimization of Sum of the Completion Times Criterion ......... 145 8.2.2.1 Comparison among Heuristic Algorithms for Three Machine Problems by Considering Minimization of Sum of the Completion Times ............................................. 146 8.2.2.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms ....................................... 152 8.2.2.3 Evaluating the Quality of Solutions .......................... 158 8.2.3 The Results of Six-Machine Problems by Considering Minimization of Sum of the Completion Times Criterion ...... 159 8.2.3.1 Comparison among Heuristic Algorithms for Six Machine Problems by Considering Minimization of Sum of the Completion Times ............................... 159 TABLE OF CONTENTS (Continued) Page 8.2.3.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms by Considering Minimization of Sum of the Completion Times Criterion ................... 163 8.2.3.3 Evaluating the Quality of Solutions .......................... 167 CHAPTER 9: DISCUSSION ............................................................. 168 9.1 Analyzing the Results of Minimization of Makespan Criterion ...... 168 9.2 Analyzing the Results of Minimization of Sum of the Completion Times Criterion ............................................................. 170 CHPATER 10: CONCLUSIONS AND SUGGESTIONS FOR FUTHURE RESEARCH ................................................................ 172 10.1 Suggestions for Future Research ............................................ 174 10.1.1 Defining Related Research Problems ............................... 174 10.2 Applying New Techniques (tools) to Solve Proposed Problems ....... 176 BIBLIOAGRAPHY ........................................................................... 178 APPENDICES .............................................................................. 183 A The ANOVA and Test of Effect Slices Tables for the Result Chapter. .. 184 B The Percentage Errors for Schaller et al. (2000) Algorithm ................ 217 LIST OF FIGURES Figure Pge 3.1 The Scheduling Tree Diagram ................................................... 12 4.1 The Gantt chart of processing groups as well as jobs in rank order ... 17 5.1 Flow chart for outside search ................................................... 38 5.2 Flow chart for inside search ..................................................... 39 5.3 The Gantt chart of the initial solution ........................................... 43 5.4 The Gantt chart of the tabu search sequence ................................... 49 5.5 The Gantt chart of the initial solution ........................................... 50 5.6 The Gantt chart of the tabu search sequence ................................... 55 6.1 The Gantt chart of processing two different sequences ...................... 67 6.2 The branching rule flow chart .................................................... 81 6.3 The objective function value of nodes for an incomplete problem ......... 83 8.1 The normal probability plot of the experimental design of finding the best heuristic algorithm for two machine problem by considering minimization of makespan ................................................. 107 8.2 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for two machine problem by considering minimization of makespan ...................................... 111 8.3 The normal probability plot of the experimental design of finding the best heuristic algorithm for three machine problem by considering minimization of makespan ................................................. 119 8.4 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for three machine problem by considering minimization of makespan ................................... 125 8.5 The normal probability plot of the experimental design of finding the best heuristic algorithm for six machine problem by considering minimization of makespan ............................................... 131 LIST OF FIGURES Figure 8.6 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for six machine problem by considering minimization of makespan ................................. 135 8.7 The normal probability plot of the experimental design of finding the best heuristic algorithm for two machine problem by considering minimization of sum of the completion times .......................... 140 8.8 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for two machine problem by considering minimization of sum of the completion times criterion.. 143 8.9 The normal probability plot of the experimental design of finding the best heuristic algorithm for three machine problem by considering minimization of sum of the completion times ........................... 150 8.10 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for the three machine problem by considering minimization of sum of the completion times criterion 157 8.11 The normal probability plot of the experimental design of finding the best heuristic algorithm for six machine problem by considering minimization of sum of the completion times criterion ................ 162 8.12 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for six machine problem by considering minimization of sum of the completion times criterion.. 165 LIST OF TABLES Table Page 4.lThe run time ofjobs in groups ..................................................... 17 4.2 The set up times for groups ........................................................ 17 5.1 The outside search parameters for two machine problems with makespan criterion......................................................................... 40 5.2 The outside search parameters for three machine and six machine problems with makespan criterion ........................................... 41 5.3 The inside search parameters for three machine and six machine problems with makespan criterion .......................................... 41 5.4 The outside search parameters for two, three, and six machine problems with minimization of sum of the completion times criterion............. 42 5.5 The inside search parameters for two, three, and six machine problems with minimization of sum of the completion times criterion........... 42 5.6 The neighborhoods of the inside initial solution .............................. 44 5.7 The neighborhoods of the outside initial solution .............................. 47 5.8 The neighborhoods of the inside initial solution ............................... 50 5.9 The neighborhoods of the outside initial solution ............................. 53 6.1 The completion time ofjobs in Si and S2 ......................................... 68 6.2 The coefficient ofX1Jk's in SPs ................................................... 72 6.3 The coefficient ofX]k's in SPs .................................................... 6.4 The result of the first node ......................................................... 84 6.5 The branching coefficients OfASq,(j+J)l at the end of the first node .......... 85 6.6 The result of the second node ..................................................... 86 6.7 The result of the third node ........................................................ 86 7.1 The set-up time of each machine on two-machine problems ................. 94 LIST OF TABLES Table ____ 7.2 The set-up time of each machine on three-machine problems .................. 94 7.3 The set-up time of each machine on six-machine problems .................... 94 7.4 Small size problems based on group category (two machine) ..................... 95 7.5 Medium size problems based on group category (two machine) .............. 95 7.6 Large size problems based on group category (two machine) ................. 95 7.7 The specification of test problems generated for two machine problem ........................................................................ 96 7.8 Small group, small job size problems (three machine) ......................... 97 7.9 Small group, medium job size problems (three machine) ..................... 97 7.10 Small group, large job size problems (three machine) ......................... 97 7.11 Medium group, small job size problems (three machine) ..................... 97 7.12 Medium group, medium job size problems (three machine) .................. 97 7.13 Medium group, large job size problems (three machine) ...................... 97 7.14 Large group, small job size problems (three machine) ........................ 98 7.15 Large group, medium job size problems (three machine) ..................... 98 7.16 Large group, large job size problems (three machine) .......................... 98 7.17 The test problems generated for three machine problem ...................... 98 7.18 Small size problems based on group category (six machine) ................. 101 7.19 Medium size problems based on group category (six machine) ............. 101 7.20 Large size problems based on group category (six machine) .................. 101 7.21 The specification of generated test problems for six machine problem 102 8.1 The results of the experiments with test problems for two machine problems by considering minimization of makespan ................. 105 LIST OF TABLES Table Page 8.2 The ANOVA for two machine problem by considering minimization of makespan for algorithm comparison ...................................... 108 8.3 Test of effect slices for two machine problem by considering minimization of makespan for algorithm comparison ................................... 109 8.4 The time spent for the test problems of two machine problems (in seconds) by considering minimization of makespan criterion ..................... 109 8.5 The results of the experiments with test problems for three machine problems by considering minimization of makespan criterion .......... 114 8.6 The time spent for the test problems of three machine problems (in seconds) by considering minimization of makespan criterion ........... 121 8.7 The results of the experiments with test problems for six machine problems by considering minimization of makespan criterion ......... 128 8.8 The lower bound value of test problems for six machine problems by considering minimization of makespan criterion ......................... 130 8.9 The time spent for the test problems of six machine problems (in seconds) by considering minimization of makespan criterion ..................... 133 8.10 The results of the test problems for two machine problems by considering minimization of sum of the completion times ............... 138 8.11 The time spent for the test problems of two machine problems (in seconds) by considering minimization of sum of the completion times........................................................................... 141 8.12 The results of the lower bounding technique for two machine problems by considering minimization of sum of the completion times criterion .......................................................................... 145 8.13 The results of the test problems for three machine problems by considering minimization of sum of the completion times criterion... 146 8.14 The experimental cells of three machine problems by considering minimization of sum of the completion times criterion in which the heuristic algorithms do not have the same performance ................ 151 LIST OF TABLES Table 8.15 The time spent for the test problems of three machine problems (in seconds) by considering minimization of sum of the completion times criterion .................................................................. 152 8.16 The results of the lower bounding technique for three machine problems by considering minimization of sum of the completion times criterion 158 8.17 The heuristic algorithms results of the test problems for six machine problems by considering minimization of sum of the completion times criterion ...................................................................... 160 8.18 The experimental cells of six machine problems by considering minimization of sum of the completion times criterion in which the initial solution generators do not have the same performance .......... 163 8.19 The time spent for the test problems for six machine problems (in seconds) by considering minimization of sum of the completion times criterion ................................................................. 163 8.20 The experimental cells of six machine problems by considering minimization of sum of the completion times criterion in which the heuristic algorithms do not have the same time spent .................. 166 8.21 The result of the lower bounding technique for six machine problems by considering minimization of sum of the completion times criterion ......................................................................... 167 9.1 The results of test problems for minimization of makespan criterion ...... 169 9.2 The result of the most efficient initial solution generator by considering minimization of makespan criterion ........................................ 169 9.3 The results of test problems for minimization of sum of the completion times criterion ................................................................. 171 9.4 The percentage error of the test problems for minimization of sum of the completion times by removing problems with more than 50% percentage error ............................................................... 171 Multi-Stage Group Scheduling Problems with Sequence Dependent Setups CHAPTER 1: iNTRODUCTION The challenges faced by manufacturing companies have forced them to become more efficient and more flexible. In the 1970s, a method of manufacturing, called Cellular Manufacturing (CM), was developed. CM is a suitable approach to increase the productivity and flexibility of production in a manufacturing company that produces a variety of products in small batches. In CM, the parts are assigned to different groups based on their similarities in shape, material, or similar processing operations. The machines are also assigned to different cells in order to decompose the production line. The groups are then assigned to a particular cell, which includes several machines that have the ability to perform the necessary operations for groups. This decomposition of machines and jobs has several advantages such as significant reduction in set-up time, work-in-progress inventories, and simplified flow of parts and tools (Logendran, 2002). Sequencing and scheduling is a form of decision-making that plays a crucial role in manufacturing and service industries (Pinedo, 2002). They have been applied to improve the efficiency of production since the beginning of the last century. Thus, the next step for increasing the efficiency of production is finding the best sequence of processing the assigned groups to the cell as well as the jobs of a group in order to maximize or minimize some measure of effectiveness. This subject is called Group Scheduling. Two relevant objectives in the investigation of group scheduling problems, minimization of makespan and minimization of the sum of the completion times, are considered in this research. The goal of minimization of makespan is to minimize the completion time of the last job on the last machine. On the other hand, the goal of minimization of the sum of the completion times is to minimize the average completion time of all jobs. The purpose of both of these criteria is to deliver orders as quickly as possible to the customers. In general, the longer the jobs stay on the shop floor, the higher they cost the company. Suppose that a company receives a large order from a 2 customer to produce several different groups ofjobs. The efficient way of preparing the order is to compress (minimize) the completion time of the last job processed so that the entire order can be delivered to the customer as quickly as possible in one shipment. On the other hand, suppose that the company receives several orders from different customers, and that all of them have the same priority (weight) for the company. In this case, minimization of the sum of the completion times is appropriate to maximize the efficiency as it would indirectly minimize the work-in-progress inventories. In group scheduling problems, all jobs that belong to a group require similar set-up on machines. Thus, a major set-up is required for processing each group on every machine. The set-up operation of a group includes preparing the machine, bringing required tools, setting the required jigs and fixtures, inspecting the materials and cleanup (Allahverdi et al, 1999), which should be considered as a separate operation on machines for some problems rather than considering it as a part of processing time. The separable set-up time scheduling problems are divided into two major categories: sequence dependent, and sequence independent scheduling. If the set-up time of a group for each machine depends on the immediately preceding group that is processed on that machine, the problem is classified as "sequence dependent group scheduling," Otherwise, it is called "sequence independent group scheduling". The importance of sequence dependent set-up time scheduling problems has been discussed in several studies. Allahverdi et al. (1999) mentioned the results of a survey performed by Panwalker et al. (1973), in which 75% of the manufacturing managers mentioned that they had the experience of producing parts with sequence dependent set-up time specifications and almost 15% of them believe that all production operations belong to sequence dependent scheduling problems. Wortman (1992) explained the importance of considering sequence dependent set-up times for the effective management of manufacturing capacity. There are many real world applications of sequence dependent scheduling problems. Schaller et al. (2000) discussed an industry case of sequence dependent group scheduling problem in printed circuit boards (PCBs) in which the major set-up is required to switch from a group of PCBs to another. Painting industry is another example of such problems. Vakharia et al. (1995) and Schaller et al. (1997, 2000) present branch and bound approaches to solve the Sequence Dependent Group Scheduling problems (SDGS) with multiple machines by considering minimization of makespan. They also propose a fast heuristic algorithm to minimize the makespan for a SDGS problem by applying some of the existing scheduling heuristic algorithms to find a sequence for processing groups as well as jobs in a group. Because their algorithm does not consider the relationship between groups and job sequences, it may not provide a good quality solution. Considering the widespread practical applications of sequence dependent group scheduling in industry and the importance of minimizing the makespan and minimizing the sum of the completion times, developing a heuristic algorithm to solve these problems in a reasonable time with good quality can help to improve the efficiency of production. The industry needs an algorithm which can provide a sequence of processing groups as well as jobs with a good quality (optimal or near optimal) in a short time. Because the proposed research problems, as will be discussed in the chapters that follow, belong to NP-hard problems, the time required to solve the real world problems optimally by applying an exact algorithm is unreasonably high. Thus, a heuristic algorithm is necessary to get a solution close enough to the optimal solution in a reasonable time. In order to evaluate the performance of the heuristic algorithm, a lower bounding technique is also required to be applied as a yard stick. Thus, a lower bounding technique is developed for each criterion to estimate the quality of solutions. These lower bounds are created based on the mathematical models of the proposed research problems. 4 CHAPTER 2: LITERATURE REVIEW Basically, the flowshop scheduling problems can be classified as job scheduling problems and group scheduling problems. A group scheduling problem, as discussed earlier, occurs when assigned jobs to a cell, based on more similarities in process or shape are set into different groups and are processed in sequence. If the jobs are investigated independently, the problem is classified as a job scheduling problem. Because of some similarity between these two classes of problems, the related articles about both of them in sequence independent and sequence dependent set-up mode are investigated. Cheng et al. (2000) and Allahverdi et al. (1999) did a comprehensive literature review about job scheduling and group scheduling problems. The articles that are most related to the proposed research problems (minimization of makespan and minimization of the sum of the completion times) are presented in four different categories as follows: 2.1 Sequence Independent Job Scheduling (SIJS) Yoshida and Hitomi (1979) pioneered the investigation of SIJS problems. They considered the two machine flowshop problem and proposed an algorithm based on Johnson's (1954) rules to obtain the optimal solution for minimization of makespan. Lageweg et al. (1978) developed a general lower bound for the permutation flowshop problem by considering minimization of makespan. Bagga and Khurana (1986) developed a branch and bound algorithm for a two machine flow shop sequence independent job scheduling problem for minimizing the sum of the completion times. They also developed a lower bound for their problem. The proposed algorithm is applied to solve problems with 5 to 9 jobs. Proust et al. (1991) proposed three heuristic algorithms for minimizing the makespan. The first one is an extension of the CDS heuristic by Campbell, Dudek, and Smith (1970) for the standard flowshop scheduling problem. The second is a greedy procedure which augments an available partial schedule with the job that minimizes a lower bound. The third is constructed by incorporating the 2-job interchange neighborhood search in the above two heuristics for 5 the problem. All these algorithms were evaluated empirically. They also developed a branch and bound algorithm which can be used for small size problems. Gupta (1972) described dominance condition and an optimization algorithm to minimize makespan of SIJS. The proposed algorithm is applied to solve problems with 3 to 6 jobs and 4 to 6 machines. This algorithm can not be used to solve large size problems. Allahverdi (2000) addressed the two-machine SIJS problem by considering the mean-flow time criterion. He developed an algorithm to solve problems optimally up to 35 jobs in a reasonable time. 2.2 Sequence Dependent Job Scheduling (SDJS) Corwin and Esogbue (1974) considered the two-machine flow shop job scheduling problems where only one of the machines is characterized by sequence dependent setup times and proposed a dynamic programming approach to obtain the optimal solution for minimizing the makespan. Gupta and Darrow (1986) provided a few heuristic algorithms to find the minimum makespan for a two-machine SDJS problem. The result of the experiment shows that their heuristic algorithms have good performances for problems where set-up times are smaller than run time. Computational experiments are also performed to find out which proposed heuristic algorithm has the best performance in different size of problems. The results of the experiments reveal that for different size of problems, a different heuristic algorithm has a superior performance. Bellman et al. (1982) developed a dynamic programming model to optimally solve a flow shop scheduling problem with three machines where only one of the machines requires sequence dependent set-up time. Test problems up to 12 jobs are solved optimally by the proposed algorithm. Srikar and Ghosh (1986) proposed a mixed integer linear programming formulation for SDJS. The model is applied to solve several randomly generated instances of SDJS that included six machines and six jobs at most. Based on the results of the computational experiment the time taken to solve the problems was reasonable. Stafford and Tseng (1990) corrected a minor error in the Srikar-Ghosh formulation. The corrected model is used to solve problems with 5 machines and 7 jobs by considering minimization of the sum of the completion times criterion. The problem took about 6 CPU hours to be solved on a personal computer. Gupta et al. (1995) developed a branch and bound technique to find the minimum makespan for SDJS problems. They solved problems up to 20 jobs with the proposed algorithm. Rios- Mercado and Bard (1998) presented a branch-and-cut (B&C) algorithm for minimization of makespan of SDJS with m machines. The same authors (1999) presented a branch and bound algorithm which includes the implementation of both lower and upper bounding procedures, a dominance elimination criterion, and special feature such as a partial enumeration strategy for minimization of makespan of SDJS problems. Gupta (1988) proposed several heuristic algorithms to find the minimum makespan for SDJS problems. Simons (1992) developed four heuristics for this problem. The main idea of two of his heuristics is based on well known Vogel's approximation method in transportation problems. Parthasarathy and Rajendran (1997) proposed a heuristic algorithm based on simulated annealing to minimize the weighted tardiness of SDJS. 2.3 Sequence Independent Group Scheduling (SIGS) Ham et al. (1985) described a two-step procedure to solve SIGS problem optimally by considering minimization of makespan criterion. Baker (1990) generalized these results and provided a polynomial time optimization algorithm consisting of two steps for jobs and group sequences. Hitomi and Ham (1976) proposed a branch and bound procedure to find the minimum makespan of SIGS problems with multiple-machines. Extensions of their work are described in Ham et al. (1985). The procedure first creates a sequence of groups in the job sets, and then develops job sequences within each group. The proposed model is a family version of the heuristic by Petrov (1968). Logendran and Sriskandarajah (1993) addressed the blocking version of the problem with only separate set-up times, i.e., a finished job on the first machine will block the machine from being set-up for another job, until the set-up operation of the job starts on the second machine. They proposed a heuristic by ignoring the set-up times, and analyzed the worst-case performance of the heuristic. Campbell et al. (1970) presented a multiple- 7 pass heuristic for solving flowshop scheduling problems with three or more machines. Vakharia and Chang (1990) modified this heuristic for scheduling groups in a flowshop manufacturing cell. They also performed a computational experiment to compare this heuristic algorithm with a simulated annealing heuristic algorithm. The results show that the simulated annealing heuristic provides good quality solutions at reasonable computational expense. Skorin-Kapov and Vakharia (1993) developed a tabu search approach to minimize the completion time of SIGS. They performed a computational experiment to compare the performance of tabu search algorithm versus simulated annealing algorithm by Vakharia and Chang (1990). The results of their computational experiment reveal that the tabu search has a better performance by generating better solutions in less computational time. The computational experiment is performed with problems including 3 to 10 groups, 3 to 10 machines, and 3 to 10 jobs. The authors investigated six versions of tabu search. The experiment is performed with three different set-up times in which the distribution of set-up times was greater than the run times of jobs in all of them. Based on their computational experiments, LTM-max (tabu search by considering long term memory to intensify the search) has the best performance among fixed tabu-list size versions. Their experiments also show that variable tabu-list size versions provide better solutions than fixed size versions. Sridhar and Rajendran (1994) also developed a genetic algorithm for minimizing the makespan for SIGS. Helal and Rabelo (2004) classified the published heuristics for SIGS problems into three categories, single path, multiple pass, and iterative heuristics based on the complexity of the method. They also compare the performance of simulated annealing versus tabu search for some test problems in which the largest problem includes 8 groups, 8 jobs in a group, and 8 machines. The results show that the tabu search algorithm has a slightly better performance than simulated annealing by considering minimization of makespan criterion. Schaller (2000) performed a design of experiment to compare the performance of tabu search (developed by Skorin-Kapov and Vakharia, 1993) and genetic algorithm (Sridhar and Rajendran, 1994) and reported that the tabu search has a better performance. 2.4 Sequence Dependent Group Scheduling (SDGS) Jordan (1996) discussed the extension of a genetic algorithm to solve the two machine SDGS problem to minimize the weighted sum of earliness and tardiness penalties. Vakharia et al. (1995) and Schaller et al. (1997, 2000) present branch and bound approaches as well as several heuristics to solve the SDGS with multiple machines. The highlight of their research is published in a paper by Schaller et al. (2000). They propose a heuristic algorithm to minimize the makespan for a SDGS problem. In this algorithm, the sequence of groups and jobs that belong to a group are investigated independently. Finding a solution for the proposed problem requires two aspects: finding the sequence of jobs within each group, and finding the sequence of groups. While there is interaction between these two aspects, the authors assumed that these sequences can be developed independent of each other. They applied a few existing heuristic algorithms such as Campbell-Dudek-Smith (CDS) (1970) procedure to find the best sequence of jobs in a group. The sequence of groups is investigated by applying a few algorithms based on the procedure by Gupta and Darrow (1986), and Baker's (1990) scheduling algorithm. The authors also provide a lower bounding technique to evaluate the quality of their solutions by generalizing the machine based bound of traditional flowshop scheduling problems. Reddy and Narendran (2003) investigated the SDGS problems by considering dynamic conditions. The process time of jobs was assumed to have an exponential distribution. They also relaxed the assumption of availability of all jobs at the beginning of the scheduling. Simulation experiments are applied to find the best sequence of jobs and groups to minimize the tardiness as well as the number of tardy jobs. CHAPTER 3: MOTIVATION AND PROBLEM STATEMENT 3.1 Motivation Allahverdi et al. (1999) provide some explanations about the applications of sequence dependent scheduling problems. Panwalker et al. (1973) discovered that about 75% of the managers reported at least some operations they schedule require sequence dependent set-up times, while approximately 15% reported all operations requiring sequence dependent set-up times. Flynn (1987) determined that application of both sequence dependent set-up procedures and group technology principles increase output capacity in a cellular manufacturing shop, and Wortman (1992) explained the importance of considering sequence dependent set-up times for the effective management of manufacturing capacity. Ham et al. (1985) discussed about the importance of applying group technology (combining jobs into groups). They said "Development and implementation of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) in the manufacturing industry lead to more integrated applications of group technology concept. It has been recognized that group technology is an essential element of the foundation for successful development and implementation of CAD/CAM through application of the part-family concept based on some similarities between jobs. This approach creates a compatible, economic basis for evolution of computer automation in batch manufacturing through increased use of hierarchical computer control and multi station NC manufacturing systems." The above literature review reveals that while a considerable body of literature on sequence dependent and sequence independent job scheduling has been created, there still exist several potential areas worthy of further research on sequence dependent and sequence independent group scheduling (Cheng et al., 2000). Considering the widespread practical applications of sequence dependent group scheduling in industry (as discussed its necessity in above paragraphs), especially in hardware manufacturing, ILi and the importance of minimizing the makespan and minimizing the sum of the completion times, further research on these subjects is still required. 3.2 Problem Statement In this research, it is assumed that n groups (G1, G2, ..., G) are assigned to a cell that has m machines (M1, M2,..., Mm). Each group includes bjobs (i = 1, 2, ..., n). The set- up time of a group for each machine depends on the immediately preceding group that is processed on that machine (sequence dependent set-up time). The purpose of this research is to find the best sequence of processing jobs as well as groups by considering minimizing some measure of effectiveness. Two such measures include the minimization of makespan and minimization of the sum of the completion times. The assumptions made in this research are: All jobs and groups are processed in the same sequence on all machines (permutation scheduling). This is the only way of production in some industries. For instance, if a conveyer is used to transfer jobs among machines, then all jobs should be processed in the same sequence on all machines. All jobs in each group are available at the beginning of the schedule. This is commonly known as static job releases. It means that the flow time of a job is the same as its completion time on the last machine. All jobs and groups have the same importance (weight) for the company. All machines are available at the beginning of planning horizon. This problem belongs to static flowshop problems. Figure 3.1 shows the classification of all scheduling problems, including the proposed research problem. The size of the problems which are investigated during this research are as follows: Number of groups: Group scheduling problems including 2 to 16 groups are investigated. Based on the reviewed papers, the previous research has focused on problems with at most ten groups. 11 Number of jobs in a group: Problems including 2 to 10 jobs in a group are considered in this research. Based on the papers reviewed, the previous investigations have been limited to at most ten jobs in a group (Schaller et al., 2000). Number of machines in a cell: As discussed before, the goal of applying cellular manufacturing is to decompose the production activities and simplify them. Thus, if in a cell, too many machines are assigned, then the goal of applying cellular manufacturing is violated. Based on this fact, in many cases the number of machines in a cell does not exceed six. Thus, problems up to six machines in a cell are investigated in this research. Another assumption considered in this research is that in all cases the required set-up time for a group on a machine is considerably greater than the run time of jobs on machines. In many production lines the required set-up time of a machine is larger than the run time of individual jobs. It is clear that any one of the n groups of jobs in the current planning horizon can be preceded by the last group that was processed in the previous planning horizon. This "last group" is referred as the reference group 'R' in this research. The reference group is a group which was processed as the last group on a machine in the previous planning horizon. Thus the required set-up time of each group compared to the reference group should be considered to find the best sequence. 12 Scheduling Problems Static Dynamic Scheduling Scheduling Multi machine Machine Problems Problems Parallel Shop Machine Scheduling Problems Problems Job Shop Flowshop Problems Problems Job Group Scheduling Scheduling Sequence Sequence Sequence = Sequeiwe Independent Set- dependent Set-up Independent Set- dependent Set-up up time Problems time Problems up time Problems time Problems Figure 3.1 The Scheduling Tree Diagram 13 CHAPTER 4: MATHEMATICAL MODELS The mathematical programming models for minimization of makespan and minimization of the sum of the completion times criteria for a multi-stage (two or more machines) SDGS problem are demonstrated below. The models belong to Mixed Integer Linear Programming (MILP) models. 4.1 Models The parameters, decision variables, and the mathematical models are as follows: Parameters: a: Number of groups Numberofjobsingroupp m: Number of machines bmax: The maximum number of jobs in groups, max{b} p = 1,2,.. .,a N: Number of jobs in all groups N = p1,2,...,a tpjk. Run time ofjobj in group p on machine k j = 1,2,.. .,bmax k=1,2,...,m tpjk fFor Real Jobs; Run time of job . j in group p on machine k p= j 1,2,. . . 1,2,.. .,bmax For Dummy jobs, -M k = 1,2,. . . ,m The setup time for group 1 on machine k if group p is the p,l = 1,2,. ,a . . p1k. preceding group k 1,2,. . . ,m The summation of run times in group p on machine k Tk = 1pjk Decision Variables: i= 1,2,...,a th The completion time ofjobj in slot on machine k j = 1,2,.. .,bmax k=1,2,...,m Ii; Ifgrouppisassignedtosloti I= 0,1,2,...,a WIP tO; Otherwise p 0,1,2,.. .,a Yijq Ii; Ifjobqisprocessedafterjobjinsloti tO; Otherwise The completion time of th slot on machine k j i= I q 1,2,...,a j,q = 1,2,.. = 0,1,2,. . . ,a 14 The setup time for a group assigned to slot i on machine i= 1,2,...,a e'k. k 1; If group p is assigned to slot i and group 1 is i=0,1,2,...,a-1 AS ip(i + 1)1 = assigned to slot i +1 p0,1,2,...,a-1 1= 0; Otherwise pl 1,2,..,a Model Minimizez' = Cak (makespan objective function) (1.1) a bm Minimize z2 = (sum of the completion times objective '1 2 j1 j1 function) Subject to: = 1 p=1,2,...a (2) Wj =1 i1,2,...,a (3) p=1 =1 i0,1,2,...,a-1 (p 1) (4) p=O lI ASip(ll)l ASl(l)l Wip i0,1,2,..,a-1 p,l= 1,2,...,a pl (5) (6) Setk = >AS(i_1)pilSp1k p=O 1=1 i=1,2,...,a k=1,2,...m pl (7) = C(_I)l + Set1i + T1 i 0,1,2,3,...,a (8) a i = 1,2,.. .,a Xyk C(1_1)k+ Set ik+ >Wiptpjk p=1 j 1,2,...,bmax k'1,2,3...m (9) Xk XJ'k + MYW >Wiptjk i = 1,2,.. .,a (10) j,j'=1,2,...bmaxj<j' XjkXjk + M(l Wipt'.k k = 1,2,3...m (11) i r1,2,...,a j1,2,...,bmax Xk Xu(k_1) !;wip tpjk (12) k2,3...m Cjk = max {Xk} i=1,2,...,a k2,3...m (13) Xjk, CIk ,Setlk 0 Wj, AS1(j+l)l = 0,1 y=0,1 (/<j') The mathematical model for each of the two objective functions is a Mixed Integer Linear Programming (MILP) model. It is assumed that there exist slots for groups and each group should be assigned to one of them. In real world problems, groups have different number of jobs. Because each group can be assigned to any slot, to simplify creating the mathematical model, it is assumed that every group has the same number 15 of jobs, comprised of real and dummy jobs. This number is equal to bmax which is also the maximum number of real jobs in a group. If a group has fewer real jobs than bmax, the difference, i.e., bm number of real jobs, is assumed to be occupied by dummy jobs. The objective function can either be minimize the makespan of jobs on the last machine (1.1) or minimize the sum of the completion times of processing all jobs on the last machine (1.2). Based on the model, there are 'a' slots and each group should be assigned to one of them. It is clear that each slot should contain just one group and every group should be assigned to only one slot. Constraints (2) and (3) support this fact. The set-up time of a group on a machine is dependent on that group and the group processed immediately preceding it. Constraint (4) is included in the model to support this fact. If group p is assigned to slot i and group 1 is assigned to slot i+1, then AS(+J)l must be equal to one. Likewise, if group p is not assigned to slot i or group 1 is not assigned to slot i + 1, then AS1(l+J)l must be equal to zero. Constraints (5) and (6) ensure that each is true. Constraint (7) calculates the required set-up time of groups on machines. The required set-up time for a group on a machine is calculated based on the assigned group to the slot and the group assigned to the preceding slot. The completion time of the group assigned to a slot on the first machine is calculated in constraint (8). The completion time of a group assigned to a slot is equal to the summation of the completion time of the group assigned to the preceding slot, the required set-up time for the group of this slot, and the summation of run time of all jobs in the group. Constraint (9) is added to the model to find the completion time of jobs on machines. The completion time of a job that belongs to a group is greater than the summation of the completion time of the group processed in the previous slot, the set-up time for the group, and the run time of the job. 16 Constraints (10) and (11) are a kind of either/or constraints. They are added to the model to find the sequence of processing jobs that belong to a group. Ifjobj in a group is processed after jobj' of the same group, then the difference between the completion time ofjobj and jobj' on all machines should be greater than or equal to the run time of jobj. A machine can start processing a job only if it is finished on the previous machine. It means that the completion time of a job on a machine should be greater than or equal to the summation of the completion time of the job on the preceding machine plus the run time of the job on that machine. Constraint (12) is added to the model to support this fact. It is clear that the completion time of a group on a machine is equal to the completion time of the last job of the group which is processed by the machine. Constraint (13) is added to the model for this reason. These models can be used as a base to estimate the quality of heuristic algorithms. 4.2 Complexity of Problems Gupta and Darrow (1986) proved that the two machine sequence dependent job scheduling (SDJS) problem is a NP-hard problem. Garey et al. (1976) also proved that: the flowshop job scheduling problem by considering minimization of makespan criterion for more than two machines (m? 3) is an NP-hard problem. the flow-shop job scheduling problem by considering minimization of the sum of the completion times criterion with more than one machine (m? 2) belongs to NP- hard problems as well. Based on these insights, it is easy to see that the proposed problems in this research are easily reducible to the ones already proven NP-hard. Thus, the fact that the proposed problems are NP-hard, follows immediately. 17 4.3 Example An example is shown to demonstrate the problem. Suppose three groups including 3, 2, and 3 jobs in each are assigned to a cell with two machines to process. The required run time of each job on each machine are given in Table 4.1. For instance the first job of the first group (J11) has a run time of 3 on M1 and a run time of 4 on M2. Table 4.lThe run time ofjobs in groups G1 G2 G3 M1 M2 M1 M2 M1 M2 J11 3 4 4 3 J31 5 2 J12 2 5 J22 3 1 3 5 J13 2 1 J33 4 2 The set-up time of each group on each machine based on the preceding processed group is shown in Table 4.2. In this table R stands for the reference group. As explained before, the reference group is a group which was processed as the last group on the machines in the previous planning horizon. Table 4.2 The set up times for groups M1 M2 G22 G3 _ G1 1 G2 4 --- 3 G3 2 3 __3 G32 G1 G2 5 --- 4 G3 2 3 2 5 4 R 3 1 1 A possible schedule of processing groups as well as jobs is processing them according to their rank order: G1 (J11- J12- .113) - G2 (.121- .122) - G3 (J31- J32- J33). The Gantt-chart of this schedule is demonstrated in Figure 4.1. In this Gantt chart the set-up time of each group is shown by S,,k. M1 S011 'i .112 13 S121 J J22 S231 J31 J32 J33 M2 S012 'ii Ji2 .113 S121 J21 '22 S232 I I Figure 4.1 The Gantt chart of processing groups as well as jobs in rank order Based on this Gantt-chart, the completion time of each job on the last machine is as follows: J11: 9 J12: 14 J13: 15 J21: 23 J22: 24 J31: 29 J32: 34 J33: 36 The makespan of a schedule as discussed before is the completion time of the last job on the last machine. In this schedule, this is equal to 36. The sum of the completion times of a schedule is the summation of completion time of jobs on the last machine. Thus, the sum of the completion times of this schedule for this problem is equal to 184. If the problem is solved optimally by the mathematical models, the optimal solution for minimization of makespan is equal to 34 and for minimization of the sum of the completion times is equal to 165. 19 CHAPTER 5: HEURISTIC ALGORITHM (TABU SEARCH) Because the proposed research problems are NP-hard, the mathematical model cannot be applied to solve industry size problems in a reasonable time. It also requires a powerful computer and advanced linear programming software which may not be available in every company. The requirement of applying the proposed ideas (i.e., finding the best sequence of processing groups as well as jobs in a group to improve the efficiency) in industry is finding a technique that is capable of solving large size problems in a reasonable time. Thus, a heuristic algorithm should be developed for finding a solution close enough to the optimal solution of mathematical models in a short time. One category of these heuristic algorithms is diversification/intensification techniques. The most popular algorithms that belong to this category are tabu search, genetic algorithm, and simulated annealing. According to previous research (Skorin-Kapov and Vakharia (1993), Nowicki and Smutnicki (1996), Logendran and Sonthinen (1997)), Schaller (2000), and Helal and Rabelo (2004) tabu search has shown more promising performance than the others for scheduling problems. 5.1 Overview of Tabu Search The tabu search is a heuristic algorithm which is developed independently by Glover (1986) and Hansen (1986) for solving combinatorial optimization problems. The principles and mathematical description of this concept can be found in Glover (1989, 1990a, and 1990b), Laguna et al. (1991), Reeves (1993), Widmar and Hertz (1989), and Taillard (1990). This technique has been used to find a good quality solution for many scheduling problems. Finding a solution for the proposed research problems involves two levels. The first level investigates to find the best sequence of groups. During the first level, a sequence of groups is chosen. The second level investigates to find the sequence of jobs in each group based on the chosen group sequence by the first level. If the tabu search heuristic 20 is applied to solve proposed research problems, it should cover both levels. Thus, a two-level tabu search is developed to solve proposed research problems. In the first (outside) level, the best sequence of groups is investigated. When a sequence of groups by the outside level is chosen, the second (inside) level finds the best sequence of jobs that belong to each group by considering the desired measure of effectiveness. The solution is comprised of the sequence of groups and the sequence of jobs in each group that provides the best objective function value based on the chosen criterion. The tabu search method is used for the outside search to move from a group sequence to another one. This is done for the inside search by moving from a sequence of jobs in a group sequence to another sequence of jobs in the same group sequence. The relationship between the outside and inside search is that once the outside search is performed to get a new group sequence, the search process is switched to inside search. The inside search is performed to find the best sequence of jobs in groups by considering the proposed group sequence by outside search. When the inside search stopping criteria is satisfied, the best found job sequence is considered. Then the search returns back to the outside search to find a new group sequence. The outside search stops when the outside search stopping criteria are satisfied. The best found solution is reported as the final solution. 5.2 Tabu Search Mechanism Based on Glover (1989, 1990(b)), a simple tabu search algorithm consists of three main strategies: Forbidden strategy, Freeing Strategy, and Short term strategy. Pham and Karaboga (1998) provided a brief explanation about these strategies as follows: 5.2.1 Forbidden Strategy The forbidden strategy controls the entries to the tabu-list. It is mainly applied to avoid cycling problems by forbidding certain moves which are called tabu. It prevents the search return to a previously visited point. Ideally all previous visited points should be stored in tabu-list, but this needs too much memory and computational effort. Thus, this 21 is done just for a few very last moves by preventing the choice of moves that represent the reversal of any decision taken during a sequence of the last T iterations. These solutions are stored in the tabu-list. This leads the search to move progressively away from all solutions of the previous T iterations. T is called the "tabu-list length" or "tabu-list size". The probability of cycling depends on the value of T. If T is too small, the probability of cycling is too high. If it is too large then the search might be driven away from good solution regions before these regions are completely explored. During this process, an aspiration criterion is applied to make a move free if it is of good quality and can prevent cycling. While the aspiration criterion has a role of guiding the search, tabu restrictions have a role in constraining the search space. A solution is acceptable if the tabu restrictions are satisfied. However, a tabu solution is also assumed acceptable if an aspiration criterion applied regardless of the tabu status. The move attributes are recorded and used in tabu search to impose constraints that prevent moves from being chosen that would reverse the changes represented by these attributes. 5.2.2 Freeing Strategy The freeing strategy is used to decide what exits the tabu-list. The strategy deletes the tabu restrictions of the solutions so that they can be reconsidered in further steps of the search. The attributes of the tabu solution remain on the tabu-list for a duration of T iterations. A solution is considered admissible if its attributes are not tabu or if it passes the aspiration criterion test. 5.2.3 Short-Term and Long-Term Strategies The above are implemented using short-term and long-term memory functions. The combination of these two memory functions allows for intensifying and diversifying the search. Intensification means more thoroughly searching neighborhoods that are historically found good. On the other hand, diversification occurs when the search is 22 continued in the areas which the search is never performed or performed less than the other areas. Consider a mountain chain with several peaks. Suppose a mountain climber is asked to find the highest pick of this chain by climbing up the picks and measure their height and find the best. He can move during days and stay nights in a rest area. There are stations in the path that the climber can stay nights in these stages. The climber starts his exploration from one of these stations. Each station has some neighbor stations that the climber can go there in one day. In each station, the information of all neighbor stations such as the height of the station and the direction of them is provided. Every night based on these information the climber decides his next day move. He is told he can stop his search, if he can find a predefined number of picks or if he cannot find a neighbor station with higher height of his current rest area for a few days. He is also told he can visit each station at most once in his way. Thus the climber starts with a rest area and performs his measurements to satisfy his sponsors. The act of tabu search to find the optimal or near optimal solution of a problem is the same as the mountain climber's job. Tabu search performs the search like hill climbing. It starts with a feasible solution (point) and moves to the best (highest) neighbor. It finds the nearest and the highest top and then comes down to find another top. It stops if it can find a few tops or if it cannot find a better solution in a few iterations. The hill climbing like tabu search algorithm progresses the search at each step to a better (higher evaluation) move. When a peak is found, it may be the local optimum and not the global one. Tabu search has the capacity of not getting caught in the trap of local optima by moving the search to the new regions until a (near) global optimum is reached. The search stops if one of the stopping criteria such as the number of iterations without improvement or the number of local optimal points found is satisfied. During this process the search may find the optimal solution as one of the visited peaks, but the search cannot identify if a peak is global optimal. 23 The first step for tabu search is having an initial solution. The initial solution can be chosen arbitrarily. It can be a feasible or even an infeasible point. It can be generated randomly or by a systematic procedure. Usually an initial solution with better quality can increase the efficiency of the search and the quality of the results. When an initial solution is chosen, its neighborhood solutions can be explored by perturbing it. The value of each of these neighborhood solutions is determined by the objective functions which in this research problem are minimization of the sum of the completion times or minimization of makespan. These neighborhoods have to be compared with Tabu_List filter whose goal is to prevent the cycle trap of local optima. This filter is implemented through comparison of neighborhood solutions against a set of restricted moves listed in tabu-list (TL). This list is constructed based on the recent change in previous best solutions. The tabu-list records these changes or moves in the order they are applied. The size of the tabu-list is determined through experimentation. After a set of neighborhood solutions is generated, the best local move among them is compared against the tabu-list. If the move is restricted, it is normally ignored and the second best move is considered. There are cases, however, that a restricted move may have a better value than the best global value found so far, the aspiration level. In this case, the tabu restriction is ignored. The best move after filtering against tabu-list and aspiration criterion is compared with the current members of candidate list. If the chosen neighborhood does not belong to the current candidate list, it is selected for next perturbation and generation of new neighborhood. Otherwise, the next best neighborhood is chosen. This move is recorded into the tabu-list (TL). This process is repeated until the search is terminated by satisfying one of the stopping criteria. Short term memory is the core of tabu search process. Long-term memory components can enhance the quality of the solution of the short term memory. Long term memory search can focus further on searching the areas that were historically promising (intensification); or perform the search in neighborhoods that were rarely visited before (diversification). The information on all the previous moves in short term memory is 24 considered for this investigation. After one complete set of search is performed, with the aid of long term memory, a new complete search is restarted. The number of restarts is arbitrary and depends on the required precision of the solution. Applying more restarts may provide better solution, but it prolongs the time required. 5.3 Initial Solution The tabu search needs a feasible solution to start. The quality of the results as well as the efficiency of the search can be significantly improved if a good initial solution generator is applied. For instance, Logendran and Subur (2004) during solving an unrelated parallel machine scheduling problem reported that by choosing different initial solution generators the efficiency and the effectiveness of the search algorithm can be significantly changed. In this research two different initial solution generating mechanisms are developed for each criterion. These mechanisms for each criterion are explained as follows: 5.3.1 Initial Solution Techniques for Minimization of Makespan Criterion The initial solution generators applied for minimization of makespan criterion are as follows: 5.3.1.1 Rank Order The simplest way of defining an initial solution for proposed research problems is considering the rank order of groups as well as jobs that belong to each group as a feasible solution. This sequence is applied as the first initial solution mechanism for minimization of makespan. 25 5.3.1.2 Applying the Result of Schaller et al.'s (2000) Algorithm as an Initial Solution Schaller et al. (2000) suggested a few heuristic algorithms to solve SDGS problems by considering minimization of makespan. Their article is discussed in detailed in literature review. They suggested two different heuristic algorithms to find the sequence ofjobs in a group and six different algorithms to find the sequence of groups. They applied these mechanisms to find the group sequences and job sequences independently. An experimental design is performed by the authors to find the best heuristic algorithm. Based on their experiment, a three step algorithm provides a better solution for the problem than the other algorithms. In this algorithm, at step one an algorithm based on Campbell et al. (1970) procedure, which is known as CDS algorithm, is applied to find the sequence of jobs in a group. During the second step, a modified Nawaz et al. (1983) procedure, which is known as NEH procedure, is applied to find the sequence of groups. Finally, at step three the neighborhoods of the generated sequence by the first two steps are investigated and the best of them is chosen as the final solution. They suggested that this solution can be applied as an initial seed for a meta heuristic algorithm such as tabu search to improve the quality of solutions. Based on their suggestion, the result of their algorithm is considered as one of the initial solutions for minimization of makespan criterion. Thus, the proposed algorithm is applied to generate an initial solution for the heuristic algorithm as the second initial solution provider. Because the tabu search algorithm has the ability of finding the neighborhoods of a sequence, the third step of the proposed algorithm (finding the neighborhoods of the generated sequence by the first two steps) is ignored to generate an initial solution. The steps of generating the initial solution based on Schaller et al. (2000) are as follows: 5.3.1.2.1 Step 1. Applying CDS (Campbell-Dudek-Smith, 1970) Based Procedure to Find the Best Job Sequence for Groups A procedure based on Campbell et al. (1970) algorithm is suggested to find the sequence of jobs of a group. This procedure is applied to find the sequence of processing jobs for each group independently. To find the sequence of jobs in a group for a m machine problem, the algorithm generates rn-i auxiliary two-machine flow shop schedules in the following manner. Let p1,2,...,a tJk: Run time ofjobj in group p on machine k j = 1,2,.. .,ba k=1,2,...,m In the kthl auxiliary problem (k = 2, 3, ..., rn) the run time of each job on each auxiliary machine can be defined as follows: Oi = tpjk The processing time for thejtjob on machine 1 (Mi) = tpjk The processing time for thej" job on machine 2 (M2) i=m+1-k is the summation of the run time of job] on machines 1 through k, and is the summation of the run time of job j on the last k machines in the technological order. Then the Johnson's (1954) two-machine algorithm is applied to find the sequence of processing of jobs in the group. Based on this procedure, for each auxiliary problem, a job sequence is generated. The completion time of each sequence is then calculated and the best of them is chosen as the best job sequence of each group. 5.3.1.2.2 Step 2. Applying NEH Based Procedure to Find the Best Group Sequence Nawaz et al. (1983) proposed a heuristic algorithm for a multi stage, job scheduling problem to minimize makespan. The proposed algorithm is known as NEll in scheduling literature. Schaller et al. (2000) applied this algorithm with small modification to find the sequence of groups. To apply NEH algorithm, the following parameters should be calculated for each group: Parameter 1: the average set-up time of each group on each machine: In this procedure, the average set-up time for each group on each machine is calculated as follows: 27 -k Set = a Parameter 2: The effective run time of each group on each machine: This can be calculated as follows: Ek = + tpjk j=1 The steps of the modified NEll algorithm are as follows: Step 1: Find the effective run time of each group on all machines. T = Epk Step 2: Arrange groups in descending order of T. Step 3: Pick the two groups from the first and second position of the list of step 2, and find the best sequence for those two groups by calculating the makespan for the two possible sequences. Do not change the relative positions of these two groups with respect to each other in the remaining steps of the algorithm. Set i = 3. jh Step 4: Pick the group in the position of the list generated in step 2 and find the best sequence by placing it at all possible I positions in the partial sequence found in the previous step, without changing the relative positions to each other of the already assigned groups. The number of enumerations at this step is equal to i. Step 5: If n = i, stop, otherwise set i = i +1 and go to step 4. 5.3.2 Initial Solution Techniques for Minimization of the Sum of the Completion Times Criterion The initial solution generators applied for minimization of the sum of the completion times criterion are as follows: 5.3.2.1 Rank Order The first initial solution technique considered for minimization of the sum of the completion time criterion is the same as the one applied for minimization of makespan. A1 5.3.2.2 Relaxing the Problem to a Single Machine, SIGS Problem Ham et al. (1985), proposed a procedure to minimize the sum of the completion times of single machine sequence independent job scheduling problem. This procedure is applied to find an initial solution for the proposed heuristic algorithm. The problem is relaxed to 'm' independent single machine job scheduling problems. Then each problem is solved independently and the best of them is considered as the initial solution for the heuristic algorithm. In this procedure, for each independent problem, the jobs that belong to each group are ordered based on their run time. In other words, a job with shorter run time is processed before a job with longer run time. The sequence of groups can be calculated by applying the following steps: Step 1: Calculate the required minimum set-up time for each group on each machine MinSpk=min{Sq,k} i1,2,...,a Step 2: Find the order of groups based on the following inequalities: Mm Sik + Tlk Mm S2k + T2k Mm Sak + Tak < bi b2 ba This procedure is performed for each machine. After finding m different sequence of jobs and groups, the best of them by considering minimization of the sum of the completion times is considered as the initial solution for the heuristic algorithm. 5.4 Generation of Neighborhood Solutions When a feasible solution is considered as a seed, the neighborhoods of the seed should be generated to explore the search. The process of finding the neighborhoods during inside and outside search are discussed separately. 29 During the inside search, a neighborhood of a seed can be generated by applying swap moves, i.e. changing the orders of two sequenced jobs that belong to a group. By changing the position of the last and the first job of a group, another neighborhood can be generated. The number of inside neighborhoods of a seed can be calculated as follows: If there are g groups and each group has njobs, then the number of neighborhoods is g equal to flj, if there is at least 3 jobs in every group. If there are two jobs in a group, i=1 then there is only one neighborhood for that group. The outside neighborhoods can be derived similar to the inside neighborhoods by applying swap moves. The number of outside neighborhoods is equal to the number of groups if there are at least three groups in the cell. For a problem including two groups, there is only one neighborhood. 5.5 Steps of Tabu Search The concentration of this research is to develop a heuristic algorithm to solve optimally or near optimally the SDGS problems by considering minimization of makespan and minimization of the sum of the completion times. The steps of the proposed heuristic algorithm (tabu search) are as follows: 5.5.1 Step 1: Initial Solution As discussed before, it is required that the search starts with an initial solution. The initial solution can be generated by one of the methods discussed before for both inside and outside levels. This initial solution is considered as a seed for both outside and inside search. 30 5.5.2 Step 2: Evaluate the Objective Function Value of the Seed When the first seed of the search is determined, the value of the sequence is calculated based on the investigated objective function. 5.5.3 Step 3: Inside Search When the seed of the outside and inside search is determined, the first step is finding the best job sequences based on the current group sequence (outside seed). Thus the inside search is performed to find the best job sequence. The steps of inside search are as follows: 5.5.3.1 Step 3.1: Find Inside Neighborhood Solutions The neighborhoods of the initial solution are generated by applying swap moves. 5.5.3.2 Step 3.2: Evaluate the Inside neighborhoods Evaluating the generated neighborhoods based on the objective function value (minimization of makespan or minimization of the sum of the completion times). In this step the neighborhoods are checked against the inside tabu-list and disqualified neighborhoods are excluded. This filtering for a neighborhood can be ignored if its value is better (lower) than the inside aspiration level. Aspiration level is the best value found among all the neighborhoods of the current inside search. The neighborhoods which exist in the inside candidate list are also excluded. Then among the available neighborhoods, the best of them is chosen for the next seed and added to the candidate list. The following parameters are updated: 31 Tabu-list: When a new seed is chosen, the tabu-list should be updated. Tabu-list keeps track of recent moves. It prevents the search to return to the search area which is explored recently. Because the fixed tabu-list is applied in this research, if the list is filled, the oldest member of the tabu-list is replaced with the new member of the tabu list. The size of Tabu-list is found based on experiments and it is adjusted based on the size of the problem. Candidate List: All the feasible solutions considered as a seed are saved in inside candidate list. The first member of this list is the initial solution. At every iteration the chosen neighborhood for the next seed is added to the candidate list. Inside Aspiration Level: The inside search aspiration level is equal to the value of the best solution found during current inside search. If the value of new member of candidate list is better (lower) than the current value of aspiration level, then the value of aspiration level is updated. Index list: Index list is a subset of candidate list. It includes the local optimal points visited by the search. If the objective function value of a member of the candidate list is less than the value of the objective function of immediately before and next members, the point is a local optimal point. In such cases, the point is added to the index list. The maximum number of entries to the index list is considered as one of the inside search stopping criteria. It is clear that at each iteration, the current seed can be added to the index list by considering the objective function value of the next seed, thus at each iteration, the previous seed is tested to assess if it qualifies to be added to the index list. 32 Number of Iterations without improvement: One of the criteria of stopping the search is the maximum number of iterations without improvement. When a feasible solution is added to the candidate list, if its value is not less than the value of the previous member of the candidate list, the value of iterations without improvement is increased by one; otherwise this counter resets to zero. If the value of this parameter reaches to the value of inside maximum iterations without improvement, the inside search stops. Long-term Memory. Long-term memory is used to diversify or intensify the search. During diversification, the search explores areas which are not explored or explored less before. On the other hand, intensification leads the search to explore more around the areas which are explored more than other areas before. A three dimensional matrix is used to gather the required information for long term memory inside search. The first dimension points to each group. The second one is about each available job slot, and the third one is for the job number. The value of each member of this matrix reveals the number of times that a job belongs to a group is assigned to a particular job-slot. For instance, if Jnside_LongTerm[1][2][4] =7, it means that the forth job of the first group is assigned seven times to job-slot 2. When the short-term memory search is over, the maximum of this matrix reveals that which job is assigned to which job-slot more than the others. On the other hand, the minimum of this matrix reveals that which job is assigned to which job-slot less than the others. Based on this information, the long term search is performed by fixing the job in the job-slot with the maximum (minimum) number of frequency and the search is performed again. 5.5.3.3 Step 3.3: Stopping Criteria The above process is repeated until one of the stopping criteria below is met: The maximum number of iterations without improvement is reached. 33 The maximum number of entries to the index list is reached. At every restart of inside search the Tabu-list, the candidate list, the Index-list, and the number of iterations without improvement are reset to zero. The two applications of long term memory are as follows: LTM-Max: As explained, the frequency three-dimensional matrix indicates how many times a job that belongs to a group has been assigned to a specific job-slot. Thus, the maximum number in this matrix indicates the maximum number of times a job is assigned to a slot. The LTM-Max method takes advantage of this and intensifies the search around this area. In this section the inside search is performed again by considering a new seed. The seed is the same as the initial solution of inside search with a few changes. The job with the maximum assignment to a job slot is assigned to that job slot and remains in the same place during the search. If the fixed job is assigned to the same job slot which was assigned in the initial solution, then the next maximum should be chosen. LTM-Min: Instead of LTM-Max, this method chooses the minimum value of the frequency matrix. In other words, this algorithm diversifies the search and performs the search in areas which were never explored or explored less during the search. Other steps of LTM-Min are the same as LTM-Max. When the inside search is completed, the best sequence ofjobs is considered as the best solution of the current group sequences. Then the search is switched to the outside search. 34 5.5.4 Step 4: Outside Search When the best job sequence of the outside seed is found by the inside search, the search is switched to the outside search. The steps of the outside search are as follows: 5.5.4.1 Step 4.1: Find Outside Neighborhood Solutions The neighborhoods of the initial solution are generated by applying swap moves. 5.5.4.2 Step 4.2: Evaluate the Objective function Value of Outside Neighborhoods In this step, for each neighborhood, the inside search is performed to find the best job sequence. The neighborhoods are checked against the outside tabu-list and disqualified neighborhoods are excluded. This filtering for a neighborhood can be ignored if its value is lower than the search aspiration level. Aspiration level is the best found feasible solution by the search. The neighborhoods which exist in the outside candidate list are also excluded. Then among the available neighborhoods the best of them is chosen for the next seed and added to the candidate list. The following parameters are updated respectively: Outside Tabu-list: When a new seed is chosen, the tabu-list should be updated. Tabu-list keeps track of recent moves. It prevents the search to return to the search area which was explored recently. Because the fixed tabu-list is applied in this research, if the list is filled, the oldest member of the tabu-list is replaced with the new member of the tabu list. The size of the Outside Tabu-list is found based on experiments and it is adjusted based on the size of the problem. 35 Candidate List: All of the feasible solutions considered as a seed are saved in the candidate list. The first member of this list is the initial solution. At each iteration the chosen neighborhood is added to the candidate list. Aspiration Level: The search aspiration level is equal to the value of the best solution found during the search. If the value of a new member of candidate list is better (lower) than the current value of aspiration level, then the value of aspiration level is updated. Index list: Index list is a subset of candidate list. It includes the local optimal points visited by the search. If the objective function value of a member of a candidate list is less than the value of the objective function of immediately before and next members, the point is a local optimal point. In such cases, the point is added to the outside index list. The maximum number of entries to the index list is considered as one of the outside search stopping criteria. It is clear that at each iteration, the current seed can be added to the index list by considering the objective function value of the next seed. Thus at each iteration, the previous seed is tested to assess if it qualifies to be added to the outside index list. Number of iterations without improvement: One of the critenons of stopping the search is the maximum number of iterations without improvement. When a feasible solution is added to the candidate list, if its value is not less than the value of the previous member of the candidate list, the value of iterations without improvement is increased by one; otherwise this counter is reset to 36 zero. If the value of this parameter reaches to the value of maximum iterations without improvement, the search stops. 5.5.4.3 Step 4.3: Stopping Criteria The above process is repeated until one of the stopping criteria below is met. The maximum number of iterations without improvement is reached. The maximum number of entries to the index list is reached. At every restart of the search the tabulist, the candidate list, the indexlist, and the number of iterations without improvement are reset to zero. Long-term Memory: A two dimensional matrix is used to gather the required information for long term memory outside search. The first dimension points to each slot. The second one is for the group number. The value of each member of this matrix reveals the number of times that a group is assigned to a particular slot. For instance if LongTerm[2][4] 7, it means that the fourth group is assigned seven times to slot 2. The two applications of long term memory are as follows: LTM-Max: As explained the frequency two-dimensional matrix indicates how many times a group has been assigned to a specific slot. Thus, the maximum of this matrix indicates the maximum number of times a group is assigned to a slot. The LTM-Max method takes advantage of this and intensifies the search around this area. In this section the outside search is performed again by considering a new seed. The seed is the same as the initial solution of outside search with a few changes. The group with the maximum assignment to a slot is assigned to that slot and remains in the same place during the 37 search. If the fixed group is assigned to the same slot which was assigned in the initial solution, the next maximum should be chosen. LTM-Min. Instead of LTM-Max, this method chooses the minimum value of the frequency matrix. In other words, this algorithm diversifies the search and performs the search in areas which were never explored or explored less during the search. Other steps of LTM-Min are the same as LTM-Max. When the outside search is completed, the solution with the best objective function value is reported as the result of the search. The steps of performing tabu search for outside and inside search are depicted in the flow charts below which are shown in figure 5.1 and figure 5.2. Start with an outside initial solution Perform the inside search to find the best job sequence and get the value of objective function Admit the Solution to OCL and OIL Initialize OTL, 01W, OAL. OLTM I Apply outside swap moves I Perform inside search to get the Objective function value for each neighbour the move' Yes No I Is OAL Disregard the tabu? satisfied? move Yes No Identify the best solution Yes No Use OLTM to oes the new Solution Identify new of restarts LRestarts NNreathethedV OTL: Outside Tabu List OAL: Outside Aspiration Level OCL: Outside Candidate List Apply the move that corresponds OIL: Outside Index List to the best Solution Terminate the search I OLTM: Outside Long Term memory OTLS: Outside Tabu List Size OIWI: Number of Outside Iterations without Improvement Update OTL, OAL, OCL,OIL, Return the best Solution ONWI: Number of entries to the Outside Index OlWI. OLTM frorn OIL List MOIWI: Maximum Number of Outside Iterations without Improvement MOILS: Maximum Outside Index List Size No stopping criteria met End (OIWI>M01\M) or>_--- (ONW> MOILS) Figure 5.1 Flow chart for outside search 39 Start With an inside initial Solution Initialize ITL. IIW1, IAL, ILTM Apply outside swap moves Yes Disregard the move Identify the best solution the outside maximum number ITL: Inside Tabu List IAL: Inside Aspiration Level ICL: Inside Candidate List Apply the move that corresponds IlL: Index List to the best solution Terminate the search ILTM: Inside Long Term memory ITLS: Inside Tabu List Size flWI: Number of Inside Iterations without Improvement Update ITL, IAL, ICL,IIL, 11W, Retum the best solution INWI: Number of Entries to the Inside Index ILTM from IlL List MIIWI: Maximum Number of Inside Iterations without Improvement MIlLS: Maximum Inside Index List Size No s stopping criteria met Yes End LS) Figure 5.2 Flow chart for inside search 5.6 Two-Machine SDGS Problem with Minimization of Makespan Criterion For the two machine SDGS problems with minimization of makespan criterion, Logendran et al. (2006) showed that the optimal sequence of jobs in each group conforms to Johnson's algorithm (1954). Thus, the heuristic search algorithm for these problems can be relaxed to a one level search in order to find the best sequence of processing groups. During the search for each group sequence, the sequence of processing jobs belonging to each group are calculated according to Johnson's (1954) algorithm. 5.7 Applied Parameters for Proposed Research Problems As mentioned before, the size of the problems are investigated during this research include 2 to 16 groups in a cell and 2 to 10 jobs in a group. The empirical formulae or the value of parameters used for these research problems are presented in tables below. To generate these formulae or parameter values, several test problems, different from the ones applied for the main experiments, are generated. Then these test problems are solved by heuristic algorithms by applying different values for each parameter to find the best value for each parameter. These formulae are generated based on experiments. In some cases a formula for a range can be generated and in some of them a value for a parameter in a range is offered. 5.7.1Empirical Formulae for Two-Machine Problems by Considering Minimization of Makespan Criterion These problems, as discussed in Section 5.6, require a one level search. Thus, it is only necessary to find empirical formulae for outside search parameters. These formulae, presented in Table 5.1, are constructed based on the number of groups of the problem. Table 5.1 The outside search parameters for two machine problems with makespan criterion Index list Iterations without Tabu list size improvement Number of Number of Number of T groups (G) Parameter Parameter value/formula groups (G) Formula groups (G) From To value/formula From To I From To_I 2 3 2 2 G*l.25 I 2 10 (G14)+1 I 4 6 G*3 10 16 G*2 11 15 (G14)+2 7 10 G*10 16 16 5 11 16 Q*5Ø 5.7.2 Empirical Formulae for Three-Machine and Six-Machine Problems by Considering Minimization of Makespan Criterion The empirical formulae for these problems are presented in Table 5.2 and Table 5.3. The formulae for outside search parameters are constructed based on the number of 41 groups, and for the inside search parameters are constructed based on the number of total jobs in groups. In some cases, rather than offering a formula, a value for the parameter in a specific range is offered. Table 5.2 The outside search parameters for three machine and six machine problems with makespan criterion Index list Iterations without . Tabu list size improvement Number of Number of Number of Parameter Parameter groups (G) groups (G) Formula value/formula groups (G) value/formula From To 2 3 2 From 2 1. __________ 2 From 1 To 2 I I 12 (G/5)+1 4 4 G 3 5 (G/2)+1 13 15 (G14)+1 5 6 G*2 6 12 G 16 16 (G14) 7 9 G10 13 16 12 10 16 G*50 Table 5.3 The inside search parameters for three machine and six machine problems with makespan criterion Index list Iterations without . Tabu list size improvement Number of Number of Number of jobs(J) Parameter Parameter Parameter value jobs(J) jobs(J) From To value value From To From To I 2 30 2 2 29 1 2 164 1 31 80 3 30 39 2 65 120 2 81 120 4 40 49 3 50 59 4 60 79 5 80 99 6 100 120 7 5.7.3 Empirical Formulae for Two, Three and Six Machine Problems by Considering Minimization of Sum of the Completion Times Criterion The empirical formulae for these problems are presented in Table 5.4 and Table 5.5. The formulae for outside search parameters are constructed based on the number of groups, and for the inside search parameters are constructed based on the number of total jobs in groups. 42 Table 5.4 The outside search parameters for two, three, and six machine problems with minimization of sum of the completion times criterion Index list Iterations without Tabu list size improvement Number Number Number of groups Parameter of groups Parameter of groups Parameter (G) value/formula (G) value/formula (G) value From [To From To From To 2 5 7 4 6 7 G*2 2 G 10 2 4 6 3 5 6 G12 2 3 67 2 8 9 5 1 2 3 8 12 G*20 7 7 6 10 14 4 13 16 250 8 14 8 15 16 6 15 16 11 Table 5.5 The inside search parameters for two, three, and six machine problems with minimization of sum of the completion times criterion Index list Iterations without Tabu list size improvement Number of Number of T Number of Parameter Parameter Parameter jobs(J) jobs(J) jobs(J) value value value I From To From 1=12=1 From To 2 20 2 2 30 2 2 20 1 21 30 3 31 40 3 21 30 2 31 40 5 41 50 5 31 39 3 41 50 6 51 120 8 40 50 4 51 90 7 51 60 5 91 120 8 61 75 6 76 85 8 86 95 9 96 100 10 101 120 13 5.8 Application of Tabu Search to an Example Problem by Considering Minimization of Makespan Criterion The application of the steps of tabu search, which are listed in Section 5.5, is demonstrated to solve the example presented in chapter four. In this example, minimization of makespan criterion is considered. The tabu search parameters applied for this example are as follows: 43 Inside Tabu-list size: 1 Inside maximum number of entries to the Index-list: 3 Inside maximum iterations without improvement: 2 Outside Tabu-list size: 1 Outside maximum number of entries to the Index-list: 2 Outside maximum iterations without improvement: 1 5.8.1 Step 1: Initial Solution The search starts with an initial solution. The initial solution for this problem, based on the rank order initial solution generator, is as follows: G1 (Jn-J12-J13) G2(J21-J22) G3(J31-J32-J33) 5.8.2 Step 2: Evaluate the Objective Function Value of the Initial Solution The objective function value of the initial solution (makespan) is equal to 38 (Figure 5.3). This solution is considered as the current inside aspiration level as well. S011 J1 .112 J13 S121 J21 J22 S231 J31 J32 J33 M2 S012 fit ft2 ft3 S122 J21 22 232 '32 '33 Figure 5.3 The Gantt chart of the initial solution 5.8.3 Step 3: Perform Inside Search The inside search is performed to find the best job sequences for the outside initial seed (G1- G2- G3). 5.8.3.1 Step 3.1: Evaluate Inside Neighborhoods The neighborhoods of the inside initial seed, and their objective function value, are found. Each seed of this example has seven neighborhoods, shown with their objective function values in Table 5.6. The difference between each neighborhood and the initial seed is shown in bold. Table 5.6 The neighborhoods of the inside initial solution Group seed Objective Neighborhood function G1 G2 G3 value 0(Initial) Jll fT2 JI3 .121 .122 J31 J32 .133 38 1 J12 J11 J13 J21 J22 J31 J32 J33 38 2 Jii J13 J12 J21 J22 J31 J32 J33 38 3 J13 .112 Jll J2l J22 J31 J32 J33 38 4 J11 JI2 J13 J22 J21 J31 J32 J33 38 5* J11 Jl2 Jl3 J22 37 J21 J32 J31 .133 6 Jii Ji2 Jl3 J21 J22 J31 J33 J32 40 7 J11 Jl2 Jl3 J21 J22 J33 J32 J31 37 Based on the objective function value of the neighborhoods, the neighborhoods 5 and 7 have the lowest values and can be considered as the next seed. In this example, as neighborhood 5 is the first best entry into the list, it is chosen as the next seed. By choosing the next seed, the following parameters are updated: Tabu-list: The new seed is generated by changing the process sequence of the first two jobs of the third group. These moves are saved in the tabu list. In other words, in the next iteration (the tabu list size is equal to one) J32 cannot be processed as the second job of the third group, and J31 cannot be processed as the first job of the third group. 45 Candidate-list: The new seed is added to the Candidate-list as the second member of this list. The first member of the Candidate-list is always the initial solution. Aspiration level: Because the value of the objective function of the new seed is better than the aspiration level (37 compared to 38), the inside aspiration level is also updated. Index-list: The initial solution is the first member of the index list. Because the value of the new seed is lower than the previous seed, it has the potential to be added to the index list. Thus, if the next seed has an objective function value more than 37, this seed (the second one) is added to the index list as the second local optimal solution. Thus, in this stage, the Index list just includes the initial solution. Long-term memory frequency matrix The frequency matrix which is used for long-term memory is updated. 5.8.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside Search At the end of each iteration, the stopping criteria are evaluated. If one of the criteria is satisfied, the inside search is stopped, otherwise the next iteration is started. In this stage of this example, the number of iterations without improvement is equal to zero and the number of entries to the index list is equal to one. Because none of the stopping criteria are satisfied, the next iteration is started with the new seed similar to the previous iteration. 5.8.3.3 Repeat the Cycle Inside iterations are performed until one of the inside search stopping criteria is satisfied. The inside search will be terminated by one of the stopping criteria. In this example, the inside search for the current inside seed is terminated after two iterations. The best found job sequence is the one with makespan equal to 37 (the second member of Candidate-list). After terminating the inside search, the search is switched to the outside search. 5.8.4 Step 4: Perform Outside Search The outside search is performed to find the best group sequence. The steps of outside- search are as follows: 5.8.4.1 Step 4.1: Evaluate Outside Neighborhoods The seed of the outside search is used to find the outside neighborhoods and their objective function values. The neighborhoods of the outside initial solution are as follows. The difference between each neighborhood and the seed is shown in bold. G2- G1- G3 G1- G3- G2 G3- G2- G1 The inside search is performed for each neighborhood to get the best job sequence and find the objective function value of the neighborhood. Table 5.7 shows the objective function value of these neighborhoods after performing the inside search. 47 Table 5.7 The neighborhoods of the outside initial solution Neighborhood Sequence Value G G2 G3 0(Jnitial) 37 .Jli G2 ft2 I JI3 L J21 G1 J22 .132 .131 G3 J33 1 I 38 J2l .1221.112 Jll 'l3 .131 '32 33 2 G1 I G 34 Jit I .112 Jl3 J3l J32 "33 "2l J22 G2 [ G1 3 r 40 .131 J32 33 '21 22 1'12 'p11 '13 Based on the objective function values of the neighborhoods, the second neighborhood has the best objective function value, and is considered as the next seed. By choosing the next seed, the following parameters are updated: Tabu-list: As replacement of the last two groups generates the new seed, these moves are saved in the tabu list. In other words, in the next iteration (the tabu list size is equal to one) G2 cannot be processed as the second group and G3 cannot be processed as the third group. Candidate-list: The new seed is added to the outside Candidate-list as the second member of this list. The first member of the outside Candidate-list is always the outside initial solution. Aspiration level: Because the value of the objective function of the new seed is better than the aspiration level (34 compared to 37), the value of the aspiration level is also updated. Index-list: The initial solution is the first member of the index list. Because the value of the new seed is lower than the previous seed, it has the potential to be added to the index list. Thus, if the next seed has an objective function value more than 34, this seed (the second one) is added to the index list as the second local optimal solution. In this stage, the index list just includes the initial solution. Long-term memory frequency matrix The frequency matrix which is used for outside search long-term memory is updated. 5.8.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside Search At the end of each outside iteration, the stopping criteria for outside search are evaluated. If one of the criteria is satisfied, the search for the current restart is stopped; otherwise, the next iteration is begun. In this stage of this example, the number of iterations without improvement is equal to zero and the number of entries to the index list is equal to one. None of the stopping criteria are satisfied. Thus, the next iteration is started with the new seed similar to the previous iteration. 5.8.4.3: Repeat the Cycle These iterations are performed until one of the outside search stopping criteria is satisfied. The outside search will be terminated by one of the stopping criteria. In this example, the best found group sequence is the one with makespan equal to 34 (the second member of Candidate-list). After terminating the outside search, the best sequence of groups, as well as jobs, are reported as the best schedule. The Gantt chart of the result of the tabu search for this problem is shown in Figure 5.4. S011 hi J12 J13 S131 J31 J32 J33 S321 J21 J22 I I 1W2 S012 Jii Ji2 .113 I S132 .131 J32 J33 S322 J21 .122 I Figure 5.4 The Gantt chart of the tabu search sequence 5.9 Application of Tabu Search to an Example Problem by Considering Minimization of Sum of the Completion Times Criterion The application of the steps of tabu search, which are listed in Section 5.5, is demonstrated to solve the example presented in chapter four by considering minimization of sum of the completion times criterion. The tabu search parameters applied for this example are as follows: Inside Tabu-list size: 1 Inside maximum number of entries to the Index-list: 2 Inside maximum iterations without improvement: 1 Outside Tabu-list size: 1 Outside maximum number of entries to the Index-list: 2 Outside maximum iterations without improvement: 2 5.9.1 Step 1: Initial Solution The search starts with an initial solution. The initial solution for this problem, based on the rank order initial solution generator, is as follows: G1 (J11- J12- J) G2(J21- J22) G3 (J31- J32- J33) 5.9.2 Step 2: Evaluate the Objective Function Value of the Initial Solution The objective function value of the initial solution (sum of the completion times criterion) is equal to 186 (Figure 5.5). This solution is considered as the current inside aspiration level as well. M1 soil Jl! .112 J13 S121 J21 J22 S231 J31 .132 J33 M2 S012 J11 J12 .113 S122 21 22 S232 32 Figure 5.5 The Gantt chart of the initial solution 5.9.3 Step 3: Perform Inside Search The inside search is performed to find the best job sequences for the outside initial seed (G1- G2- G3). 5.9.3.1 Step 3.1: Evaluate Inside Neighborhoods The neighborhoods of the inside initial seed, and their objective function value, are found. Each seed of this example has seven neighborhoods, shown with their objective function values in Table 5.8. The difference between each neighborhood and the initial seed is shown in bold. Table 5.8 The neighborhoods of the inside initial solution Group Seed Objective Neighborhood Function G2 G3 Value O(Initial) J .112 ... f1.3....: J21 J22 J3 J32 .133 186 1 J12 J11 J13 J21 .122 J31 J32 .J3 182 2 J11 .113 .112 J21 J22 J31 J32 J3 182 3 J13 J12 J11 J21 J22 J31 J32 J33 179 4 Jll Jl2 J13 J22 J21 J31 J32 J33 184 5* J11 Jl2 J22 J32 J33 187 J13 J21 J31 6 .111 J12 .113 J21 J22 J31 J33 J32 186 7 JlI Jl2 J13 J21 J22 J33 J32 J31 184 Based on the objective function value of the neighborhoods, the third neighborhood has the lowest objective function value and can be considered as the next seed. By choosing the next seed, the following parameters are updated: 51 Tabu-list: The new seed is generated by changing the sequence of processing the first and the third jobs of the first group. These moves are saved in the tabu list. In other words, in the next iteration (the inside tabu list size is equal to one) J13 cannot be processed as the third job of the first group, and .111 cannot be processed as the first job of the first group. Candidate-list: The new seed is added to the Candidate-list as the second member of this list. The first member of the Candidate-list is always the initial solution. Aspiration level: Because the value of the objective function of the new seed is better than the aspiration level (179 compared to 186), the inside aspiration level is also updated. Index-list: The initial solution is the first member of the index list. Because the value of the new seed is lower than the previous seed, it has the potential to be added to the index list. Thus, if the next seed has an objective function value more than 179, this seed (the second one) is added to the index list as the second local optimal solution. Thus, in this stage, the Index list just includes the initial solution. Long-term memory frequency matrix The frequency matrix which is used for long-term memory is updated. 52 5.9.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside Search At the end of each inside search iteration, the stopping criteria are evaluated. If one of the criteria is satisfied, the inside search is stopped; otherwise, the next iteration is started. In this stage of this example, the number of iterations without improvement is equal to zero and the number of entries to the index list is equal to one. Because none of the stopping criteria are satisfied, the next iteration is started with the new seed similar to the previous iteration. 5.9.3.3 Repeat the Cycle Inside iterations are performed until one of the inside search stopping criteria is satisfied. The inside search will be terminated by one of the stopping criteria. In this example, the inside search for the current inside seed is terminated after six iterations. The best found job sequence is found at the forth inside iteration with the objective function value of 174 with the sequence of groups and jobs in each group given as below: G1 (J12-J13-J11) - G2(J22-J21) G3(J33-J32-J31) After terminating the inside search, the search is switched to the outside search. 5.9.4 Step 4: Perform Outside Search The outside search is performed to find the best group sequence. The steps of outside- search are as follows: 53 5.9.4.1 Step 4.1: Evaluate Outside Neighborhoods The seed for the outside search is used to find the outside neighborhoods and their objective function values. The neighborhoods of the outside initial solution are as follows. The difference between each neighborhood and the seed is shown in bold. G2- G1- G3 G1- G3- G2 G3- G2- G1 The inside search is performed for each neighborhood to get the best job sequence and find the objective function value of the neighborhood. Table 5.9 shows the objective function value of these neighborhoods after performing the inside search. Table 5.9 The neighborhoods of the outside initial solution Neighborhood Sequence Value G1 O(Irntial) 174 JI2 'i3 Jll J2! J22 J33 J32 J1 G2 I G1 G3 1 I 181 J21 J22 Jl3 Jl2 Jll J31 J32 J1313 G1 I 2 I 165 Jl2 Jl3 Jll J33 J32 J3l J22 J2l G3 G2 G1 3 214 J12 J3l J33 J2l J22 Jl3 Jl2 Jll Based on the objective function values of the neighborhoods, the second neighborhood has the best objective function value, and is considered as the next seed. By choosing the next seed, the following parameters are updated: 54 Tabu-list: As swapping of the last two groups generates the new seed, these moves are saved in the tabu list. In other words, in the next iteration (the tabu list size is equal to one) G2 cannot be processed as the second group and G3 cannot be processed as the third group. Candidate-list: The new seed is added to the outside Candidate-list as the second member of this list. The first member of the outside Candidate-list is always the outside initial solution. Aspiration level: Because the value of the objective function of the new seed is better than the aspiration level (165 compared to 179), the value of the aspiration level is also updated. Index-list: The initial solution is the first member of the index list. Because the value of the new seed is lower than the previous seed, it has the potential to be added to the index list. Thus, if the next seed has an objective function value more than 165, this seed (the second one) is added to the index list as the second local optimal solution. At this stage, the index list just includes the initial solution. Long-term memory frequency matrix The frequency matrix which is used for outside search long-term memory is updated. 55 5.9.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside Search At the end of each outside iteration, the stopping criteria for outside search are evaluated. If one of the criteria is satisfied, the search for the current restart is stopped; otherwise, the next iteration is begun. At this stage of this example, the number of iterations without improvement is equal to zero and the number of entries to the index list is equal to one. None of the stopping criteria are satisfied. Thus, the next iteration is started with the new seed similar to the previous iteration. 5.9.4.3 Repeat the Cycle These iterations are performed until one of the outside search stopping criteria is satisfied. The outside search will be terminated by one of the stopping criteria. In this example, the best found group sequence is the one with sum of the completion times equal to 165 (the second member of Candidate-list). After terminating the outside search, the best sequence of groups, as well as jobs, are reported as the best schedule. The Gantt chart for the result of the tabu search for this problem is shown in Figure 5.6. 1W1 S011 J12 '13 .111 5131 J32 J3l S321 J22 J21 M2 I so12 J12 J13 .111 I S132 J33 J32 J31 S322 J22 J21 Figure 5.6 The Gantt chart of the tabu search sequence 56 CHAPTER 6: LOWER BOUNDS When a heuristic algorithm is used to solve a problem, the quality of solution of the heuristic algorithm should be evaluated. The precision of the algorithm can be evaluated by comparing the result of the algorithm with that of the optimal solution. However, it is impossible to find the optimal solution of many problems in an efficient way. In many cases, it is almost impossible to find the optimal solution of a problem in a reasonable time. For instance, solving some of the proposed research problems will take days to be solved optimally. Because finding the optimal solution becomes increasingly difficult as the problem size increases, the lower bounding techniques are the most useful ones to evaluate the quality of the heuristic algorithm's solutions. A lower bound for a problem is an algorithm which is used to find a solution, at most equal to the optimal solution or close enough to the optimal solution of the problem in a time interval much faster than solving the mathematical model. During the development of lower bound, both precision and the time efficiency of the algorithm should be considered. It is clear that the optimal solution of a problem is between the result of heuristic algorithm and the value of its lower bound. In this research problem a specific lower bounding technique is developed for minimization of makespan. Another lower bounding technique based on Branch-and-Price is also developed which can be applied for both minimization of makespan and minimization of the sum of the completion times criteria. 6.1 Lower Bounding Technique for Minimization of Makespan For minimizing the makespan criterion, a lower bounding technique based on relaxing the problem from SDGS to SDJS (Logendran et al. 2006) is developed. In this technique, every group is considered as an independent job. The run time of these independent jobs (groups) on each machine is considered equal to the summation of run time of its jobs on each machine. Then the problem is treated as a SDJS problem. The solution of this problem is a lower bound for the original problem because the possible idle times between processing jobs that belong to a group on all machines are ignored. 57 Solving the mathematical model of this relaxed problem is much easier than solving the original problem, but the relaxed one still belongs to NP-hard problems (SDJS problems are NP-hard). The parameters, decision variables, and the mathematical model of this lower bounding model are as follows: Parameters: a: Number of groups b: Number ofjobs in group p p= 1,2,. . ,a . m: Number of machines sp1k. The setup time for group ion machine k if groupp is p,l 0,1,2,.. .,a the preceding group (p l) k = 1,2,. . . ,m The summation of run times ofjobs of group p on Tpk. Tk tpjk machine k The minimum run time ofjobs in group p p 1,2,.. .,a Gpk. on machine k k = 1,2,. . . Decision Variables: Ii; Ifgrouppisassignedtosloti i= 0,1,2,...,a WIP 10; Otherwise p0,1,2,...,a = The completion time of jth slot on machine k . . C1k: The setup time for a group assigned to slot i on i = 1,2,.. .,a se ik. machine k k = 1,2,. . . ,m 1; If group p is assigned to slot i and group l is = 0,1,2,. . ,a-1 . = assigned to slot i + 1 p = 0,1,2,. . . ,a 0; Otherwise li,2,..,a p l Model MinimizeZ = Cak (1) Subject to: 1 p = 1,2,.. .a (2) i1,2,...,a (3) 58 a a ASipi+ii = 1 i=0,1,2,...,a-1 (p 1) (4) p=O 1=1 ASl(+I)l Wip i=0,1,2,..,a-1 p,l=0,1,2,...,a (5) ASI(l+l)l W+1i pl (6) Setk = a p=o1=1 a AS_ii Slk i=1,2,...,a k1,2,...m pl (7) a Ci = C(_I)l+ Set jj+ W1Ti i = 1,2,3,.. .,a (8) p=1 a a (9) Cjk C(j-1)(k-1) + Setl(k-1) + W,,(Tp,) + W(Gp(k-1)) p=1 p=l i 1,2,3,...,a a i = 1,2,3,.. .,a Ck Ci(k-1) + Wjp(GPk) (10) k2,3...,m Ck , Setk 0 , AS,(f+l)/ = 0,1 (I <j') The mathematical model is a Mixed Integer Linear Programming (MILP) model. It is assumed that there exist slots for groups and each group should be assigned to one of them. In real world problems, groups will have different number of jobs. Because each group can be assigned to any slot, to simplify creating the mathematical model, it is assumed that every group has the same number of jobs, comprised of real and dummy jobs. This number is equal to bm which is also the maximum number of real jobs in a group. If a group has fewer real jobs than bm, the difference, i.e., bmax number of real jobs, is assumed to be occupied by dummy jobs. The objective function is the minimization of makespan of jobs on the last machine, given by (1). Based on the model, there are 'a' slots and each group should be assigned to one of them. It is clear that each slot should contain just one group and every group should be assigned to only one slot. Constraints (2) and (3) support this fact. The set-up time of a group on a machine is dependent on that group and the group processed immediately preceding it. Constraint (4) is included in the model to support this fact. If group p is assigned to slot i and group 1 is assigned to slot i+1, then ASi,,(+l)l must be equal to one. Likewise, if group p is not assigned to slot i or group 1 is not 59 assigned to slot 1+1, then AS(+J)l must be equal to zero. Constraints (5) and (6) ensure that each is true. Constraint (7) calculates the required set-up time of groups on machines. The required set-up time for a group on a machine is calculated based on the assigned group to the slot and the group assigned to the proceeding slot. The completion time of the group assigned to a slot on the first machine is calculated in constraint (8). The completion time of a group assigned to a slot is equal to the summation of the completion time of the group assigned to the preceding slot, the required set-up time for the group of this slot, and the summation of run time of all jobs in the group. The time that the process of a group can be started on a machine, other than the first machine, depends on both availability of the machine and the group. Constraint (9) is added to the model to check the soonest possible time that a group can be processed on a machine by considering the availability of the group. A group is available to be processed on a machine if at least one of its jobs is processed on the immediately preceding machine. It is clear that the soonest time to start processing a group on a machine is the time that the job with the minimum run time of the group is processed on the preceding machine. This constraint is added to the model to support this fact. The completion time of a group assigned to a slot on a machine except the first machine should be greater than the completion time of the group on the previous machine, plus the required set-up time for the group on the previous machine, the minimum run time of jobs of the group on the previous machine, and the total run time of all jobs on the current machine. It is also clear that in a real schedule of processing groups, the completion time of a group on a machine should be greater than the completion time of the group on the proceeding machine plus the minimum run time of jobs of the group on the current machine. Constraint (10) is added to the model to support this fact. This mathematical model can be applied to find a lower bound for all size of problems by considering minimization of makespan. 6.1.1 Application of the Lower Bounding Technique to a Problem Instance The model is applied to find a lower bound for the proposed problem in chapter four by considering the minimization of makespan criterion. Based on the optimal solution of the model the lower bound of this problem is equal to 34 which is equal to the result of the optimal solution and the heuristic algorithm. 6.2 Lower Bounding Technique for Minimization of Sum of the Completion Times The proposed lower bounding technique for minimization of makespan cannot be applied to find the lower bound for minimization of sum of the completion times criterion. The research showed that the Branch-and-Price technique is a suitable approach to find a good quality lower bound for this criterion. Thus, a lower bounding approach based on branch-and-price technique is developed to find lower bounds with good quality. In Branch-and-Price (B&P) algorithm (Barnhart et al. (1998), Wilhelm (2001), and Wilhelm et al. (2003)), the problem is reformulated with a huge number of variables. Then the problem is decomposed into a master problem and one or more sub-problems. The sets of columns (variables) are left out of the LP relaxation of master problem because there are too many columns to handle efficiently and most of them will have their associated variable equal to zero in an optimal solution anyway. To check the optimality of an LP solution of the master problem, one or more sub-problem, called the pricing problem(s), which are a separation problem for the dual LP, are solved to try to identify columns to enter the basis. If such columns are found, the LP is re- optimized. Branching occurs when no columns price out to enter the basis and the LP solution does not satisfy the integrality condition. B&P allows column generation to be 61 applied during the branch-and-bound tree. Some of the advantages of using B&P to solve the mathematical programming problems with a huge number of variables are: The LP relaxation of such formulation provides tighter lower bound than LP relaxation of MILPs. A compact formulation of a MILP may have a symmetric structure that causes branch-and-bound to perform poorly because the problem barely changes after branching. A reformulation with a huge number of variables can eliminate this symmetry (Barnhart, 1998). This issue is seriously observed in the proposed research problems. The goal is to develop a mathematical model whose LP relaxation provides a good lower bound to the optimal solution of the original problem. Thus, the problem is reformulated with a huge number of variables. These variables are the set of feasible solutions. Then the mathematical model is decomposed into a master problem and one or more sub-problems (SPs). The number of SPs is equal to the number of machines considered in the original problem. At the beginning, the LP relaxation of the master problem is solved by considering a few of the feasible solutions. Because the master problem at this stage does not include all possible solutions, it is called the Restricted Master Problem (RMP). To check the optimality of the LP solution to the RMP, the sub-problems (SPs), called pricing problem(s), are solved to find colunms to enter the basis. If such columns exist, the LP is re-optimized. If there is no colunm to enter and the LP solution does not satisfy the integrality conditions, then branching is applied for the optimal solution of the LP problem. In the mathematical model of the problem, which was demonstrated in chapter four, constraints (2) through (7) deal with finding the set-up times for groups. Constraints (9) through (11) deal with finding the completion time of jobs on each machine. These constraints can be applied for any machine separately. Constraint (12) is the only 62 constraint that deals with the completion time of jobs on more than one machine. Thus, this constraint is considered as the linking (complicating) constraint in the model. The RMP includes constraint (12), which is a relational constraint between the completion times of jobs on machines and a convexity constraint for each sub-problem. The parameters, decision variables, and mathematical model of a RMP for a m machine problem for minimization of sum of the completion times are formulated as follows: Parameters: The number of colunms that exist in RMP related to the k = 1,2,. . . hk kth machine Xiv: The completion time 0ffth job 0fth slot on machine kin i = 1,2,.. h,' existing solution in RMP related to machine k j = 1,2,. . . k=1,2,...,m j 0,1,2,.. . ,a Ii; If group p is assigned to slot un ht existing solution in = RMP related to machine k p = 0,1,2,. . . Wh k ipk k= 1,2,...,m Otherwise p=1,2,...,a tPJk: Run time ofjobj in groupp on machine k j = 1,2,.. Decision Variable: The decision variable of the existing solution of k = 1,2,. . . ,m h machine k in RMP Model: h ab MinZ =2X' h1 i1 j=1 (11) Subject to: i = 1,2,.. .,a 2(Xk pSI WktJk) h11 2'Xkl) 1= 1,2,...,bmax (12) h=1 k = 2,3,.. hk k 1,2,3,...,m (13) hsl 63 = 0,1 In this model, ,% are decision variables of the RMP. If the djk's are the dual variables of constraint (12) and a1, denote the dual variables of constraint (13), the dual problem of the RMP is as follows: m abmax m MaxZ'= k=2 1=1 j=1 di(0)+ M=1aM (14) ST: - a bmax d(Xi) + ai 0 (15) i=1 j=1 a bmax a a bmax i=1 j=1 dl1i(X W1tJk)+ak( i=1 j=1 d(k+1)XYk)O k=2,3,m-1 (16) p=l abmax a abmax dm(Xm Wiptpjm)+am( Xym)O (17) i=1 j=1 p=l i=1 j=1 i = 1,2,.. .,a d1jk 0 a Unrestricted j = 1,2,.. k=2,3,...,m A feasible solution of RMP is optimal, if its dual can satisfy the constraints of the dual problem. Thus, to check the optimality of RMP, the constraints of the dual problem are checked. As it is shown, each constraint of the dual problem deals with the completion time of jobs on a specific machine. For instance, the first constraint which is shown in (15), deals with the completion time of jobs on the first machine. Thus, to check the optimality, sub-problems, in which their objective functions are the left hand side of the dual constraints, are solved. If the value of the objective function is greater than zero, the column is added to RMP. Decomposition leads to an independent sub-problem for each machine. When the LP relaxation of the master problem is solved, it generates new dual variables for sub problems. The sub-problems are: a bmax SP1 MaxW1- d2(Xi)+ai (18) 1=1 j=1 Subject to: Constraint (2) through (11) of the original problem in chapter 4 SP2 a bmax a a bmjix through SPm..i MaxWk '' ' dk(Xk W,tfk) + ak ( '' 1 dj(k+1)Xyjç) (19) k=2,3,...,m-1 Subject to Constraints (2) through (7), (9) through (11), and (13) of the original problem in chapter 4 a bmax a a bmax SPm Max Wm djm(Xum Wip tpjm) + am ( Xym) (20) i=1 j=1 p=l j=1 j=1 Subject to Constraints (2) through (7), (9) through (11), and (13) of the original problem in chapter 4 The decomposed model has problems below: . If this decomposition model is applied to solve problems, the SPs are still NP-hard and cannot be solved in an efficient way by increasing the size of the problem. The experiment shows that by increasing the number of jobs in a group, the sub- problems become more complicated. The coefficient of Xyk'S in the objective function of SPs, except SP1, can be positive. In this case, because of the maximization of the objective function, as well as deficiency of any constraint in SPs to limit the value of X1k'S, it is possible that a sub-problem generates a solution in which there are unnecessary idle times among the process of jobs. In some cases, it may be possible that one of the SPs be unbounded as well. For instance, consider a two machine problem. The coefficient of X12s in SP2 is equal to (d2 - 1). By considering the maximization of the objective function, it is possible that if du2? 1, SP2 generates a solution in which 65 there are unnecessary idle times among processing jobs of a group. Thus, it is required to add an upper bound to SP2. In this case the quality of solution and the efficiency of the algorithm are highly dependent on the value of the chosen upper bound for sub-problems. Based on these facts, if a model can be created in which the sequence of jobs in a group can be identified easily in sub-problems and the requirement of applying an upper bound for SPs are removed, the SPs can be solved more efficiently. Consider a mathematical model with minimizing the objective function. It is clear that if a new decision variable is added to the model, the value of the objective function will not be increased. Because the goal of the decomposition model is to find a lower bound for the original problem, by adding new variables to the RMP and generate a new model, the optimal solution of the new model can still provide a lower bound for the original problem. This can be used to generate a new model with easier sub-problems. These new models are discussed separately for each machine-size problem. 6.2.1 Simplifying the Two-Machine Problem Consider the two-machine problem. The RMP model is as follows: h1: The number of solutions related to M1 in RMP h2: The number of solutions related to M2 in RMP a bmax MinZ= h=1 ifl:i=1 j=1 x:;2 (21) Subject to: a h1 ,h1 Wp2t pj2) hl=1 2i 1 0 ,..., max 2 p=1 hk h =1 k=l,2 (23) = 0,1 k= 1,2 The sub-problems (SP1 and Sp2) are, respectively, as follows: a SP1 MaxWi = i=1 j=1 d,(X1) (24) Subject to: Constraints (2) through (11) of the original problem in chapter 4 a bmax a a bmax SP2 MaxW2 = 1=1 j=1 d(X pI W1 t) ( 1=1 j1 Xy) (25) Subject to Constraints (2) through (7), (9) through (11), and (13) of the original problem in chapter 4 An upper bound for X2's As discussed, the coefficient of X12's in the objective function of SP2 is equal to (d2 -1). By considering the maximization of the objective function, it is possible that if d2? 1, SP2 becomes unbounded. In order to prevent this, and creating easier SPs, an artificial variable is added to each relational constraint of the RMP. The coefficients of these artificial variables in the objective function are equal to one. In this case, a new constraint is added to the dual problem of RMP which guarantees d2 1. The new RMP model and its dual problem are as follows: a bmax a bmax MinZ = h=1 1=1 j=1 x + i=1 j=1 s (26) i = 1,2,.. .,a p=i W t2) 2X+S 0 j =1, 2,.. .,bmax (27) hk k=1,2 (28) h1 = 0,1 The dual problem of RMP: a bmax MaxZ'= d,j2*@)+ai+a2 (29) i=1 j=1 ST: a bma i=1 j=l d02(X)+a1° (30) W1t2) + a2 ( X2) 0 (3 1) pI j=l d1 j I = 1,2,.. .,a 1,2,.. .,bmax (32) 67 '., U y2 0 ak Unrestricted (33) This model will provide a lower bound for the original problem. The experiments show that the new model requires less iterations to find the optimal solution of each node than the first one. But the most important advantage of this model is that the sequence ofjobs that belong to a group can be identified easier. There are rules that can relax the job sequence constraints of sub-problems. These rules for each sub-problem are discussed in the following sections. 6.2.1.1 The Relaxing Rule for SP1 in the Two-Machine Problem The objective function of SP1 is as follows: a bmax SP1 Max Wi = d2(Xi) i=1 1=1 Consider two different sequences of processing jobs for SP1, which are shown in Figure 6.1. The only difference between these two sequences is that the sequences of processing job i and job j, in which both of them belong to a group, are changed. The completion time of the preceding jobs before these jobs are shown by tA in this figure. d1 and d1 are the dual values of these jobs, respectively. Figure 6.1 The Gantt chart of processing two different sequences Assume that the objective function value of S1 is less than S2. The completion time of these jobs in S1 and S2 are shown in Table 6.1. Table 6.1 The completion time ofjobs in S1 and S2 LJob Job i Dual variable d, Completion time in S Completion time in 52 1A + ti tA + tj + ti [ Jobj d1 tA + tj + tj tA + By substituting these values in the objective functions of SP1, because S1 has a smaller objective function value, inequality below holds true. -dj(tA+t)-dJ(tA+tj+t)-dJ(tA+tJ)-dj(tA+tj+t) (34) by simplif'ing the inequality, the result is: fL (35) Based on this fact, at the optimal solution of SP1, if there is no idle time among the process time of jobs, the sequence of jobs that belong to a group should respect inequality (35). Thus, by applying the constraints below to SP1, for any given group sequence, the sequence ofjobs in a group can be calculated by the model. Yzjq E =1,2,...,a (36) "ci -' ' j,q-1,2,...bm j<q tpjt (37) p=I tpql In these constraints, if job q is processed after job j in slot i, then the value of is positive. In this case, by applying constraint (36) in 5P1, Yijq P1 tpjl tpql will be equal to 1. Constraint (37) supports this value as well. On the other hand, if job a d d2 q is processed before jobj in slot i, then the value of P' w(-- _2) is negative. By tpjl tpql applying constraint (36) in SP1, Y,jq will have a value greater than a negative number. In this case, constraint (37) forces Yq to take the value equal to zero. rJ If there are dummy jobs in a group, they are considered to be processed as the last jobs of the group. In order to force this into the model, a binary parameter is defined as follows: JI; if the job of group p is a dummy job p = 1,2,.. .,a u1= = .,bmax { 0; otherwise By revising (36) and (37), and by adding Upq to each of these constraints, the t pq 1 dummy jobs will be guaranteed to be processed as the last jobs of the group because if job q is a dummy job, then Y will be equal to one. The revised constraints are: a IdY. dg2 diq2' ijq p1 tpji tpql + u pg tpql i = 1,2,.. .,a (38) j,q=1,2,...bmax j<q Y '+W yq a Id f2di2 di2] (39) pg pji tpql tpql 6.2.1.2 The Relaxing Rule for SP2 in the Two-Machine Problem The objective function of SP2 is as follows: a bmax a a bmax Max W2 = j=1 j1 dy2(Xu2 w1 t) ( x,2) (40) p=l 1=1 1=1 Consider two different sequences of processing jobs (Si and S2) which are shown in Figure 6.1. Assume that the objective function value of S1 is less than S2 The completion time of the jobs which make the difference between S1 and S2 i.e., job) and job i, in S1 and S2 are shown in Table 6.1. In this case, by substituting the value of completion times in the objective function of SP2, the inequality below holds true: dj(tA+tj-tj)-(tA+tj)+dj(tA+t+tJ-tJ)-(tA+tj+tJ)dJ(tA+tJ-tJ)- (41) (tA + t3) +d(tA + t+ t1 t) (tA + t+ t) by simplifying the above inequality, the result is: d-1<d-1 (42) ti ti 70 Based on this fact, because the objective function of SP2 is maximization, at the optimal solution of SP2, if there is no idle time among the process time of jobs, the sequence of jobs that belong to a group should respect inequality (42). Thus, by applying the constraints below to SP2, for any given group sequence, the sequence of jobs in a group can be calculated by the model. The reason for adding the last part to the constraints below I U (c/ y2 1) is the same as the one explained for SP1. pq tpj2 ) a Idjq2 -1 d,2 -1 d2 YuqWp1 pq I = 1,2,.. .,a (43) p1 tpq2 12 tp2 J j,q = 1,2,... bmax d2-1 j<q Yyq1+Wip (diq2_1dy21 +Upq a (44) pl tpq2 tpj2 tpj2 ) If the above constraints are added to the model, because there is no more idle time between processing jobs of a group, constraint (8) of the original model in chapter four, can be added to SP2 as follows: C2 = C(ll)2+Setl2+WTP2 i = 1,2,3,.. .,a (45) Adding this constraint helps to solve SP2 easier because it restricts the completion time of each group. 6.2.2 Simplifying the Three-Machine Problem Consider the three-machine problem. The model for the RMP is as follows: h1: The number of solutions related to M1 in RMP h2: The number of solutions related to M2 in RMP h3: The number of solutions related to M3 in RMP h3 a bmax MinZ= h=1 i=lj=1 43 (46) Subject to: h=1 (x2 p=l w2t12)- hl=1 i = 1,2,.. .,a I= 1,2,.. .,bmax (47) i = 1,2,...,a h=l (x3- w3t3)- p=1 hl=1 I= 1,2,.. .,bmax (48) 71 hk =1 k= 1,2,3 (49) = 0,1 The dual problem is as follows: 3 3 MaxZ'= k=2 1=1 j=l d(0)+ Mrrl aM (50) ST: a d2(X1i)+ai0 (51) i=1 j=1 abm a ab i=1 j=I d(X p=I W1 t) + cr2 ( 1=1 j=1 d13X,,2) 0 (52) a a a 1=1 j=1 d(X13- p4 Wj1,t13)+a3-(1=1 j=1 Xy3)° (53) d1k 0 a Unrestricted The sub-problems are as follows, respectively: a bmax SP1 Max w1 = 1=1 j=1 d,(Xi) (54) Subject to: Constraints (2) through (11) of the original problem in chapter 4 a bm, a a bmax MaxW2= /=1 j=1 d2(X2-p=lWt12)-Ei=1 j=1 dj3Xy (55) Subject to Constraints (2) through (7), (9) through (11), and (13) of the original problem in chapter 4 and an upper bound for Xq2 a a a SP3 Max W3=> dy3(Xy3-WtJ3)- (56) i=1 j=1 pl 1=1 j=1 Subject to Constraints (2) through (7), (9) through (11), and (13) of the original problem in chapter 4 and an upper bound for X13 In order to prevent any idle time among the process time of jobs in a group and preventing the unbounded solution in any of the SPs, the coefficient of Xk in any sub- problem should be negative (the objective function of SPs are maximization). The coefficients of1(,1k in sub-problems are shown in Table 6.2: 72 Table 6.2 The coefficient of X1k's in SPs Sub Problem I X,i Coefficient SP1 -d112 SP2 d12 - d,3 SP3 d113-1 Thus, the inequalities below should hold true in order to prevent unbounded solutions: dU2? 0 d2-d3 0 d3 1 To support these rules, a set of artificial variables, Si11, are added to the relational constraint for the Jth job of slot i of the constraint between M1 and M2 with the coefficient equal to 1. The same set of artificial variables is also added for the relational constraint between M2 and M3 with the coefficient equal to -1, respectively. Another artificial variable set, S2, is also added to the relational constraint of the th job of slot i between M2 and M3 with the coefficient equal to 1. The coefficient of this set in the objective function is equal to 1. The new model and its dual problem are as follows: a bmax a bma MinZ = X3 + S2 (57) h=1 1=1 j=1 i=I j=1 Subject to: h2 a A(x2 W,2t12) i = 1,2,.. .,a (58) h=1 p=l hl=1 j= l,2,...,bmax h3 a h2 i = 1,2,.. .,a h=l %(x3 W,3tJ3) p=1 hl=1 h12S2ySl 2'vhl 0 j= 1,2,.. .,bmax (59) H = k=1,2,3 (60) h=1 = 0,1 The dual problem of the new model is as follows: 3 abmax 3 MaxZ'= k=2 1=1 j=1 d,,(0)+ M=1aM (61) ST: 73 - a bmax d,2(X,i) + ai 0 (62) i=1 j=1 a a bmax i=1 j=1 d(Xz p=I W1tJ2)+a2( a bmax j=1 j=1 dy3X,)O (63) i=1 a bniax j=l di(X1 a p=l Wi t3) + a3 ( a bmax 1=1 j= x) 0 (64) dl j2 J i 1,2,.. .,a , ,. . ., max d2-d, 0 (66) AO (4f 3 ak Unrestricted (67) This model will provide a lower bound for the original problem as well. There are rules that can relax the job sequence constraints of sub-problems. These rules for each sub- problem are discussed in the following sections. 6.2.2.1 The Relaxing Rule for SP1 in the Three-Machine Problem The rules to relax SP1 for three-machine problem is the same as the one discussed for two-machine problem. Thus, the same constraints can be applied for solving SP1 of three machine problem. y>W[2_dY2+UdY2] i=l,2,...,a (68) tpq2 tpj2 tpj2 j,q 1,2,... bmax Yiq1+1Vq, d q2 1 d__-1 ,j2 d 2 1 j<q +Upq (69) P=l tpqz tpj2 tpjZ J 6.2.2.2 The Relaxing Rule for SP2 in the Three-Machine Problem The objective function of SP2 is as follows: a bmax a a bmax i=t d(Xj W,ptpJ2)+a2( i=l j=1 dyX) (70) j=1. p=l Consider two different sequence of processing jobs which are discussed in section 6.2.1.1. The Gantt chart of processing jobs is shown in Figure 6.1. Assume that the 74 objective function value of S1 is less than S2 The completion time of these jobs in S1 and S2 is shown in Table 6.1. In this case, by substituting the value of completions times in the objective function of SP2, the inequality below holds true: d2('tA + t, t) d31(tA + t) + d2J(tA + t1 + ,- t) d3(tA + t1 + t) d31(tA + tj (71) tJ)-d3J(tA +t,) +d21(tA +tJ+tj-tj)-d3,(tA +t+t) by simplifying the inequality, the result is: d2J d3J <d2 d3 (72) ti ti Based on this fact, at the optimal solution of SP2, if there is no idle time among the process time of jobs, the sequence of jobs that belong to a group should respect inequality (72). Thus, by applying the constraints below to the 5P2, for any given group sequence, the sequence of jobs in a group can be calculated by the model. Yyg Wp[32 d3-d2 Upg d3d2] i = 1,2,...,a (73) j,q - 1,L,... 0max P1 tpq2 tpi2 tp,2 j q Ygq I+JV,p diq3diq2 d3_du2+U d3-d2 (74) P1 tpq2 tpi2 tpi2 In these constraints, if job q is processed after job j in slot i, then the value of w [d q3 d igZ d y3 d u2J is positive. In this case, by applying constraint (73) in tpq2 tpi2 SP2, Yijq will be equal to 1. Constraint (74) supports this value as well. On the other hand, if job q is processed before job j in slot i, then the value of w [d d iq3 1q2 d3 d2] is negative. By applying constraint (73) in SP2, YzJq tpq2 tp2 will have a value greater than a negative number. In this case, constraint (74) forces Yijq to take the value equal to zero. The reason for adding the last part d d 2] to pg the constraints (73) and (74) is the same as the one discussed in the previous sections. 75 6.2.2.3 The Relaxing Rule for SP3 in the Three-Machine Problem The rules to relax SP3 for three-machine problem is the same as the one discussed for M2 in two-machine problem. Thus, the same constraints can be applied for solving SP3 of three machine problem. These constraints are as follows: v a d3-1 Upq1yq L..i i = 1,2,.. .,a (75) tpq3 tpj3 tpj3 ) j,q = 1,2,... bmax j <q y <1(diq3_1du3_1 +Upq d3-1l a ijq (76) tpq3 tpj3 tpj3 ) 6.2.3 A Generalized Model for Simplifying the Multiple-Machine Problems Consider the rn-machine problem. The RMP of the problem is shown in (11) through (13) and its dual problem is depicted by (14) through (17). The sub-problems are shown in (18) through (20). In order to prevent any idle time among the process time of jobs in a group and preventing the unbounded solution in any of the SPs, the coefficient of Xjk in any sub- problem should be negative (the objective function of SPs are maximization). The coefficients of X,jk in sub-problems are shown in Table 6.3: Table 6.3 The coefficient of X1k's in SPs Sub Problem I X Coefficient SP1 SPk k 2,.. .,m-1 dk- d]) SPm dijmA Thus, the inequalities below should hold true in order to prevent unbounded solutions: ?0 i= 1,2, ...,a duk du 0 j1,2,..., bmax 76 k = 2,3,.. .,m-2 n = k+1 dum <1 To support these rules, a set of artificial variables, Sl,,, are added to the relational constraint for the j job of slot j of the constraint between M1 and M2 with the coefficient equal to 1. The same set of artificial variables is also added to the relational constraint between M2 and M3 with the coefficient equal to -1, respectively. Another artificial variable set, S2, is also added to the relational constraint of the jill job of slot i between M2 and M3 with the coefficient equal to 1. The same set of artificial variables is also added to the relational constraint between M3 and M4 with the coefficient equal to -1, respectively. These artificial variables are added respectively to consecutive constraints from M2-M3 through Mmi Mm. The new model and its dual problem are as follows: h3 a bmax a bmax MinZ = SM (77) h=1 i=1 j=1 i=1 j=1 Subject to: h=1 x2- Wt) hl=1 I = 1,2,...,a j = 1,2,.. .,bmax (78) a hk-1 I = 1,2,.. .,a h=1 p=l W tJk) hl=1 A' k-1) Sk S(k-1) 0 1,2,.. .,bmax (79) k=2,3,...,m H h=1 k1,2,m (80) = 0,1 The dual problem of the new model: m abmax m MaxZ= duk*(0)+ aM (81) k=2 i=1 j=1 M=1 ST: - a bmax d,(X) + ai 0 (82) i=1 j=1 a bmax a a bmax dq(Xjk WitJk)+ak(Y dj(k+1)Xjk)O k=r2,3,...,m1 (83) i=1 j=1 p=l i=1 j=1 77 a a a bmax dym(X,jrn Wiptpjmam Xym) j1 j=1 p=l 1=1 j=t i = 1,2,.. .,a d2 1 85 j= 1,2,. . . ,bma, dk d(k+1) 0 k=2,3,. . .,m-1 (86) dm 0 ak Unrestricted (87) This model will provide a lower bound for the original problem as well. There are rules that can relax the job sequence constraints of sub-problems. These rules for each sub- problem are discussed in the following sections. 6.2.3.1 The Relaxing Rule for SP1 in the Multiple-Machine Problem The rules to relax SP1 for m-machine problem is the same as the ones discussed for two-machine and three machine problems. Thus, the same constraints can be applied for solving SP1 of the multi-machine problem. YjqWipiU H a P=l Idu2 tpjl diq2 tpql diq2' tpqi) i = 1,2,.. .,a (88) a diq2" j,q=1,2,...bmax j<q 1du2 diq2 Y,jq +Upql (89) P=l tpjl tpql tpqi) 6.2.3.2 The Relaxing Rule for SP2 through SPm..i in the Multiple-Machine Problem The objective function of any sub-problem, except the first and the last sub-problems, SPk(k= 2,3,..., rn-i) is as follows: a bmax a a bmax j=1 d(X Wip t) + ak ( d+l) Xk) (90) 1=1 p=l i=1 j=l To find a rule to relax the job sequence constraints, consider two different sequences of processing jobs which are shown in Figure 6.1. Assume that the objective function value of S1 is less than S2 The completion time of the jobs which make the difference between S1 and S2 are shown in Table 6.1. In this case, by substituting the value of completions times in the objective function of SPk, the inequality below holds true: dk(tA + t, t1) d+J)j(tA + t,) + dkf(tA + t1 + t3- t) d+J)J(tA + t1 + t) (91) d+J)J(tA + t) d(kJ)J(tA + t) + dkj(tA + t + t1 t,) d+1)j(tA + tj + t1) by simplifying the inequality, the result is: d kj d(k+l)J d, d(k+l)I (92) ti ti Based on this fact, at the optimal solution of SPk, if there is no idle time among the process time of jobs, the sequence of jobs that belong to a group should respect inequality (92). Thus, by applying the constraints below to the SPk, for any given group sequence, the sequence of jobs in a group can be calculated by the model. The reason (dU(k+1) duk) for adding the last part Upq to the constraints is the same as that tpjk ) discussed in the previous sections. a 'dig(k+I) diqk dy(k+1) dUk du(k+1) dik)') i = 1,2,.. YgJq P +Upq( J,q=z1,2,...bmax (93) p=1 pqk tpjk tpjk ) j<q a (dig(k+1)diqk d/(k+)dUk k = 2,3,.. .,m4 (94) +Upq( tpqk tpjk tpfk 6.2.3.3 The Relaxing Rule for SPm in the Multiple-Machine Problem The rules to relax SPm for rn-machine problem is the same as the one discussed for SP2 in the two-machine problem. Thus, the same constraints can be applied for solving SPm of the multi-machine problem. These constraints are as follows: a d__1 d.1 + Upq d.1 iqm ijm ijm (95) Yyq w0l i = 1,2,.. .,a Pl tpqm tpjm tpjm i+ a qm 1 d' d-' urn + Upq ym j,q 1,2,... bmax j < q (96) P=l tpqrn tpjrn tpjrn 79 6.2.4 Adding an Auxiliary Constraint to Simplify Finding the Sequence of Dummy Jobs As mentioned, the dummy jobs of a group are processed as the last jobs of the group. In order to facilitate solving sub-problems, the constraint below is added to each sub- problem. This constraint relaxes the job sequence binary variables of dummy jobs in the mathematical model. In this constraint the parameter tlpjk is defined as follows: th p .,a I-i; if the job of group p is a dummy job tlpjk j l,2,...,bmax 1tk ; otherwise k= 1,2, .., m If q is a dummy job, the value of tlpqk is equal to -1. Thus, the right hand side of the constraint below is equal to 1 if job q is a dummy job. This leads to Yyq = 1 if job q is a dummy job. Adding the constraint below to SPs, causes the dummy jobs of each group considered to be processed as the last jobs of the group. (-1a i =1,2,...,a Y VW /q p1 h.d t1pqk) p j,q=1,2,...bmax j<q (97) 6.2.5 Solving Sub-Problems As mentioned, the sub-problems are NP-hard, so it is better to avoid solving them optimally as long as possible. It is clear that during solving a node, any colunm with positive coefficient can be added to RMP in order to help improve the objective function value. Based on this fact, it is not necessary that the sub-problems be solved optimally during the intermediate levels of solving a node. Thus, the heuristic algorithm (tabu search) is applied to solve sub-problems until it can provide a solution with positive coefficient. When the heuristic algorithm is unable to find columns with positive coefficient for all sub-problems, the sub-problems are solved optimally. This process is performed until none of the sub-problems can provide a column with positive coefficient. At this time, the node is solved optimally. In other words, at the end of each node, all sub-problems should be solved optimally to make sure that the optimal solution of a node is found. 6.2.6 Branching The LP relaxation of the RMP which is solved by colunm generation will not necessarily provide integral solution. In this case, applying a standard branch-and- bound procedure to the RMP with its existing columns will not guarantee an optimal (or feasible) solution (Barnhart, 1998). Barnhart et al. (1995) and Desrosiere et al. (1995) suggested to branch on the original variables of the problem. This means that the branching rules for the proposed problem should be based on AS(+l)l or other original variables. Because the sequence of jobs in a group can be calculated by the rules discussed in the previous section, the branching is only performed on the group variables, i.e., AS(+J)l 'S or W1,. In this case, to find the best variable to branch, all AS1(+J)l variables related to all machines are considered. Each column that exists in RMP has a coefficient (,%) at the optimal solution of each node. To find the best variable for branching, for each AS(+l)l related to each machine, a branching coefficient is calculated. The value of this coefficient is sum of the coefficient of the existing columns in RMP in which ASi1(+J)l =1. Wilhelm et al. (2001) suggested to branch on the original variable in which its branching coefficient has the nearest value to "0.5" compared to the other variables. Thus, branching occurs on the variable in which its branching coefficient has the closest value to 0.5. The branching rules applied for this problem is shown as a flow chart in Figure 6.2. Suppose that AS(+l)l (AS(+J)l that belongs to the kth sub-problem) has the closest value to 0.5 among all variables. In this case, the parent node is branched on two new nodes. In one node, constraint AS1J,(e+J), =1 is added to SPk and in the other node the constraint AS(l+J)l = 0 is added to SPk as well. All existing columns related to all machines but the kth machine are added to both new nodes. The columns related to the kth machine are separated in two parts. The ones in which ASi(IJ), 1 are added to the first node and the remaining are added to the one which includes the ASI1,(eI)l = 0 constraint. Initialize Coefficient of Lamda's Pick the First Existing Column in RMP V Get the Coefficient of the Column Identify the Relative Machine Pick the First AS V Yes AS=i' V ACId the coefficient to the No L, ............................................................ branching coefficinet of AS V No Are all ASs checked? Yes V Yes a there any other column in RMP No Pick the AS with the value closest to 05 V Apply Branching Figure 6.2 The branching rule flow chart 6.2.7 Stopping Criteria The branching process can be continued until all nodes provide an integer solution, be infeasible, or are fathomed. Because finishing this process requires a considerable amount of time, especially for large size problems, and considering the required amount of time for solving sub-problems optimally which are NP-hard, a time limitation is applied for solving problems. During solving problems to obtain lower bounds, if the time spent for solving a problem exceeds 4 hours, the sub-problems of the current node started is solved optimally once. After solving all sub-problems optimally, the algorithm stops and the best lower bound obtained so far is reported as the lower bound of the problem. The maximum time spent to solve a sub-problem is set to at most two hours. If a sub- problem cannot be solved optimally in two hours, solving the sub-problem is stopped and the lower bound of the sub-problem is considered as the objective function value of the sub-problem. During solving the nodes, the breadth first procedure is used to solve the nodes. In other words, all nodes of a higher level have priority to be solved compared to the nodes in lower levels. 6.2.8 The Software Application The B&P algorithm is coded by concert technology concept of CPLEX 9.0 version, by applying the beta version of a library function called MAESTRO developed for the B&P algorithm. 6.2.9 The Lower Bound for the Original Problem During solving a problem with the B&P algorithm, the algorithm stops for one of the following reasons: The B&P algorithm is solved optimally. The B&P algorithm cannot be solved optimally because of imposed time limitation. If the B&P algorithm is solved optimally, the optimal solution of the mathematical model is a lower bound for the original problem. If the B&P algorithm is not solved optimally, there are some rules to calculate the lower bound of the original problem. These rules are discussed as follows: If in a problem, all nodes cannot be solved because of time limitation, the lower bound of the original problem is the minimum value of the solved nodes in which all their branches are not solved yet. For instance, consider a problem in which all possible nodes of the problem cannot be solved. The objective function values of solved nodes for such a problem are shown in Figure 6.3. Suppose the B&P 6th algorithm is stopped by the end of the node because of time limitation. In this case, the lower bound of the original problem is equal to the minimum objective function value of nodes number 3, 4, and 5 which is equal to 154. Node(1) Level 0 I 150 I I I Node(2) Node(3) Level I I I I I I 153 I I 156 I I 1 I I Node(7) Node(4) I Node(5) I I Node(6) I I I Level 2 Unable to I 155 156 solve Figure 6.3 The objective function value of nodes for an incomplete problem In some problems, the sub-problems cannot be solved optimally in their time limitation (two hours). In such cases, the algorithm stops solving the sub-problem after two hours and the lower bound of the sub-problem is considered as the objective function value of the sub-problem. If in a problem, a node cannot be solved optimally, the lower bound of the problem is equal to the objective function value of the recent RMP minus the summation of the objective function values of sub-problems (Lubbecke and Desrosiers, 2004). 6.2.10 Example The problem shown in chapter four is considered to find a lower bound as an example. As explained before, the optimal solution for this problem, by considering minimization of the sum of the completion times criterion, is equal to 165. The heuristic algorithm (tabu search) also provides a solution equal to 165. As discussed, the B&P algorithm requires an initial solution. The experiment showed that, if an initial solution with good quality is considered, the efficiency of the algorithm will increase. Thus, the result of the tabu search is considered as the initial solution for the problem. The RIVIP is solved to find the optimal solution of the first node. The value of RMP and sub-problems during the first node iterations are shown in the table below. Table 6.4 The result of the first node Iteration RMP Alphal Alpha2 SP1 SP2 o 165.000000 0.000000 165.000000 0.000000 46.000000 1 164.500000 111.500000 53.000000 16.500000 15.500000 2 160.375000 132.250000 28.125000 6.131579 3.375000 3 160.000000 137.000000 23.000000 1.000000 0.000000 4 159.000000 136.000000 23.000000 0.000000 0.000000 The branching coefficients of AS(1+])l at the end of the first node are as shown in Table 6.5. Table 6.5 The branching coefficients of AS(1+J)l at the end of the first node Absolute Absolute AS difference AS difference Machine Value value Machine Value value compared T compared L1JL to 0.5 P 1 to 9.5 111 1 0 0 211 1 0 0 TTT 2 0 0 211 2 0 0 iT 1 0 0 T2 1 0 0 1 1 2 2 0 0 2 T 2 2 0.762 0.238 113 1 1 0 2T3 1 0 0 1 T 2 0.238 0.238 2 T 3 2 0 0 !21 1 0 0 221 1 0 0 !21 2 0 0 221 2 0 0 222 !2 ! !.! 1 2 1 0 0 0 0 0 0 222 223 1 2 1 0 0 0 0 0 0 !.1 2 0 0 223 2 0 0 1 0 0 231 1 0 0 3 1 2 0.762 0.238 2 T3 2 0 0 1 32 1 0 0 232 1 1 0 T 3 2 2 0 0 2 3 2 2 0.238 0.238 1 0 0 1 0 0 2 0 0 233 2 0 0 Based on the result, the best variable to branch is AS21123. In this case, two new nodes are created. In the first node, the constraint AS1123 = 1 is added to its SP2. AS1123 = 0 is added to SP2 of the second node as well. All existing columns of RMP related to the second machine are passed to the new nodes according to the added constraint. All columns related to SP1 are added to both nodes. Then the second node which has one more constraint at its SP2 (AS1123 = 1) is solved. The value of RMP and sub-problems during its iterations are as follows: Table 6.6 The results of the second node Iteration RMP Alphal Alpha 2 SP1 SP2 5 165.000000 85.000000 80.000000 7.000000 10.000000 6 161.888889 121.777778 40.111111 3.592593 4.111111 7 160.553114 105.851648 54.701465 2.102564 2.401099 8 159.984694 122.405977 37.578717 1.306122 0.637755 9 159.802286 124.118857 35.683429 0.676000 0.432000 10 159.642857 128.428571 31.214286 0.000000 0.000000 The optimal solution of this node is 159.642857. The branching coefficients of all variables are equal to zero. Thus, branching cannot be continued for this node. The other node generated by the first node which has one more constraint at its SP2 (AS1123 = 0) is solved. The value of RMP and sub-problems during its iterations are as follows: Table 6.7 The result of the third node Iteration RMP Alphal Alpha 2 SP1 SP2 11 159.000000 136.000000 23.000000 1.000000 1.000000 12 159.000000 136.000000 23.000000 0.117647 0.000000 13 159.000000 136.000000 23.000000 0.000000 0.000000 The optimal solution of this node is 159.0000. The branching coefficients of all variables are equal to zero. Thus, branching cannot be continued for this node. At this stage, there is no node to branch and the algorithm stops. The lower bound is 159.000 with an error of 3.77%. CHAPTER 7: EXPERIMENTAL DESIGN In this research, several versions of tabu search are used to find a good quality solution. Thus, the design of experiment techniques should be applied for choosing the most efficient version. Some random test problems are created and the solutions of these problems obtained by these algorithms are compared by design of experiment techniques to identify the best algorithm. The steps of performing the experiments are as follows: 7.1 Steps of the Experiment The steps of performing the experimental design for the proposed research problems based on Montgomery's text (2001, p 14) are as follows: 1. The goal of performing the experimental design is to compare the quality of the solutions of the proposed heuristic algorithms (several versions of tabu search) as well as the efficiency of the algorithms. Another interest of performing the experiment is to identify if there is a difference between the initial solution generator techniques. 2. The factors considered for this research problem are as follows: - Number of groups: It is clear that by increasing the number of groups, the problem becomes more complicated and consequently the quality of solutions provided by heuristic algorithms may decrease, so the number of groups is considered as the first factor in this study. The main idea of applying group scheduling techniques in production is to decompose the production line into small size and independent cells. Thus, in industry too many groups are not expected to be assigned for processing in the same cell. Logendran et al. (2006) investigated group scheduling problems by considering at most 16 groups in a cell. Schaller et al. (2000) performed their experiments by considering at most 10 groups in a cell. Based on these experiences, the maximum number of groups in a cell is considered equal to sixteen in this research. The levels of this factor can be ['I'J defined in three different categories: small, medium, and large. Small size problems are problems including 2 to 5 groups. Problems with 6 through 10 groups are considered as medium size problems, and finally problems with 11 through 16 groups are classified as large size problems. - Number of jobs in a group: The number of jobs that belongs to a group may affect the quality of solution. This can be considered as the second factor of the experimental design. In this research, the maximum number of jobs that belongs to a group in a problem is considered as a factor. For instance, if in a group scheduling problem with three groups, groups have 3, 6, and 9 jobs respectively, then the problem is classified as a 9-jobs problem. In this research, the maximum number of jobs that belong to a group is considered as ten which is the same as Schaller et al. 's (2000) suggestion. This factor has also three levels. Level 1 includes problems with at most 2 to 4 jobs in a group. Problems with 5 to 7 jobs in a group are classified as level 2, and finally if one of the groups of a problem includes 8 to 10 jobs, then the problem belongs to level 3 based on its number of jobs. The ratio of set-up times: The preliminary experiments indicate that the quality of solutions strongly depends on the ratio of set-up times of groups on machines. This factor must be considered as the third factor. This can be considered as a factor with three levels. These levels are as follows: Level!: 0 ratio<0.8 Level2: 0.8 ratio.1.2 Level 3: 1.2 ratio cc It is clear that this factor should be applied to all machine pairs. For instance, in a three machines problem this ratio for "M1/M2" and "M21 M3" should be compared. Thus, this can be considered as two separate factors in this problem. Level 1 of each factor investigates the problems in which the ratio of set-up times of machine pair is increased. In other words, if the set-up time of groups on machine i is smaller than the set-up time of groups on machine i +1, then the ratio of "Me /M+1" has a value less than 1. If this value is less than 0.8, it is assumed that there is significant difference between the required set-up times of groups on these pair of machines. Level 2 includes the problems in which this ratio has a value almost equal to 1. Thus, the problems whose required set-up times on the investigated machine pair are almost equal are classified in this level, and finally level 3 investigates the problems in which the required set-up time of machines is decreased in the investigated machine pair. - Initial solution: Based on preliminary experiments, providing an initial solution with good quality can improve the quality of final solutions. Thus, the applied initial solution generator should be considered as a factor. Two different techniques of generating initial solution are applied for each criterion and each of them can be considered as a level for this factor. The number of machines in a cell is not a suitable factor in this experiment. Two main reasons are as follows: The first reason is that in a cellular manufacturing system, the number of machines in a cell is always fixed. In other words, in cellular manufacturing design, each cell includes a specific number of machines, so the number of machines in a cell is not a variable. The second reason is that the number of machines does not change the number of feasible solutions (possible sequences) of the problem. The number of feasible sequences for a problem only depends on the number of jobs in a group and the number of groups. Based on this fact, the number of machines may not affect the quality of solutions. 3. The response variables of the experiments are the objective function value (OFV) of the algorithms (i.e., the makespan of the problem or the sum of the completion times of the problem) and the time consumed to perform the algorithm. 4. The basic principles of experimental design such as replication (generating several random test problems for each basic experiment) and blocking (defining some test problems to ignore nuisance factors) is considered. It is not required to solve test problems with a random sequence because the results are deterministic (the objective function value of the algorithms is deterministic). 5. Consider the factors that are used to define the model. The first three factors, i.e., the group, job, and the set-up ratio factors are the ones which are used to generate a test problem. Then, each test problem is solved by the heuristic algorithms by applying one of the two initial solution generators. Based on this explanation, each experimental unit of the first three factors (which generate a test problem) is split into six different parts to be solved by one of the combinations of the heuristic algorithms and the initial solution generators. Thus, the split plot design is the most appropriate model to compare the results. As the test problems are created based on the groups, jobs, and set-up ratio factors, these factors are put in the whole-plot and the remaining factors, i.e., the algorithm factor and the initial solution generator factor (which are the most important factors) are put in the sub-plot. Each test problem is considered as an experimental block. The factors in the whole plot are considered nested to generate a test problem. A problem instance, which is considered as a block for the sub-plot factors, is generated for specific levels of whole-plot factors. The problems (blocks) are treated as a random factor. The factors that belong to whole plot generate a block in a nested way. This model (the split plot design) is also applied by Amini and Barr (1993) to a similar problem. They performed an experimental design to compare the performance of three Network re-optimization algorithms. In their experiments, they defined several classes of problems and generated some test problems for each class. They applied a split plot design in which the factors that generate the test problems are put in the whole plot and the remaining factors are put in the sub-plot. 6. A problem instance, which is considered as a block for the sub-plot factors, is generated for specific levels of whole-plot factors. The problems (blocks) are treated as a random factor. The factors that belong to whole plot generate a block in a nested way. 91 7. Thus, the model is a mixed model, because it includes fixed factors (groups, jobs, set-up ratios, algorithms, and initial solutions) as well as random factor (problem instances). 8. For example, the model of the experiment for a 3-machine problem can be represented as: Yijklmnr = p + G1 + + Rik + R21 + (G*J) + (G*R1)ik + (G*R2)i + (J*R1)Jk + (J*R2).i + (R1*R2)kl + Tt(jkl) + am + I,,+ (G*a)jm + + (.J*a)jm + (J*I)1 +(R1*a) + (R1*J) + (R2*a),m + (.R2*J)1 + (a*J)mn + (G*J*R1),jk + (G*j*R2)i + (G*J*a)ym + (G*J*I + (G*R1*R2)jkl + (G*R1*a)j + (G*R1*I)ikn + (G*R2 *a)iim + (G*R2*1)11 + (G*a*I)jmn + (J*R1*R2)kl + (J*R1*a)j + (J*R1*I)jim + (J*R2*a)jim + (J*R2*I)3i + (J*a *J)jmn + (Ri *R2 *a)klm + (Ri *R2 *J)kl + (Ri *a *J), + (R2*a*I)imn + (G*J*R1*R2) + + (G*J*R1*1),, -- (G*J*R2*a)uim + (G*J*R2*1)i+ (G*J*a*I)jjmn + (G*R1*R2*a)jklm + (G*RJ*R2*1)jkln + (G *R1 *a *J)., + (G *R2 *a *J)jlmfl + (J*R1 *R2 *a)jklm + (J*RJ *R2 *J)jklfl + (J*R1 *a *J)., + (J*R2 *a *J)jlmn + (Ri *R2 *a *J)klmfl + (G *J*R 1 *R2 *a)yklm + (G*J*Ri *I?*I)kl + (G*J*Ri *a*]) + (G*J*R2*a*I)yimn + (G*Ri **a*J).k1 + (J*Ri**a*I).kl + (G*J*R1*R2*a*I)Uklmn + Cijklmnr where p the overall mean G1: the effect of group factor, i = 1, 2, 3 .J the effect of job factor,j = 1, 2, 3 Rik: the ratio of set-up time ofM1/M2 factor, k 1, 2, 3 R21: the ratio of set-up time of M21M3 factor, 1 1, 2, 3 the block factor (a random factor) am: the algorithm effect factor m = 1, 2, 3 I: the algorithms effect factor n = 1, 2 8ijklmnr the error term The interactions of the effects are also considered in the model. 9. The goals of performing the experimental design are as follows: Which heuristic algorithm has the best performance? Is there any difference between the initial solution generators? The hypothesis test to investigate for the first goal is: 92 H0: a1=a2=a3 H1: if any of the a 's is different from the others and the hypothesis test to investigate for the second goal is: H0: 11=12 Hi: I112 11. The significant level is chosen equal to 5%. 12. Model adequacy checking is performed by checking the normality assumption. This could be made by plotting a histogram of the residuals. A useful procedure is to construct a normal probability plot of the residuals. If the error distribution is normal, this plot should look like a line. In visualizing the straight line, more emphasis should be placed on the central values of the plot than on the extremes (Montgomery, 2001). This comparison is performed for minimization of makespan and minimization of sum of the completion times criteria, for 2, 3, and 6 machine problems separately by considering the generated test problems. If this technique is applied for problems with more machines, the number of test problems which should be investigated will increase highly. For instance, for a six- machine problem, because the number of whole-plot factors will increase to 7 (group factor, job factor, and 5 factors for ratios of sequenced machines), if in each cell only 2 replicates are applied, then it is required to solve 37*2 = 4,374 problems. By considering that there are three versions of tabu search and two different initial solution generator mechanisms for each criterion (minimization of makespan and minimization of the sum of the completion times), 4374*3*2 = 26,244 problems should be solved for each criterion. This is the correct way to perform the experiment, but in the interest of time, it is not practical for this research. Thus, the experiment for problems with more than three machines is proposed to be performed by just applying one factor for the 93 ratio of set-up times for all machine pairs. In this case, only a factor is defined for the ratio of set-up times of machine pairs. This factor has three levels as explained before. Level 1: 0 ratio .8 Level2: 0.8< ratio<1.2 LeveI3: 1.2< ratio<co Level 1 indicates the problems in which the required set-up times for each machine are increased sequentially. The second level investigates the problems in which the set-up times of all machines are almost equal. And finally, level three investigates the problems in which the set-up times of machines are decreased from the first machine to the last machine. For instance, a six machine problem would belong to level 1 if the ratio of set-up times of its machines has the following relations: 0 M1IM20.8 0M2/M30.8 0M3/M40.8 0M4IM50.8 0 M5/M6 0.8 7.2 Test Problems Specifications To perform the design of experiment techniques, two test problems (replicates) are generated for each cell. These problems are generated based on specifications below: . The run time of each job on each machine is a random integer from a uniform discrete distribution [1, 20]. The number of groups is a random integer from a uniform discrete distribution [1, 5], [6, 10], and [11, 16] for small, medium, and large size problems, respectively. The number of jobs in a group is a random integer from a discrete uniform distribution [2, 4], [5, 7], and [8, 10] for small, medium, and large size problems, respectively, based on the job factor. The set-up time of groups on each machine for two-machine problem is shown in Table 7.1. As discussed, in the first level the ratio of set-up time between M1 and M2 should be less than 0.8. If these set-up times are generated bas U[ 17,67] for M1 and M2 respectively, then the average ratio of set-up tim to 0.607, which satisfies the condition. The set-up times for other levels are generated similar to this rule to satisfy the required ratio of each level as well. The set-up times of groups on each machine for problems with three and six machine problems are shown in Table 7.2 and Table 7.3. The set-up times shown in Table 7.2 for three machine problem can be applied for set-up time ratio factors (Ri and R2). The distribution to generate random set-up time for each machine in each level is chosen based on the required ratio among set-up times. Table 7.1 The set-up time of each machine on two-machine problems Machine [ Levell [ Level 2 Level 3 M1 U[1,50] U[1,50] U[17,67] M2 U[17,67] U[1,50] U[1,50] Table 7.2 The set-up time of each machine on three-machine problems Machine Level 1 Level 2 Level 3 M1 U[1,50] U[1,50] U[45,95] M2 U[17,67] U[1,50] U[17,67] M3 U[45,95] U[1,50] U[1,50] Table 7.3 The set-up time of each machine on six-machine problems Machine Level 1 Level 2 Level 3 U[1,50] U[1,50] U[300,350] M2 U[17,67] U[1,50] U[170,220] M3 U[45,95] U[1,50] U[92,142] M4 U[92,142] U[1,50] U[45,95] M5 U{170,220] U[1,50] U[17,67] M6 U[300,350] U[1,50] U[1,50] 7.3 Two Machine Test Problems The two machine test problem has factors below: Group factor with three levels . Job factor with three levels . The set-up ratio M4/ M2 with three levels The initial solution factor with two levels Algorithm factor with three levels To cover these factors two replicates are generated based on the first three factors. Then each problem is solved by each heuristic algorithm with both initial solution generator techniques. Thus the problems can be classified in 27 different classes as follows: Table 7.4 Small size problems based on group category (two machine) Set-up category Job category Level 1 Level 2 Level 3 Small Cl C2 C3 Medium C4 C5 C6 Large C7 C8 C9 Table 7.5 Medium size problems based on group category (two machine) Set-up category Job category Level 1 Level 2 Level 3 Small ClO Cli C12 Medium C13 C14 C15 Large C16 C17 C18 Table 7.6 Large size problems based on group category (two machine) Set-up category Job category Level 1 Level 2 Level 3 Small C19 C20 C21 Medium C22 C23 C24 Large C25 C26 C27 For each of these classes, two random problems (replicates) are generated. Thus, 54 test problems are generated for two machine problems. The specifications of these problems are shown in the table below: Table 7.7 The specification of test problems generated for two machine problem 0 0 0 - - - - . 0 0 1 4 4 13 29 6 7 32 ci C15 2 3 4 10 30 10 7 49 3 4 8 31 8 9 41 C2 C16 4 2 3 5 32 8 10 51 5 4 16 33 10 10 71 C3 C17 6 3 4 8 34 6 8 29 5 7 28 35 9 9 58 C4 C18 8 4 6 22 36 6 10 31 3 7 17 37 11 4 29 C5 C19 10 2 7 13 38 13 4 40 5 7 31 39 16 4 46 C6 C20 12 4 7 25 40 13 4 40 13 5 10 31 41 14 4 40 C7 C2 1 14 2 8 15 42 15 4 46 15 5 9 35 43 16 7 79 C8 C22 16 5 10 40 44 13 7 63 17 4 9 31 45 16 7 66 C9 C23 18 4 10 28 46 12 7 65 19 6 4 17 47 15 7 76 cio C24 20 8 4 25 48 14 7 69 21 9 3 22 49 11 10 75 cii C25 22 9 4 27 50 15 10 99 23 10 4 33 51 12 10 83 C12 C26 24 6 4 16 52 16 10 106 25 9 7 60 53 15 10 79 C13 C27 26 10 7 62 54 16 10 108 27 6 7 33 C14 28 10 7 52 7.4 Three Machine Test Problems For the three machine problems, following factors are considered: 97 Group factor with three levels Job factor with three levels The set-up ratio M1/ M2 with three levels The set-up ratio M2/ M3 with three levels The initial solution factor with two levels Algorithm factor with three levels To cover these factors two replicates are generated based on the first four factors. Then each problem is solved by each algorithm with both initial solution generator techniques. Thus, the problems are classified in 81 different classes as follows: Table 7.8 Small group, small job size Table 7.9 Small group, medium job problems (three machine) size problems (three machine) M1/ M2 M2/ M3 Ratio M1/M2 M2/M3 Ratio Ratio Level 1 Level 2 Level 3 Ratio Level 1 Level 2 Level 3 Level 1 Cl C2 C3 Level 1 ClO Cli C12 Level 2 C4 C5 C6 Level2 C13 C14 C15 Level 3 C7 C8 C9 Level3 C16 C17 C18 Table 7.10 Small group, large job Table 7.11 Medium group, small job size problems (three machine) size problems (three machine) M1/M2 M2/M3 Ratio M1/ M2 M2/ M3 Ratio Ratio Level 1 Level 2 Level 3 Ratio Level 1 Level 2 Level 3 Level 1 C19 C20 C21 Level 1 C28 C29 C30 Level 2 C22 C23 C24 Level2 C31 C32 C33 Level 3 C25 C26 C27 Level3 C34 C35 C36 Table 7.12 Medium group, medium Table 7.13 Medium group, large job job size problems (three machine) size problems (three machine) M1/ M2 M2/ M3 Ratio M1/ M2 M2/ M3 Ratio Ratio Level 1 Level 2 Level 3 Ratio Level 1 Level 2 Level 3 Level 1 C37 C38 C39 Level 1 C46 C47 C48 Level 2 C40 C41 C42 Level2 C49 C50 C51 Level 3 C43 C44 C45 Level 3 C52 C53 C54 Table 7.14 Large group, small job Table 7.15 Large group, medium job size problems (three machine) size problems (three machine) M1/ M2 M2/ M3 Ratio M1/M2 M2/M3 Ratio Ratio Level 1 Level 2 Level 3 Ratio Level 1 Level 2 Level 3 Level 1 C55 C56 C57 Level 1 C64 C65 C66 Level2 C58 C59 C60 Level 2 C67 C68 C69 Level 3 C61 C62 C63 Level3 C70 C71 C72 Table 7.16 Large group, large job size problems (three machine) M1/ M2 M2/ M3 Ratio Ratio Level 1 Level 2 Level 3 Level 1 C73 C74 C75 Level 2 C76 C77 C78 Level3 C79 C80 C81 For each of these classes, two random problems (replicates) are generated. Thus, 162 test problems are generated for three machine problems. The specifications of these problems are as follows: Table 7.17 The test problems generated for three machine problem 0 0 - -1 0 - - C - r1 . - -. 0 - '1 0 0 1 2 4 4 4 15 ci C6 2 5 4 16 12 5 4 13 4 4 12 13 5 4 14 C2 C7 4 3 4 10 14 2 4 7 15 4 4 13 C3 C8 6 5 4 16 16 5 4 17 5 4 15 C9 17 2 3 5 C4 8 3 3 7 18 4 4 13 3 3 8 ClO 19 3 6 18 cs 10 4 4 13 20 4 6 22 Table 7.17 (Continued) The test problems generated for three machine problem 0 . 0 0 - 0 E 0 E 0 21 2 7 13 7 4 19 cii C30 22 5 7 31 60 9 4 24 23 5 6 23 61 8 4 23 C12 C31 24 4 6 20 62 9 4 32 25 3 6 17 63 8 4 25 C13 C32 26 4 7 23 64 7 4 21 27 5 6 26 65 10 4 34 C14 C33 28 3 7 19 66 6 4 18 29 5 5 22 67 7 3 18 C34 30 2 7 13 68 10 4 29 31 3 7 15 69 7 4 21 C16 C35 32 5 6 23 70 9 4 30 4 6 19 71 9 4 24 C17 C36 34 3 5 15 72 6 4 18 35 2 7 12 7 7 41 C18 C37 36 5 7 29 74 6 5 28 4 10 34 9 7 50 C19 C38 38 5 9 38 76 8 6 39 4 10 36 77 8 7 46 C20 C39 40 3 10 27 78 9 7 49 41 5 10 9 7 51 C21 C40 42 2 8 16 80 6 6 30 43 2 10 18 81 8 5 36 C22 C41 44 4 9 31 82 10 6 54 45 5 10 49 83 6 6 29 C23 C42 46 4 8 31 84 10 6 49 3 9 24 85 9 6 42 C24 C43 48 5 9 42 86 8 5 36 87 8 5 26 C25 C44 50 3 8 23 88 10 7 40 51 5 8 37 89 9 5 27 C26 C45 52 3 9 23 90 10 7 48 5 8 36 91 9 10 56 C27 C46 54 4 10 36 92 7 9 43 55 6 4 19 93 8 9 49 C28 C47 56 8 4 20 94 9 10 54 57 6 4 19 7 8 33 C29 C48 58 7 4 17 96 10 10 62 100 Table 7.17 (Continued) The test problems generated for three machine problem aD aD I; 0 aD aD 0 0 0 0 E o -t - - o = aD 97 8 9 48 133 16 7 69 C49 C67 98 6 10 41 134 11 7 50 99 10 10 63 135 12 7 50 C50 C68 100 8 9 38 136 14 7 53 101 7 10 48 137 13 7 62 C51 C69 102 10 8 40 138 15 7 66 103 10 10 66 139 14 7 75 C52 C70 104 8 7 36 140 13 7 54 105 6 10 38 141 15 7 59 C53 C71 106 9 8 43 142 16 7 66 107 10 143 13 7 54 C54 C72 108 8 10 60 144 15 7 68 109 13 4 145 12 9 62 css C73 110 15 4 39 146 14 10 74 111 16 4 46 147 13 10 81 C56 C74 112 14 4 42 148 15 10 85 113 13 4 149 16 9 93 C57 C75 114 15 4 45 150 15 10 110 115 15 151 15 10 104 C58 C76 116 14 4 44 152 14 9 77 117 11 4 153 16 10 96 C59 C77 118 14 4 39 154 13 9 66 119 11 4 32 155 14 10 84 C60 C78 120 15 4 50 156 11 10 86 121 11 4 31 157 13 10 89 C61 C79 122 15 4 47 158 15 10 101 123 13 4 36 159 15 10 97 C62 C80 124 12 4 35 160 14 10 86 125 14 4 47 161 11 10 64 C63 C81 126 13 4 40 162 14 8 77 127 12 7 50 C64 128 14 7 83 129 11 7 64 C65 130 14 7 88 131 11 7 69 C66 132 16 7 104 101 7.5 Six Machine Test Problems The six machine test problem has the following factors: Group factor with three levels Job factor with three levels The set-up ratio among machines with three levels The initial solution factor with two levels Algorithm factor with three levels Two replicates are generated for each experimental cell based on the first three factors. Then each problem is solved by each algorithm with both initial solution generator techniques. Thus the problems can be classified in 27 different classes as follows: Table 7.18 Small size problems based on group category (six machine) Set-up category Job category Level 1 ] Level 2 Level 3 Small Cl C2 C3 Medium C4 C5 C6 Large C7 C8 C9 Table 7.19 Medium size problems based on group category (six machine) Set-up category Job category Level 1 Level 2 Level 3 Small dO Cli I C12 Medium C13 C14 C15 _ C17 I Large C16 _ C18 Table 7.20 Large size problems based on group category (six machine) Set-up category Job category Level 1 Level 2 Level 3 Small C19 C20 C21 Medium C22 C23 C24 Large C25 C26 C27 102 For each of these classes, two random problems (replicates) are generated. Thus, 54 test problems are generated for six machine problems. The specifications of these problems are as follows: Table 7.21 The specification of generated test problems for six machine problem 0 0 -t 0 - 0 0 0 - - 0 B -t 0 0 1 29 7 7 34 ci C15 2 3 4 9 30 6 7 31 4 3 10 31 9 9 50 C2 C16 4 2 4 7 32 8 10 49 4 4 15 7 10 46 C3 C17 6 2 4 8 34 10 10 65 5 7 29 35 10 10 56 C4 C18 8 4 6 21 36 8 10 46 9 2 7 11 37 16 4 45 C5 C19 10 5 7 25 38 12 4 32 11 4 6 15 39 14 4 37 C6 C20 12 5 7 20 40 13 4 40 13 2 10 41 11 4 34 C7 C21 14 3 10 29 42 14 4 45 15 4 10 28 43 13 6 47 C8 C22 16 4 9 21 44 16 7 63 17 4 9 28 45 15 7 71 C9 C23 18 3 9 21 46 14 7 60 19 9 4 31 47 15 7 69 co C24 20 8 4 19 48 16 7 78 21 9 4 22 49 13 10 95 C25 22 9 4 25 50 14 10 80 23 8 4 24 51 13 10 72 C12 C26 24 6 4 16 52 15 10 77 25 6 7 28 53 16 9 100 C13 C27 26 7 7 29 54 15 10 117 27 8 6 33 C14 28 9 7 38 103 CHAPTER 8: RESULTS The random test problems are solved by three different versions of tabu search by applying two different initial solution generators. The proper lower bounding technique also provides a lower bound for test problems. The results of the experiments for each criterion are as follows: 8.1 The Results for the Makespan Criterion The results for two, three, and six machine problems by considering minimization of makespan are as follows: 8.1.1 The Results of Two-Machine Problems by Considering Minimization of Makespan Criterion All 54 test problems of two machine problems are solved by heuristic algorithms to find the algorithm with the best performance and the best initial solution generator. The lower bounding technique is also applied for each test problem to evaluate the quality of solution. The solution of the Schaller et al. (2000) algorithm for each test problem is also obtained to compare the results with the heuristic algorithm. The results are presented in three sections: In the first section, the results of the heuristic algorithms are compared to the lower bound to evaluate the quality of solutions. In this section, the results of the experimental design to find the algorithm with the best performance as well as finding the best initial solution generator is presented. In the second section, an experimental design to compare the time spent for heuristic algorithms is performed and the results are presented. In the third section, a comparison between the best found heuristic algorithm and Schaller et al. (2000) algorithm based on the results of the test problems is performed. 104 8.1.1.1 Comparison among Heuristic Algorithms and Lower Bound The results obtained from the heuristic algorithms and the results obtained from the lower bounding technique are shown in Table 8.1. This table has columns described below: Lower bound: The test problems are solved by the lower bounding technique for minimization of makespan criterion and the results are presented in the table below. The time spent to solve the mathematical model for the lower bounding technique for each problem is also provided in this table. The results of the heuristic algorithm: As mentioned before, the two machine problem which considers the minimization of makespan criterion has some advantages compared to the other proposed problems, such as relaxing the heuristic algorithm to a one level search. The results obtained from the heuristic algorithms are shown in Table 8.1 by applying two different initial solution generators. As mentioned before, the job sequence of SDGS problems by considering minimization of makespan for two machine problems can be calculated based on Johnson's (1954) algorithm. Thus, for the second initial solution, the Schaller et al.'s (2000) algorithm is applied to find the sequence of groups. Then the sequence of jobs in each group is calculated based on Johnson's (1954) algorithm. In this table, TS 1 stands for tabu search with short term memory algorithm, TS2 stands for the LTM-Max, and TS3 stands for LTM-Min. The best result of the heuristic algorithms (the one with the minimum objective function value) for each test problem is considered to estimate the quality of solutions. These error percentages are shown in the "Best error" column of Table 8.1. Based on the results, the average error percentage of all test problems compared to the best result of the heuristic algorithms is equal to 0.68% and the maximum error is 4.5% (problem 42). This error percentage is calculated based on the formula below: 105 The heuristic algorithm objective function The lower bound objective function The lower bound objective function Table 8.1 The results of the experiments with test problems for two machine problems by considering minimization of makespan Initial 1 Initial 2 ci) I TS1 TS2 TS3 TS1 TS2 TS3 1 280 0.05 287 287 287 287 287 287 287 0.025 2 237 0.03 237 237 237 237 237 237 237 0 171 0.02 171 171 171 171 171 171 171 0 C2 4 130 0.02 130 130 130 130 130 130 130 0 s 321 0 321 321 321 321 321 321 321 0 C3 6 209 0.03 209 209 209 209 209 209 209 0 403 0.14 404 404 404 404 404 404 404 0.002 C4 8 354 0.02 354 354 354 354 354 354 354 0 264 0.03 264 264 264 264 264 264 264 0 CS 10 152 0 152 152 152 152 152 152 152 0 527 0.02 527 527 527 527 527 527 527 0 C6 12 405 0.2 405 405 405 405 405 405 405 0 13 491 0.01 491 491 491 491 491 491 491 0 C7 14 249 0 249 249 249 249 249 249 249 0 437 0.19 445 445 445 437 437 437 437 0 C8 16 490 0.14 490 490 490 490 490 490 490 0 17 397 0.01 397 397 397 397 397 397 397 0 C9 18 396 0.03 396 396 396 396 396 396 396 0 19 335 0.05 346 346 335 335 335 335 335 0 ClO 20 457 0.06 457 457 457 457 457 457 457 0 21 346 1.73 358 358 358 370 368 368 358 0.035 22 383 2.81 391 391 391 391 391 391 391 0.021 23 583 2.95 604 584 604 590 590 590 584 0.002 C12 24 330 0.19 330 330 330 336 330 336 330 0 25 880 1.53 888 888 888 880 880 880 880 0 C13 26 815 5.55 819 819 819 838 838 838 819 0.005 27 440 0.33 440 440 440 440 440 440 440 0 C14 28 659 3.34 659 659 659 666 659 666 659 0 106 Table 8.1 (Continued) The results of the experiments with test problems for two machine problems by considering minimization of makespan a Initial 1 Initial 2 ci) TS1 J TS2 TS3 TS1 TS2 TS3 29 495 0.36 495 495 495 503 503 503 495 0 C15 30 678 0.17 678 678 678 678 678 678 678 0 31 657 0.08 657 657 657 657 657 657 657 0 C16 32 733 0.09 734 734 734 734 733 734 733 0 33 859 2.64 873 863 873 871 871 871 863 0.005 C17 34 383 0.42 383 383 383 388 383 388 383 0 35 768 1.39 772 772 772 777 777 777 772 0.005 C18 36 471 0.38 474 474 474 474 474 474 474 0.006 37 521 2.78 521 521 521 533 533 521 521 0 C19 38 667 21.36 692 672 692 685 685 685 672 0.007 39 605 83.41 649 623 649 627 612 627 612 0.012 C20 40 547 15.38 569 552 569 567 567 567 552 0.009 41 682 30.91 710 698 710 694 694 694 694 0.018 C21 42 786 24.72 830 821 830 829 826 829 821 0.045 43 1176 21.45 1208 1208 1208 1213 1213 1213 1208 0.027 C22 44 965 21 978 978 978 989 977 989 977 0.012 45 766 34.38 800 800 800 794 794 794 794 0.037 C23 46 817 6.08 834 824 824 835 830 835 824 0.009 47 1064 20.64 1098 1098 1098 1095 1095 1095 1095 0.029 C24 48 1066 28.72 1075 1075 1075 1094 1094 1081 1075 0.008 947 10.59 952 952 952 960 957 960 952 0.005 C25 50 1343 17.75 1355 1355 1355 1345 1345 1345 1345 0.001 51 923 4.8 939 939 939 934 934 923 923 0 C26 52 1286 107 1323 1312 1323 1321 1324 1321 1312 0.02 53 1115 14.15 1138 1138 1138 1149 1116 1149 1116 0.001 C27 54 1374 60.34 1409 1402 1409 1405 1405 1405 1402 0.02 Average Error: 0.68 The experimental design is coded with Statistical Analysis System, SAS, version 9.1, to find the best heuristic algorithm as well as the best initial solution. The normal probability plot of the residuals (evaluated as the difference between the actual value of the objective function and the predicted one by the model) confirms that the residuals have a normal distribution (Figure 8.1). Thus, there is evidence that the parametric 107 statistics-based analysis of variance (ANOVA) can be used to further analyze the results. 15 10 S H Jo -s -10 -15 .1 1 5 10 25 so is 80 95 99 99.9 Nors I Fe-nt Figure 8.1 The normal probability plot of the experimental design of finding the best heuristic algorithm for two machine problem by considering minimization of makespan The result of ANOVA is presented in Table 8.2. The results of the experiment show that there is a significant difference among the objective function values of heuristic algorithms (the result of F test is equal to 0.0048). To find the difference among the heuristic algorithms, the Tukey test is performed. The result of Tukey's test shows that TS2 has a better performance compared to the other heuristic algorithms. The results of the experimental design also show that there is no difference between the initial solution generators for two machine problems (the result of F test is equal to 0.4975). Among the interactions, only the interaction between group factor and algorithm factor (G*A), and job factor and initial solution factor (J*J) are significant. The significant factors and interactions are shown in bold in Table 8.2. Table 8.2 The ANOVA table for two machine problem by considering minimization of makespan for algorithm comparison The Mixed Procedure Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 434444 <.0001 J 2 0 122878 <.0001 Ri 2 0 16974.6 0.0488 A 2 135 5.56 0.0048 I 1 135 0.46 0.4975 G*J 4 0 14163.4 0.0445 G*R1 4 0 4021.26 0.5347 Q*A 4 135 2.69 0.0339 G*I 2 135 1.81 0.1671 J*R1 4 0 7523.18 0.2300 J*A 4 135 0.90 0.4682 J*I 2 135 4.37 0.0145 Ri*A 4 135 0.55 0.7023 R1*I 2 135 0.72 0.4869 A*I 2 135 0.51 0.5992 G*J*R1 8 0 1976.72 0.9140 G*J*A 8 135 0.58 0.7895 G*J*I 4 135 1.24 0.2962 G*Rl*A 8 135 0.18 0.9937 G*R1*I 4 135 1.93 0.1083 G*A*I 4 135 0.48 0.7482 J*R1*A 8 135 0.53 0.8285 J*R1*I 4 135 0.64 0.6352 J*A*I 4 135 1.02 0.4005 R1*A*I 4 135 0.18 0.9460 G*J*Ri*A 16 135 0.89 0.5868 G*J*R1*I 8 135 1.07 0.3889 G*J*A*I 8 135 0.77 0.6318 G*R1*A*I 8 135 0.39 0.9229 J*R1*A*I 8 135 0.43 0.9036 G*J*R1*A*I 16 135 0.38 0.9844 A test of effect slice is performed to obtain detailed information on the highest significant order interactions for the algorithm and the initial solution effects i.e., G*A and J*J by considering Tukey-Kramer adjustment. The results are shown in Table 8.3. Based on the results, for large size problems, TS2 has a better performance compared to the other heuristic algorithms. ID Table 8.3 Test of effect slices for two machine problem by considering minimization of makespan for algorithm comparison Differences of Least Squares Means Effect G J A I _G J _A I Pr > jt Adjustment Adj P G*A 1 1 1 2 1.0000 Tukey-Kramer 1.0000 G*A 1 1 1 3 1.0000 Tukey-Kramer 1.0000 G*A 1 2 1 3 1.0000 Tukey-Kramer 1.0000 G*A 2 1 2 2 0.2221 Tukey-Kramer 0.9492 G*A 2 1 2 3 0.7550 Tukey-Kramer 1.0000 G*A 2 2 2 3 0.3623 Tukey-Kramer 0.9918 G*A 3 1 3 2 <.0001 Tukey-Kramer 0.0009 G*A 3 1 3 3 0.2705 Tukey-Kramer 0.9724 G*A 3 2 3 3 0.0016 Tukey-Kramer 0.0409 J*I 1 1 1 2 0.2908 Tukey-Kramer 0.8960 J*I 2 1 2 2 0.0061 Tukey-Kramer 0.0654 J*I 3 1 3 2 0.5833 Tukey-Kramer 0.9939 8.1.1.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms for Two-Machine Problems by Considering Minimization of Makespan Criterion The time spent to terminate the search algorithm and the time spent to find the best solution for each heuristic algorithm are shown in Table 8.4 for all test problems. Table 8.4 The time spent for the test problems of two machine problems (in seconds) ---- by considering minimization of makespan Initial 1 Initial 2 , . TS1 TS2 TS3 TS1 TS2 TS3 2 2 2 2 2 2 ci _!_ ±- _±._ __ 0 _2_ .iL. .iL JL i.. J.. ......Q.. ..i 2 3 4 10 0 0 0 0 0 0 0 0 0 0 0 0 0 C2 p-- -p-- -- Q. .JL _2_ 4 2 3 5 0 0 0 0 0 0 0 0 0 0 0 0 ..! -u-- ..JL 0 0 0 0 0 0 0 0 0 0 0 C3 6 3 4 8 0 0 0 0 0 0 0 0 0 0 0 0 C4 ...L ' 0 L. ...2... L. JL. ....L JL JL JL. 8 4 6 22 0 0 0 0 0 0 0 0 0 0 0 0 C5 .2.. .L 7 ..!L .Q. 0 ..SL. 1L. 2.. JL .JL JL. ..Q.. .JL JL. 10 2 7 13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 o .1 0 0 P!P:.. :P:. P.: : PIPI ! I;II 111 Table 8.4 (Continued) The time spent for the test problems of two machine problems (in seconds) by considering minimization of makespan Initial 1 Initial 2 TS1 TS2 TS1 TS2 TSL TS2 2. 2. 2. 2. 2. . . - - 2 2 2 2 2 2 2 e 47 15 7 76 2 1 4 1 3 1 1 0 4 0 3 0 C24 48 14 7 69 1 1 3 0 2 0 1 0 3 1 3 3 _LL 10 75 1 0 1 0 1 0 0 0 1 1 1 0 C25 5015 10 99 2 1 3 1 3 0 2 1 4 1 3 0 10 83 0 0 2 0 2 1 0 0 2 0 2 2 C26 5216 10 106 1 1 4 2 4 1 4 1 4 4 4 2 C27 5416 10 1 ..L L _L i. _L ..L .JL ±. 2... L Q.. 10 108 1 0 4 4 4 0 2 1 4 1 4 1 The experimental design is performed by applying SAS 9.1 to find the most efficient heuristic algorithm. The normal probability plot of the residuals is shown in Figure 8.2. The middle part of the plot can be interpreted as a line. Thus, the parametric statistics- based analysis of variance (ANOVA) can be used to further analyze the results. 10 + -I- + 05 ill I I -I- 41- lI1tIlIII4I1 R 00 muiui.. I IlIlIWlIlUhl -H- -0 5 -I- -I- -I- -1 0 .1 1 5 10 25 50 15 90 95 99 99.9 Iorna I Percent I Ins Figure 8.2 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for two machine problem by considering minimization of makespan 112 The ANOVA table for time spent comparison is presented in Table A. 1 in appendices. The results of the experiment show that there is a significant difference among the time spent for heuristic algorithms (the result ofF test is less than 0.0001). The Tukey test is applied to find the difference. The result of Tukey's test shows that TS 1 is more efficient compared to the other two heuristic algorithms. The results of the experimental design show that there is a significant difference between the time spent for algorithms by applying different initial solution techniques for two machine problems (the result of F test is equal to 0.032 1). A comparison between the time spent by the heuristic algorithms in applying different initial solutions shows that the first initial solution generator results in a better performance of the heuristic algorithms than the second generator. Among the interactions, the interaction between the group factor and all subplot factors i.e., G*A and G*I are significant. It means that the group factor is an important factor in such problems. The algorithm factor is also important because most of the interactions including the algorithm factor (G*A, G*I, Ri *A, A*I, G*R1 *A, G*A*I, G*J*R1 *A) are significant. The significant factors and interactions are shown in bold in Table A.1. A test of effect slice is performed to obtain detailed information on the highest significant order interactions for the algorithm and the initial solution effects i.e., G*J*R1*A and G*J*I. by considering Tukey-Kramer adjustment. The results are shown in Table A.2. This table shows the performance of the heuristic algorithms as well as the initial solutions for each cell of the experiment. Based on the results, the summary of the significant differences are summarized as follows: . For any large size group problems, there is a significant difference between the time spent by the algorithms. In all of them, TS 1 required less time compared to the others. This result was expected because in TS2 and TS3, the algorithm has a chance to search more to find a better objective function value. By increasing the size of the problems, this difference can be observed more prominently. 113 . For any large size group and large size job problem, there is a significant difference between the time spent by the algorithms in applying different initial solutions. In these problems, the first initial solution generator has a better performance than the second (Schaller et al., 2000) one. 8.1.1.3 The Comparison between the Best Tabu Search and the Results of Schaller et al. (2000) Algorithm In this section, a paired t-test is performed between the results of the best tabu search algorithm and the results of Schaller et al. (2000) algorithm for test problems. The result of Schaller et al. (2000) algorithm for test problems is presented in Table B. 1 of appendix. As discussed in section 8.1.1.1, TS2 has the best performance compared to the other algorithms. Because there is no difference between the initial solution generators, the results of TS2 by considering the first initial solution generator is applied to be compared with the result of Schaller et al. (2000) algorithm. The result of the paired-t test shows a significant difference between the results of two algorithms. The average error percentage of Schaller et al. (2000) algorithm for the test problems is equal to 9% and the maximum of percentage error for a test problem is equal to 28%. These percentage errors are too high compared to the ones obtained by the proposed heuristic algorithms (0.68%). 8.1.2 The Results of Three-Machine Makespan Criterion The 162 test problems of three machine problem are solved by heuristic algorithms to find the algorithm with the best performance and the best initial solution. The lower bounding technique is also applied for each test problem to evaluate the quality of solutions. Then, the result of the best heuristic algorithm is compared with the result of Schaller et al. (2000) algorithm, as the best current available algorithm, for all test problems. The results are presented in sections below. 114 8.1.2.1 Comparison among Heuristic Algorithms and the Lower Bound for Three Machine Problems by Considering Minimization of Makespan The results of applying the heuristic algorithms and the results of the lower bounding technique are shown in Table 8.5. This table has columns described below: Lower bound: The results of the lower bounding technique as well as the time spent for solving the lower bound problem for each test problem are presented in the table below. The results of the heuristic algorithm: The test problems are solved by three different heuristic algorithms with two different initial solutions. The minimum value of the objective function of heuristic algorithms for each test problem is considered to estimate the quality of solutions. This value is compared to the value of the lower bound of the test problem. The error percentage of each test problem is shown in the "Best Error" column of Table 8.5. Based on the results, the average error percentage of the best solutions is equal to 1.00% and the maximum error is 4.9%, which is obtained in test problem 34. This error percentage is calculated based on the formula presented in section 8.1.1.1. Table 8.5 The results of the experiments with test problems for three machine problems by considering minimization of makespan criterion Initial 1 Initial 2 . = ,-.'. TS1 TS2 TS3 TS1 TS2 TS3 . 1 221 221 0 221 221 221 221 221 221 221 0.000 2 481 494 0.03 491 491 481 494 491 493 481 0.000 303 312 0.6 309 303 303 309 309 303 303 0.000 C2 4 226 236 0.03 230 230 230 230 230 230 230 0.018 5 239 242 0.03 239 239 239 239 239 239 239 0.000 6 351 379 0.03 351 351 351 351 351 351 351 0.000 115 Table 8.5 (Continued) The results of the experiments with test problems for three machine problems by considering minimization of makespan criterion Initial 1 Initial 2 .' Nt - (D 1. CID Nt TS1 TS2 TS3 TS1 TS2 TS3 Nt 0 7 436 474 0.17 444 444 444 438 438 438 438 0.005 C4 8 268 268 0.03 268 268 268 268 268 268 268 0.000 9 242 244 0.03 244 244 244 244 244 244 244 0.008 C5 10 291 332 0.08 300 300 300 300 300 300 300 0.031 11 335 367 0.02 342 342 342 340 340 340 340 0.015 C6 12 305 352 0.23 308 308 308 308 308 308 308 0.010 13 471 514 0.03 490 490 490 490 490 490 490 0.040 C7 14 203 210 0.02 215 215 215 210 210 210 210 0.034 15 389 408 0.09 392 392 392 392 392 392 392 0.008 C8 16 478 492 0.03 492 492 492 492 492 492 492 0.029 17 194 200 0.02 200 200 200 200 200 200 200 0.03 1 C9 18 408 413 0.05 410 410 410 410 410 410 410 0.005 19 334 344 0.03 344 344 344 344 344 344 344 0.030 cio 20 429 435 0.05 433 433 433 430 430 430 430 0.002 21 207 214 0.01 210 209 209 209 209 209 209 0.010 22 498 505 0.19 524 498 498 505 505 505 498 0.000 23 430 448 0.14 448 430 448 448 441 439 430 0.000 C12 24 335 342 0.02 336 336 336 340 340 340 336 0.003 25 409 415 0.02 412 412 412 412 412 412 412 0.007 C13 26 456 459 0.08 459 459 459 456 456 456 456 0.000 27 506 546 0.03 507 507 507 544 544 523 507 0.002 C14 28 345 352 0.03 352 352 352 346 346 346 346 0.003 29 428 453 0.16 432 432 428 428 428 428 428 0.000 C15 30 214 214 0.06 217 217 214 214 214 214 214 0.000 31 398 405 0.03 405 405 405 405 405 405 405 0.018 C16 32 532 545 0.19 539 533 536 533 533 533 533 0.002 397 458 0.02 402 402 402 402 402 402 402 0.013 C17 34 350 373 0.03 373 373 373 367 367 367 367 0.049 35 266 270 0 270 270 270 270 270 270 270 0.015 C18 36 571 638 0.03 583 583 583 589 582 589 582 0.019 547 550 0 548 548 548 547 547 547 547 0.000 C19 38 656 666 0.03 665 665 665 666 663 666 663 0.011 C20 511 0.03 503 503 503 498 498 498 498 0.008 40 413 437 0.03 413 413 413 413 413 413 413 0.000 41 652 671 0.25 660 660 662 655 655 655 655 0.005 C21 42 241 261 0 249 249 249 246 246 246 246 0.02 1 C22 357 0 364 364 364 357 357 357 357 0.035 44 554 554 0.02 554 554 554 554 554 554 554 0.000 116 Table 8.5 (Continued) The results of the experiments with test problems for three machine problems by considering minimization of makespan criterion Initial 1 Initial 2 a cM cM -t r- TS1 TS2 TS3 TS1 TS2 TS3 C 45 734 769 0.41 742 742 742 760 760 746 742 0.011 46 518 531 0.14 525 525 525 523 523 523 523 0.010 47 417 422 0.05 424 422 422 422 422 422 422 0.012 C24 48 617 636 0.03 648 648 646 618 618 618 618 0.002 737 763 0.11 750 739 750 740 740 740 739 0.003 C25 50 466 475 0.02 478 478 478 475 475 475 475 0.019 51 689 728 0.19 703 703 703 725 700 724 700 0.016 C26 52 412 429 0.02 432 432 431 428 428 428 428 0.039 s 634 662 0.03 696 648 692 656 642 649 642 0.013 C27 54 615 626 0.02 641 641 641 624 624 624 624 0.0 15 55 576 629 0.36 582 582 582 581 581 581 581 0.009 C28 56 573 719 0.63 575 575 575 576 576 576 575 0.003 57 403 416 0.59 403 403 403 403 403 403 403 0.000 C29 58 424 475 1.03 426 426 426 426 426 426 426 0.005 59 408 466 0.83 408 408 408 408 408 408 408 0.000 C30 60 533 570 2.53 533 533 533 533 533 533 533 0.000 61 647 724 1.28 658 658 647 668 668 666 647 0.000 C31 62 717 778 3.05 718 718 717 717 717 717 717 0.000 63 529 585 3.31 536 536 536 544 544 544 536 0.013 C32 64 495 555 10.83 508 508 508 518 508 518 508 0.026 65 694 735 5.76 699 694 694 694 694 694 694 0.000 C33 66 384 433 0.55 399 399 394 398 398 398 394 0.026 67 625 637 0.67 635 635 635 635 635 635 635 0.016 C34 68 897 1017 16.58 900 899 900 897 897 897 897 0.000 69 596 666 0.52 599 599 599 599 599 599 599 0.005 C35 70 785 890 2.92 802 802 802 791 791 791 791 0.008 71 716 776 0.09 731 731 731 741 733 741 731 0.021 C36 72 562 620 0.19 571 571 571 571 571 571 571 0.016 784 787 0.69 787 787 787 787 787 787 787 0.004 C37 74 596 609 0.05 608 608 608 604 604 604 604 0.013 75 825 876 3.73 831 831 831 829 829 829 829 0.005 C38 76 636 704 0.16 652 652 652 653 652 652 652 0.025 667 739 1.99 682 672 682 669 669 669 669 0.003 C39 78 800 859 2.66 804 804 804 802 802 802 802 0.003 79 1031 1087 1.48 1048 1046 1048 1055 1055 1052 1046 0.015 C40 80 643 684 0.03 664 664 664 659 659 659 659 0.025 C41 81 609 676 1.61 617 617 617 617 617 617 617 0.013 82 839 897 16.47 847 846 854 847 847 846 846 0.008 117 Table 8.5 (Continued) The results of the experiments with test problems for three machine problems by considering minimization of makespan criterion ci) I Initial 1 Initial 2 - TS1 TS2 TS3 TS1 TS2 TS3 83 509 537 0.78 514 514 514 514 514 514 514 0.010 C42 84 779 946 4 798 795 798 790 790 790 790 0.014 85 979 1087 2.72 1003 1003 1005 1008 1005 1008 1003 0.025 C43 86 873 925 173 891 888 891 891 887 891 887 0.016 87 697 816 0.08 716 701 723 701 701 701 701 0.006 C44 88 902 990 4.36 906 906 906 906 906 906 906 0.004 89 754 830 0.08 775 769 775 775 758 775 758 0.005 C45 90 951 1074 3.67 953 953 953 964 953 964 953 0.002 91 961 1057 0.94 965 965 965 961 961 961 961 0.000 C46 92 777 836 0.05 777 777 777 777 777 777 777 0.000 764 864 2.08 769 767 769 767 767 767 767 0.004 C47 94 820 933 3.69 832 830 832 834 834 834 830 0.012 95 527 552 1.36 528 528 528 527 527 527 527 0.000 C48 96 878 969 6.3 898 884 884 894 884 894 884 0.007 897 1018 2.64 906 906 906 906 906 906 906 0.010 C49 98 774 856 0.06 774 774 774 774 774 774 774 0.000 99 964 1047 24.63 971 971 972 970 969 970 969 0.005 cso 100 640 676 3.34 641 640 640 641 641 641 640 0.000 101 695 771 1.27 695 695 695 695 695 695 695 0.000 csi 102 754 901 10.42 758 758 758 758 758 758 758 0.005 103 1260 1351 4.75 1276 1271 1276 1262 1262 1262 1262 0.002 C52 104 839 889 1.81 843 843 843 845 845 845 843 0.005 105 747 776 0.03 762 761 762 757 757 757 757 0.013 C53 106 953 1044 1.75 971 971 971 955 955 955 955 0.002 107 1061 1183 1.7 1072 1066 1066 1066 1066 1066 1066 0.005 C54 108 1086 1184 0.63 1092 1092 1092 1092 1092 1092 1092 0.006 109 1021 1177 16.55 1046 1033 1044 1040 1040 1038 1033 0.012 C55 110 1166 1288 12.77 1188 1188 1188 1189 1187 1178 1178 0.010 111 847 1004 2356 885 864 885 863 863 863 863 0.019 C56 112 791 925 73.36 820 820 820 816 810 816 810 0.024 113 663 741 41.66 672 668 672 691 691 691 668 0.008 C57 114 796 961 451.9 828 822 828 827 825 827 822 0.033 115 1237 1394 21.75 1254 1254 1254 1260 1260 1260 1254 0.014 C58 116 1205 1316 66.17 1215 1215 1215 1238 1223 1228 1215 0.008 117 693 800 25.01 719 703 719 718 705 703 703 0.014 C59 118 865 1003 254.9 887 887 897 874 874 888 874 0.010 119 641 743 13 648 648 648 644 644 644 644 0.005 C60 120 920 1093 247.3 936 934 936 965 953 942 934 0.015 121 968 1090 11.78 975 975 975 970 970 970 970 0.002 C61 122 1285 1478 1285 1308 1308 1308 1321 1306 1314 1306 0.016 118 Table 8.5 (Continued) The results of the experiments with test problems for three machine problems by considering minimization of makespan criterion 1-c,J . Initial 1 Initial 2 TSI TS3 TSI TS3 [ TS2 { TS2 123 1083 1210 2.34 1110 1108 1118 1107 1107 1101 1101 0.017 C62 124 941 1047 16.11 947 952 952 958 955 958 947 0.006 125 1162 1378 10.58 1189 1189 1189 1190 1190 1190 1189 0.023 C63 126 1028 1127 8.89 1058 1057 1058 1046 1044 1046 1044 0.016 127 1088 1174 12.64 1103 1103 1103 1096 1095 1096 1095 0.006 C64 128 1537 1673 11.09 1558 1543 1558 1545 1545 1545 1543 0.004 129 966 1060 5,77 979 979 979 992 992 971 971 0.005 C65 130 1307 1456 339.8 1325 1325 1325 1338 1317 1338 1317 0.008 C66 131 996 1114 24.28 1009 997 1009 1007 1002 1007 997 0.001 132 1369 1584 500.7 1401 1387 1401 1404 1396 1404 1387 0.013 133 1545 1706 27.8 1567 1567 1567 1569 1569 1569 1567 0.014 C67 134 1077 1167 3.48 1082 1082 1082 1087 1091 1089 1082 0.005 135 888 960 119.7 895 889 894 894 894 894 889 0.001 C68 136 973 1131 483.9 1003 993 1003 993 993 1001 993 0.021 137 1028 1154 108.9 1033 1033 1033 1043 1032 1043 1032 0.004 C69 138 1106 1213 489.1 1133 1131 1133 1129 1129 1129 1129 0.021 139 1568 1721 77.75 1606 1598 1607 1598 1598 1598 1598 0.019 C70 140 1255 1370 29.77 1286 1272 1286 1279 1269 1279 1269 0.011 141 1369 1568 17.3 1400 1387 1400 1395 1395 1395 1387 0.013 C71 142 1393 1545 18.45 1453 1453 1453 1443 1435 1443 1435 0.030 143 1242 1376 16.41 1273 1255 1244 1258 1258 1258 1244 0.002 C72 144 1423 1722 36.38 1457 1457 1455 1458 1458 1458 1455 0.022 145 1252 1402 0.33 1253 1252 1253 1255 1255 1255 1252 0.000 C73 146 1415 1620 12.53 1420 1420 1420 1448 1441 1448 1420 0.004 147 1170 1324 100.5 1170 1170 1170 1197 1197 1197 1170 0.000 C74 148 1284 1460 362.1 1300 1298 1309 1295 1295 1295 1295 0.009 149 1296 1448 353.4 1325 1337 1305 1347 1330 1336 1305 0.007 C75 150 1503 1677 23.03 1539 1520 1539 1514 1514 1514 1514 0.007 151 1763 1927 54.63 1797 1797 1797 1802 1802 1777 1777 0.008 C76 152 1526 1646 34.73 1543 1543 1543 1555 1547 1555 1543 0.011 153 1411 1654 1380 1447 1447 1452 1448 1413 1445 1413 0.001 C77 154 1080 1261 107 1086 1086 1086 1093 1093 1101 1086 0.006 155 1293 1452 266 1322 1307 1305 1305 1305 1305 1305 0.009 C78 156 1206 1310 74 1223 1206 1208 1217 1217 1220 1206 0.000 157 1676 1788 35 1686 1692 1686 1695 1695 1695 1686 0.006 C79 158 1955 2149 165 1984 1973 1988 1989 1976 1989 1973 0.009 159 1697 1815 33.14 1715 1726 1731 1742 1712 1742 1712 0.009 C80 160 1558 1688 0.78 1596 1573 1573 1587 1587 1587 1573 0.010 161 1188 1274 5 1207 1207 1207 1201 1201 1201 1201 0.011 C81 162 1530 1695 15 1550 1550 1550 1565 1548 1558 1548 0.012 119 The normal probability plot of the residuals confirms that the residuals have a normal distribution (Figure 8.3). Thus, there is evidence that the parametric statistics-based analysis of variance (ANOVA) can be used to further analyze the results. 30 20 10 H -20 -30 01 .1 1 5 10 25 50 15 90 95 99 19.9 99.99 Nar,1 Prcnt le Figure 8.3 The normal probability plot of the experimental design of finding the best heuristic algorithm for three machine problem by considering minimization of makespan The experimental design is performed by applying SAS 9.1 to find the best heuristic algorithm as well as the best initial solution. The ANOVA table is shown in Table A.3 of appendix. The results of the experiment show that there is a significant difference among the objective function values of heuristic algorithms (the result of F test is less than 0.0001). To find the best heuristic algorithm, a Tukey test is performed. The result of Tukey's test shows that TS2 has a better performance compared to the other two heuristic algorithms. The results of the experimental design show that there is no difference between applying different initial solution generators for three machine problems (the result of F test is equal to 0.2732). 120 Among the interactions, the interaction between the group factor and all sub-plot factors (G*A and G*I) are significant. This supports the importance of group as a factor. The other significant interactions are R1*I, G*J*I, G*R2*I, J*j*, G*J*R2*I, G*R1*R2*I, and G*J*R1*R2*I. A test of effect slice is performed to obtain detailed information on the highest significant order interactions for the algorithm and the initial solution effects i.e., G *A and G*J*R1*R2*I by considering Tukey-Kramer adjustment. The results are shown in Table A.4 in appendix. Based on the results, the summary of significant differences are as follows: For large size group problems, there is a significant difference among the performance of heuristic algorithms. Based on Tukey' s test results, for these problems, TS2 has a better performance than the other heuristic algorithms. For small size group, large size job, and the third level of set-up ratio for both RI and R2 factors, there is a significant difference between the performance of heuristic algorithms. In these problems, the second initial solution generator has a better performance. 8.1.2.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms for Three Machine Problems by Considering Minimization of Makespan The time spent to terminate the search algorithm and the time spent to find the best solution for each heuristic algorithm are shown in Table 8.6 for all test problems. 121 Table 8.6 The time spent for the test problems of three machine problems (in seconds) by considering minimization of makespan criterion Initial 1 Initial 2 S S TS3 TS1 TS2 TS3 2 2 2 2 2 2 2 2 2 2 2 2 eD D D eD eD ci _L_ JL JL JL L JL iL Q JL JL _2_ _Q_ __P_- 2 0 0 0 0 0 0 0 0 0 0 0 0 C2 4 p--- 0 0 p-- p-- p-- C3 p 6 0 0 0 p-- 0 0 0 0 0 p--- 0 0 0 0 0 0 0 0 0 0 p-- 0 0 0 0 0 C4 .L. Q JL JL JL Q LJ JL JL L JL 8 0 0 0 0 0 0 0 0 1 1 0 0 cs -- 10 0 _Q_ p-- p-- 0 0 0 2 0 ±L 0 0 L _Q- 0 iL JL 0 0 cL 0 0 L C6 JL JL JL JL JL JL JL L iL J 12 0 0 0 0 0 0 0 0 0 0 0 0 C7 -u-- 14 0 P 0 2 P_ .L _L -L ___ JL __ _ 0 0 0 0 0 0 0 0 0 0 C8 p-- _2_ .JL P L JL JL JL P L JL L L 16 0 0 0 0 0 0 0 0 0 0 0 0 C9 p- C 10 18 20 0 p-- 0 0 0 0 0 p 0 0 0 0 0 0 0 0 _ 0 Q 0 0 0 _ 0 0 0 ___ __- 0 o 0 cii IL _Q. Q Q_ L L Q Q Q. Q Q iL L. 22 0 0 0 -- 0 0 0 0 0 0 0 0 0 C12 -p--- p-- p--- p--- 24 0 0 0 0 0 0 0 0 0 0 0 0 C13 26 p0 0 0 p-- 0 0 0 p-- 0 0 0 0 p-- 0 0 C 14 IL 9 _L .__ __ _ L P 28 0 0 0 0 0 0 0 0 0 0 0 0 C 15 30 p-- 0 0 0 p--- 0 -- p--- -- -- 0 0 0 0 0 0 0 0 C16 -- 2 JL JL JL JL 2 L JL JL L 32 0 0 0 0 0 0 0 0 0 0 0 0 C17 34 -- 0 0 0 p-- 0 2_ ± JL JL L 0 0 0 0 0 0 0 0 Cl 8 p-- p--- p-- 36 0 0 0 0 0 0 0 0 0 0 0 0 122 Table 8.6 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of makespan criterion Initial 1 Initial 2 TSI TS2 TS3 TSI TS2 TS3 2 D 3 eD 2 2 2 2 2 2 2 2 2 2 fD D eD Cl 9 9 Q P .... .Q L P. P P P C20 38 40 0 0 0 -- -p-- 0 1 0 0 1 0 0 0 0 0 -- -p-- 0 0 0 0 0 0 0 -- -p-- 0 0 0 0 C21 42 L 0 Q. 0 Q 0 0 0 L __ 0 0 L .JL 0 ±0 JL JL JL 0 0 0 C22 -p-- -p-- -p-- -p-- _2__ 44 0 0 0 0 0 0 0 0 0 0 0 0 C23 L L L J P 9 46 0 0 0 0 0 0 0 0 0 0 0 0 C24 -p- -p--- -p-- -p-- 48 0 0 0 0 0 0 0 0 0 0 0 0 C25 50 -p-- 0 0 0 --- -p-- 0 0 0 0 -- -p-- 0 0 0 0 JL 0 C26 52 -- -- 0 0 ± JL L JL JL L JL L L JL 0 0 0 0 0 0 0 0 0 0 C27 -p--- 54 0 0 -p--- 0 0 0 _2_ 0 00-- -p-- Q_ 0 0 1 -p- 1 C28 56 -p-- -p- -p-- 1 1 1 1 1 0 0 0 1 0 1 0 C29 -v- L L P P L 58 0 0 0 0 0 0 0 0 0 0 1 1 C30 60 1 -p- 0 2 0 3 1 1 0 L 3 _i_ 0 __ __ 3 0 C3 1 ±!- 62 -- 1 - 1 -i-_ 6 _2__ 2 L 6 ± L 3 1 9 0 L 5 Q 0 6 L J 0 C32 64 -p-- 0 0 p-- 0 0 _i__ L .JL 1 1 0 0 1 0 1 0 C33 66 -p- 0 0 0 -- 0 -f- 0 0 _±_ 0 --- -p-- ___ i 0 0 0 0 0 C34 -k- Q L P L P .L 68 3 2 12 7 14 4 3 0 15 0 15 0 C35 -p--- 70 1 1 -p- 4 0 5 -p---- 0 1 0 6 1 5 1 C36 -- 72 0 0 0 JL L 0 0 0 0 0 0 0 0 0 C37 JL ± L -- -- --- --- --- 74 0 0 0 0 1 0 0 0 0 0 0 0 C38-f-- 76 1 1 -p- 5 1 !!_ 7 __- __ 2 1 0 12 9 11 0 123 Table 8.6 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of makespan criterion Initial 1 Initial 2 TS1 TS2 TS3 TS1 TS2 TS3 - _ E - - -t 2 2 2 5 5 5 5 5 C39 2i _IP._ __ .__ _i_ __ _L _±L i JL ..L C40 78 80 2 --- 0 2 -- --- 0 19 0 5 0 -p- 20 0 6 0 0 3 1 0 -- 25 0 5 L ± 0 23 0 4 0 C41 82 29 27 137 L L 69 196 li 33 L L 29 12 ± 135 1 18 125 27 C42 84 -p- L L L JL JL L L L 21 17 64 37 79 22 20 18 61 23 101 37 L .± C43 86 0 0 J 4 2 4 00 0 5 5 4 1 C44 -p-- L L L _!___ 88 12 0 88 0 88 0 12 2 84 6 90 17 C45 !2 P 1 L L L L L i . 90 19 17 112 35 48 137 90 18 4 116 88 7 C46 92 ± P ± 1 L JL L 1± L 1 1 5 2 5 4 43 2 2 3 3 C47 94 P L P -- 5 2 25 32 23 4 4 4 5 15 5 13 C48 96 49 -p---- 27 185 --- 37 _i_ 48L _ _2__ L JL 125 150 51 148 71 149 53 C49 2 L JQ ± i il. L il L 98 0 0 1 0 1 0 0 0 1 1 1 1 cso L i ±L J .± ±L iQ. ± I J ±L 100 1 0 6 3 7 6 1 1 5 2 8 3 ci 102 14 9 61 13 109 ± L .± ± 26 13 6 68 11 72 11 C52 --'- fL 104 1 0 3 1 6 1 1 0 6 0 6 0 C53 106 _P_ 2 Q 1 L 15 3 L. L 16 Q 3 ± 2 2 JL L 17 6 17 6 C54 LQ7 iL Ji iL _ L i L i L C55---- -- -- -- -- --- ---- -- -- 108 110 6 15 15 1 34 89 2 27 25 110 2 33 5 16 4 9 17 92 5 71 25 86 7 21 C56 IL 1L --- -- -- --- -- 112 28 7 191 18 205 20 27 14 213 148 217 41 C57-- --- --- ---- --- --- 114 35 17 251 193 251 47 38 14 263 149 247 36 124 Table 8.6 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of makespan criterion Initial 1 Initial 2 TS1 TS2 TS3 TS1 TS2 TS3 - _ - E 2 2 2 2 2 2 2 2 2 (9 (9 (9 (9 (9 (9 C58 -- _2_ -i-- -!l- 116 32 25 205 -L _2_ -- -!- 61 219 65 21 - 14 210 170 217 _2L 106 C59 118 -- -p- 18 15 120 --- 40 __ _IL _J_ 128 ___!__ _ 83 18 5 126 13 _ 127 J! 44 C60 _ _ _ J _i. _!__ _ _i__ 120 61 38 357 158 486 110 61 8 421 328 465 142 C61 -p- p-- _i__ _L .i_ -_ _ --_ 122 4! 29 266 70 277 78 41 18 252 226 294 278 C62----- -p- --- p--- --- -- ------ --- --- --- -- -- 124 10 8 57 8 65 9 9 3 64 53 64 8 C63-- -- -- --- 126 21 9 120 53 138 24 21 3 108 63 103 5 C64--- J 128 196 121 1260 755 1337 274 189 43 1129 115 1051 88 C65--- 130 22! 176 1471 -p-- 384 -- -- --- 1692 488 228 78 977 548 1137 2 148 C66 132 448 118 -- 2378 639 !QL 2266 233 iQ 447 -L 198 2 2117 J2 1650 2784 396 C67 ---- ---- ± i* 1L 4!L J 2 1L 134 30 15 145 23 223 42 32 23 128 31 147 48 C68 --- J9_ _ __ __ 12!_ __ L 136 58 25 400 287 432 69 58 8 420 20 449 168 C69 --- J.L i J2 138 131 106 910 417 1030 305 132 5 1036 15 97! 14 C70 i iL JL J i i2 Z2 11L C71--- 140 50 ---- 33 284 -- -- -- --- -- --- --- 146 223 ----- 38 51 ---- 14 185 95 274 27 -- ---- --- --- --- ---- -- 142 148 66 1067 160 1175 181 150 79 1215 604 1166 228 C72--- ---'-- 144 137 65 922 177 893 130 135 29 1051 82 806 59 C73 146 142 -i--- 129 ----- 995 --- --- --- 316 1139 396 2i_ 139 115 _ 471 J290 ±L 456 136 C74 J42 i2P J± L J2 ± J9 JQ 148 284 179 1768 695 1502 623 267 128 784 137 768 136 C75 150 456 --- 347 1719 -- 1053 1751 --- 525 467 -- 129 i2iQ 1913 _ZL 178 JL 2579 255 C76 152 152 -- 127 868 244 - 957 260 J1 147 133 775 565 ii 457 150 125 Table 8.6 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of makespan criterion Initial 1 Initial 2 TS1 TS2 TS3 TS1 TS2 TS3 H C H H C H C H C H C * E E 2 2 2 2 2 2 C77 J- -2 -- _I JI1_ 112 2_ - - J2! iL J4 _2 154 97 97 571 205 617 201 95 53 718 145 641 121 155 224 190 1248 948 1534 331 224 35 1636 187 1663 178 C78 156 131 57 835 459 864 119 132 101 673 123 596 29 157 203 25 1514 892 1532 70 199 11 607 12 588 13 C79 158 401 373 2833 2435 3037 1074 406 327 3071 2256 3105 910 159 325 305 1965 826 1871 669 302 25 2211 1145 1715 48 C80 160 224 148 1528 684 1695 763 228 27 719 34 1757 78 161 64 31 406 61 502 91 68 18 496 52 501 52 C8 1 162 142 123 785 277 821 252 143 93 1097 898 1114 1095 The normal probability plot of the residuals is shown in Figure 8.4. The residual plot is close to a line. It shows that ANOVA can be used to analyze the results. 750 -I- - 500 250 4 H 0 250 500 + tit4d4 + 750 01 .1 1 5 10 25 50 75 90 95 9 9 99 9 N0..-_oI Figure 8.4 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for three machine problem by considering minimization of makespan The ANOVA table for comparison of time spent is presented in Table A.5 in appendix. The results of the experiment show that there is a significant difference among the time spent for heuristic algorithms (the result ofF test is less than 0.000 1). The Tukey test is 126 applied to find the difference. The result of Tukey' s test shows that the time spent by TS 1 is less than the other two heuristic algorithms. This result was expected as discussed in previous sections. The results of the experimental design show that there is not a significant difference between the time spent by algorithms in applying different initial solution techniques for three machine problems (the result ofF test is equal to 0.1929). Among the interactions, the interactions between the algorithm factor and group and job factors (G*A, J*A, G*J*A, R1*R2*A, and G*R1*R2*A) are significant. This result was expected, because by increasing the size of the problems (increasing the number of groups or number of jobs in a group), the difference between the time spent by TS1 and the other two heuristic algorithms (TS2 and TS3) will be increased. The effect slice test is performed for more detailed comparisons on the highest significant order interaction for the algorithm effects i.e., and G*J*R1*R2*A by considering Tukey-Kramer adjustment. Based on the results for any large size group problem, there is a significant difference between the time spent by the algorithms. In all of these problems, TS1 required less time compared to TS2 and TS3. This result was expected because by increasing the size of the problems, the difference between time spent by TS1 compared to TS2 and TS3 will be increased. 8.1.2.3 The Comparison between the Best Tabu Search and the Results of Schaller et al. (2000) Algorithm for six machine Problems The results of Schaller et al. (2000) algorithm are compared to the result of the heuristic algorithms in this section by applying a paired t-test. The result of Schaller et al. (2000) algorithm for test problems is presented in Table B.2 of appendix. As discussed in section 8.1.2.1, TS2 has the best performance compared to the other algorithms. Because there is no difference between the initial solution generators, the results of TS2 by considering the first initial solution generator is applied to be compared with the results of Schaller et al. (2000) algorithm. The result of the paired t-test shows a 127 significant difference between the results of two algorithms. In other words, TS2 has a better performance compared to Schaller et al. (2000) algorithm for three machine problems. The average error percentage of Schaller et al. (2000) algorithm for the test problems is equal to 9% and the maximum of percentage error for a test problem is equal to 25%. These results are too high compared to the one obtained by the proposed heuristic algorithms (1.00%). 8.1.3 The Results of Six-Machine Makespan Criterion The heuristic algorithms are applied to solve all 54 test problems for six machine problems to find the algorithm with the best performance and the best initial solution generator. The lower bounding technique is also applied for each test problem to evaluate the quality of solution. The Schaller et al. (2000) algorithm for each test problem is also applied to compare the results with the heuristic algorithm. The results are presented in the sections below. 8.1.3.1 Comparison among Heuristic Algorithms and the Lower Bound The results of performing the heuristic algorithms and the result of the lower bounding technique are shown in the tables below. The performance of lower bounding technique for small size problems was not good enough. Thus, four of the small size problems in which the lower bounding technique could not find a good quality lower bound, and can be solved optimally in negligible time, are solved optimally by the original mathematical model. Table 8.7 shows the results of the heuristic algorithms as well as the result of the best heuristic algorithm for each test problem. In Table 8.8, the optimal solution of those small size problems which are solved optimally, the result of the lower bounding technique, the time spent to solve the problems with the lower bounding mathematical model, the result of Schaller et al. (2000) algorithm, and the minimum objective function value obtained by the heuristic algorithms are shown. The 128 minimum value of the objective function of the heuristic algorithms for each test problem is considered to estimate the quality of solutions. This value is compared to the value of the lower bound or the objective function value of the optimal solution (for those test problems which are solved optimally) of the test problems. The error percentage of each test problem is shown in the "Best Error" colunm of Table 8.8. Based on the results, the average error percentage of the heuristic algorithm is equal to 1.60% and the maximum error is 7.8%, which is obtained in test problem 15. This error percentage is calculated based on the formula presented in section 8.1.1.1. Table 8.7 The results of the experiments with test problems for six machine problems by considering minimization of makespan criterion Initial 1 Initial 2 TS1 TS2 TS3 TS1 TS2 TS3 1 1688 1683 1688 1688 1676 1688 1676 2 1086 1086 1086 1086 1086 1086 1086 279 279 279 279 279 279 279 C2 4 169 169 169 169 169 169 169 1420 1420 1420 1420 1420 1420 1420 C3 797 797 797 797 797 797 797 1878 1878 1878 1863 1863 1863 1863 C4 8 1477 1477 1477 1477 1477 1477 1477 224 224 224 217 217 217 217 C5 10 410 410 410 424 424 424 410 i 1466 1466 1466 1472 1472 1468 1466 C6 2 1818 1795 1809 1795 1795 1795 1795 3 786 786 786 786 786 786 786 C7 4 1179 1179 1179 1179 1179 1179 1179 5 496 496 496 496 496 496 496 C8 16 418 418 418 409 409 409 409 17 1583 1583 1583 1583 1583 1583 1583 C9 18 1218 1218 1218 1212 1212 1212 1212 19 3087 3087 3087 3087 3087 3087 3087 cio 20 2633 2633 2633 2631 2631 2631 2631 21 451 451 451 454 452 452 451 cii 22 527 527 527 532 532 528 527 23 2782 2763 2782 2763 2763 2763 2763 C12 24 2108 2108 2097 2097 2097 2097 2097 25 2172 2172 2172 2172 2172 2172 2172 C13 26 2459 2459 2459 2459 2459 2459 2459 129 Table 8.7 (Continued) The results of the experiments with test problems for six machine problems by considering minimization of makespan criterion Initial 1 Initial 2 eD -. #) ri TS1 TS2 TS3 TSI TS2 TS3 27 564 564 564 564 565 564 564 28 632 632 645 633 631 633 631 29 2538 2538 2538 2528 2528 2528 2528 C15 30 2223 2223 2223 2223 2223 2223 2223 31 3269 3268 3269 3269 3266 3269 3266 C16 32 2945 2945 2945 2945 2945 2945 2945 685 682 683 681 682 682 681 C17 34 983 979 981 980 983 984 979 3769 3769 3769 3755 3749 3755 3749 C18 36 2942 2942 2942 2942 2942 2942 2942 5305 5299 5305 5272 5272 5272 5272 C19 38 3942 3941 3941 3954 3948 3946 3941 731 731 743 728 728 734 728 C20 40 740 740 751 758 749 755 740 41 3803 3797 3803 3803 3797 3803 3797 C21 42 4775 4768 4781 4778 4778 4778 4768 4479 4478 4479 4473 4473 4473 4473 C22 44 5428 5428 5428 5449 5449 5449 5428 1075 1075 1094 1100 1094 1090 1075 C23 993 979 993 990 986 991 979 5296 5288 5296 5297 5286 5291 5286 C24 48 5633 5633 5633 5643 5627 5643 5627 4947 4920 4947 4943 4943 4943 4920 C25 50 5045 5035 5045 5051 5032 5051 5032 51 1092 1092 1106 1082 1077 1084 1077 C26 52 1194 1176 1206 1189 1189 1198 1176 5900 5899 5900 5923 5891 5919 5891 C27 54 5866 5848 5848 5882 5879 5872 5848 130 Table 8.8 The lower bound value of test problems for six machine problems by considering minimization of makespan criterion D D rD E 0 D 0 -. 0 -. e' 0 . 1 1666 0.05 1682 1666 1676 0.006 0.006 Cl 2 1086 0.02 1086 1086 1086 0.000 0.000 3 279 262 0.2 284 279 279 0.065 0.000 C2 4 169 156 0.03 204 169 169 0.083 0.000 1391 0.03 1427 1391 1420 0.021 0.021 C3 6 797 753 0 797 797 797 0.058 0.000 1863 0.05 1871 1863 1863 0.000 0.000 C4 8 1477 0.02 1477 1477 1477 0.000 0.000 9 217 195 0.02 228 217 217 0.113 0.000 CS 10 390 0.66 424 390 410 0.051 0.051 1442 0.02 1481 1442 1466 0.017 0.017 C6 12 1765 0.05 1795 1765 1795 0.017 0.017 13 786 0.01 786 786 786 0.000 0.000 C7 14 1179 0.02 1184 1179 1179 0.000 0.000 15 460 0.16 497 460 496 0.078 0.078 C8 16 383 0.19 411 383 409 0.068 0.068 17 1530 0.02 1598 1530 1583 0.035 0.035 C9 18 1159 0.02 1215 1159 1212 0.046 0.046 19 3062 0.11 3310 3062 3087 0.008 0.008 ClO 20 2631 0.09 2666 2631 2631 0.000 0.000 21 430 2.28 477 430 451 0.049 0.049 22 506 60.92 543 506 527 0.042 0.042 23 2748 0 2805 2748 2763 0.005 0.005 C12 24 2084 0.09 2097 2084 2097 0.006 0.006 25 2172 0.06 2195 2172 2172 0.000 0.000 C13 26 2459 0.5 2465 2459 2459 0.000 0.000 27 553 19.33 582 553 564 0.020 0.020 C14 28 619 34.17 650 619 631 0.019 0.019 29 2500 0.05 2555 2500 2528 0.011 0.011 C15 30 2189 0.05 2257 2189 2223 0.016 0.016 31 3266 0.09 3282 3266 3266 0.000 0.000 C16 32 2945 0.08 2967 2945 2945 0.000 0.000 667 7.48 705 667 681 0.021 0.021 C17 34 949 408.47 1021 949 979 0.032 0.032 3730 2.13 3838 3730 3749 0.005 0.005 C18 36 2898 0.08 2952 2898 2942 0.015 0.015 131 Table 8.8 (Continued) The lower bound value of test problems for six machine problems by considering minimization of makespan criterion 0 eD. eDeb 00 r B E © 0 © 0 - 5255 94 5337 5255 5272 0.003 0.003 C19 38 3927 3.22 3940 3927 3941 0.004 0.004 694 36550 746 694 728 0.049 0.049 C20 40 721 21253 801 721 740 0.026 0.026 41 3763 4.58 3834 3763 3797 0.009 0.009 C21 42 4726 1.3 4800 4726 4768 0.009 0.009 4 4455 6.41 4506 4455 4473 0.004 0.004 C22 44 5396 48.17 5459 5396 5428 0.006 0.006 1033 58574 1120 1033 1075 0.041 0.041 C23 46 957 53166 1035 957 979 0.023 0.023 5225 20.97 5317 5225 5286 0.012 0.012 C24 48 5572 24.19 5688 5572 5627 0.010 0.010 4911 0.92 4973 4911 4920 0.002 0.002 C25 50 5027 25.03 5067 5027 5032 0.001 0.001 51 1047 18571 1136 1047 1077 0.029 0.029 C26 52 1136 23506 1239 1119 1176 0.051 0.035 53 5829 16.16 5957 5829 5891 0.011 0.011 C27 54 5802 1.31 5914 5802 5848 0.008 0.008 The normal probability plot of the residuals confirms that the residuals have a normal distribution (Figure 8.5). Thus, there is evidence that the parametric statistics-based analysis of variance (ANOVA) can be used to further analyze the results. 15 + 10 5 F' 0 -s -10 + 1s 1 I S tO 2S 50 75 90 SS 99 99. 9 flor.,.al ra,-cen tiles Figure 8.5 The normal probability plot of the experimental design of finding the best heuristic algorithm for six machine problem by considering minimization of makespan 132 The SAS 9.1 is used to perform the experimental design to find the best heuristic algorithm as well as the best initial solution. The ANOVA table is shown in Table A.6 in appendix. The results of the experiment show that there is a significant difference among the objective function values of heuristic algorithms (the result of F test is equal to 0.0004). To find the best heuristic algorithm, a Tukey test is performed. The result of Tukey's test shows that TS2 has a better performance compared to the other two heuristic algorithms. The results of the experimental design show that there is no difference between the initial solution generator for six machine problems (the result of F test is equal to 0.3 344). Among the interactions, the interaction between the group factor and all sub-plot factors (G*A and G*I) are significant. This supports the idea of importance of the group factor. The significant interactions that include the initial solution factor are G*R1*J, J*Rl*I, and G*J*R1*I. Table A.7 in appendix shows the result of the effect slice test for detailed comparisons by considering the highest significant order interactions for the algorithm and the initial solution effects i.e., G *A and G*J*R1 *J based on Tukey-Kramer adjustment. This table shows the performance of the heuristic algorithms as well as the initial solutions for each cell of the experimental design. Based on the result, the significant differences are as follows: For all large size group problems, there is a significant difference among the performance of the heuristic algorithms. In these problems, TS2 has a better performance compared to the other heuristic algorithms. This result came from a Tukey test. For large size group, large size job, and the third level of set-up ratio factor, (the set-up time of groups on machines are decreased from M1 to M6), there is a significant difference between the performances of initial solutions. In these problems, the random initial solution generator has a better performance. 133 8.1.3.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms for Six-machine Problems by Considering Minimization of Makespan The time spent to terminate the search algorithm and the time spent to find the best solution for each heuristic algorithm are shown in Table 8.9 for all test problems. Table 8.9 The time spent for the test problems of six machine problems (in seconds) by considering minimization of makespan criterion Initial 1 Initial 2 TS1 TS2 TS3 TS1 TS2 TS3 C e C C C C D C e { * eD 2 2 2 2 2 2 2 2 2 2 2 e eD eD e eD ci _L JL JL 0 0 0 0 L L JL JL. 0 2 0 0 0 0 0 0 0 0 0 0 0 0 C2 p-- 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 C3 6 0 Q 0 0 0 0 0 0 JL 0 0 0 L 0 0 0 0 0 0 0 0 0 C4 8 p0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 C5 --- p-- 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 C6 !L 0 0 0 0 0 0 0 0 12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 C7 14 0 0 0 0 0 0 00 0 0 0 0 C8 _2_ 0 0 0 0 0 0 0 0 0 0 16 0 0 0 0 0 0 0 0 0 0 0 0 _iL 0 0 0 0 0 0 0 0 0 0 0 0 C9 18 19 0 0 0 0 10 0 0 0 00 0 0 0 0 1 0 9 1 1 0 8 1 8 1 cio 20 0 0 1 1 1 0 00 1 0 1 0 cli IL 0 0 0 0 0 0 0 0 0 0 1 0 22 1 0 3 0 4 0 10 2 0 3 1 2 1 2 0 1 0 2 0 1 0 C12 24 0 0 0 0 0 0 00 0 0 0 0 134 Table 8.9 (Continued)The time spent for the test problems of six machine problems (in seconds) by considering minimization of makespan criterion Initial 1 Initial 2 TSI 0____ ____ TS2 eD o o TS3 TS1 TS2 TS3 2 : 2 2 2 2 2 5 2 2 2 2 e eD e eD D D ep eD eD et 0 0 0 0 0 0 C13 26 0 0 1 0 1 0 0 0 1 0 1 0 27 2 0 7 0 9 1 2 1 9 1 7 2 C14 28 4 3 19 6 22 4 3 1 21 10 21 1 Cl 5 _2_ _L 0 3 0 6 0 1 0 5 1 4 1 30 0 0 1 0 1 0 0 0 1 0 1 1 31 6 4 46 31 46 13 6 4 29 17 28 8 Cl 6 32 3 1 25 3 24 3 3 0 15 0 16 1 33 5 2 15 12 13 7 4 1 16 3 12 5 c 7 34 45 11 204 197 177 27 45 36 227 166 195 42 35 22 5 135 11 118 6 23 16 122 61 65 20 C18 36 3 2 19 3 25 8 3 2 23 5 25 6 37 23 20 187 182 177 60 23 18 188 50 168 50 C19 38 4 3 33 27 33 22 4 3 29 11 30 15 39 14 12 98 30 97 28 13 7 100 18 94 32 C20 40 17 7 103 14 106 14 17 3 123 116 117 37 41 5 5 31 14 35 12 5 1 35 21 34 3 C2 1 42 21 12 103 64 140 119 20 19 153 53 107 34 43 18 7 137 110 136 19 17 8 112 19 112 19 C22 44 86 31 668 93 666 93 80 2 659 6 661 6 45 108 49 750 129 895 216 113 88 873 478 891 178 C23 46 76 24 496 269 424 381 77 74 496 437 437 105 47 90 66 514 316 684 174 88 54 740 438 624 600 C24 48 166 117 1061 281 1138 309 141 35 1137 1048 1108 103 49 145 133 1111 973 1075 393 144 23 412 25 395 25 C25 50 116 55 942 838 946 163 119 78 592 560 497 134 51 98 8 688 282 751 110 99 8 768 455 774 201 C26 52 139 100 985 881 1102 91 153 130 1142 377 1115 260 285 206 1889 1634 2455 618 307 209 907 520 2293 1943 C27 54 362 24 2709 2634 2298 2242 346 67 1799 898 1020 677 The normal probability plot of the residuals is shown in Figure 8.6. The plot confirms that the residuals have a normal distribution. Thus, there is evidence that the parametric statistics-based analysis of variance (ANOVA) can be used to further analyze the results. 135 750 + 500 250 II + 0 -250 + + -500 I-,- -750 .1 1 5 10 25 50 75 90 95 99 99.9 Nn.-n1 PtI1e Figure 8.6 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for six machine problem by considering minimization of makespan The ANOVA table for comparison of time spent is presented in Table A.8. The results of the experiment show that there is a significant difference among the time spent for heuristic algorithms (the result ofF test is less than 0.0001). The Tukey test is applied to find the difference. The result of Tukey' s test shows that the time spent by TS 1 is less than the other two heuristic algorithms. The results of the experiment show that there is a significant difference between the time spent by algorithms in applying different initial solution generators for six machine problems (the result of F test is equal to 0.0256). If the search is initiated with the second initial solution (Schaller el al., 2000), the search requires less time than the first one. Among the interactions, all interactions between the algorithm factor and the whole plot factors (G*A, J*A, R1*A, G*J*A, G*R1*A, J*R1*A, and G*J*R1*A) are significant. This result was expected, because by increasing the size of the problems (increasing the number of groups or number ofjobs in a group), the difference between the time spent by TS 1 and the other two heuristic algorithms (TS2 and TS3) will be increased. 136 The interaction of the initial solution factor and the group and job factors (G*I, JJ, and G*J*I) are significant as well. The interactions of whole plot factors (G, J, and Ri) with sub-plot factors (A and I) are also significant. Table A.9 in appendix shows the result of the effect slice test for detailed comparisons by considering the highest significant order interactions for the algorithm and the initial solution effects i.e., G*J*R1*A and G*J*Rl*1 based on Tukey-Kramer adjustment. This table shows the performance of the heuristic algorithms as well as the initial solutions for each cell of the experimental design. Based on the result, the significant differences are as follows: For large size group, medium and large size job problems, there is a significant difference between the time spent by the algorithms. in all of these problems, TS1 required less time compared to TS2 and TS3. This result was expected because by increasing the size of the problems, the difference between time spent by TS1 compared to TS2 and TS3 will be increased. For large size group, large size job, the first and the third level of set-up ratio factor problems, there is a significant difference between the performance of initial solutions. Iii both of these cases, the second initial solution has a better performance. 8.1.3.3 The Comparison Between the Best Tabu Search and the Results of Schaller et al. (2000) Algorithm for Six-Machine Problems by Considering Minimization of Makespan Criterion In this section, a paired t-test is performed between the results of the best tabu search algorithm and the results of Schaller et al. (2000) algorithm for test problems. The result of Schaller et al. (2000) algorithm for test problems is presented in Table B.3 of appendix. As discussed in section 8.1.3.1, TS2 has the best performance compared to the other algorithms. Because there is no difference between the initial solution generators, the results of TS2 by considering the first initial solution generator is applied to be compared with the result of Schaller et al. (2000) algorithm. The result of 137 the paired t-test shows a significant difference between the results of two algorithms. In other words, TS2 has a better performance compared to the Schaller et al. (2000) algorithm for six machine problems. The average error percentage of Schaller et al. (2000) algorithm for the test problems is equal to 7% and the maximum percentage error for a test problem is equal to 31%. These results are too high compared to the one obtained by the proposed heuristic algorithms (1.60%). 8.2 The Results for Minimization of Sum of the Completion Times Criterion The results for two, three, and six machine problems by considering the minimization of sum of the completion times criterion are as follows: 8.2.1 The Results of Two-Machine Problems by Considering Minimization of Sum of the Completion Times Criterion All 54 test problems of two machine problems are solved by heuristic algorithms to find the algorithm with the best performance and the best initial solution generator. In the interest of time, the lower bounding technique is also applied to only some of the test problems to evaluate the quality of solutions. The results are presented in three sections. . In the first section, the results of the heuristic algorithms are presented. The result of the experimental design to find the algorithm with the best performance as well as for finding the best initial solution generator are presented as well. . In the second section, an experimental design to compare the time spent for heuristic algorithms is performed and the results are presented. . In the third section, the result of the lower bounding technique for a few test problems to estimate the quality of solutions is presented. For this criterion, because solving the decomposed problem requires an enormous amount of time for large size problems, in the interest of time, only a few of test problems are solved by the lower bounding technique to evaluate the quality of solutions. 138 8.2.1.1 Comparison Among Heuristic Algorithms for Two Machine Problems by Considering Minimization of Sum of the Completion Times The results obtained from applying the heuristic algorithms are shown in Table 8.10 by using two different initial solution generators. In this table TS1 stands for the tabu search algorithm with short term memory, TS2 stands for the LTM-Max, and TS3 stands for LTM-Min. Table 8.10 The results of the test problems for two machine problems by considering minimization of sum of the completion times -u Initial I Initial 2 - C) & - a c a a . -. CD . Q cn g TSI TS2 TS3 TSI TS2 TS3 1 4 4 13 2011 2011 2011 2011 2011 2011 2 3 4 10 1425 1425 1425 1436 1436 1436 3 3 4 8 811 811 811 811 811 811 -- 3 -p--- 454 454 454 454 454 454 5 5 4 16 3290 3290 2925 2906 2906 2906 6 3 4 8 986 986 986 986 986 986 7 5 7 28 6061 6061 6061 5796 5796 5796 8 4 6 22 4310 4310 4310 4310 4310 4310 9 3 7 17 2600 2600 2607 2592 2592 2592 10 2 7 13 1263 1263 1263 1300 1300 1300 11 5 7 31 9064 9047 9064 9024 8876 8876 12 4 7 25 5595 5589 5497 5579 5579 5493 13 5 10 31 8442 8442 8160 8423 8423 8085 14 2 8 15 2160 2160 2132 2091 2091 2091 15 5 9 35 8012 8012 8012 8787 8787 8254 16 5 10 40 10456 10456 10364 10453 10340 10453 17 4 9 31 6336 6336 6336 6365 6336 6345 18 4 10 28 5590 5590 5590 5563 5563 5563 19 6 4 17 3558 3406 3408 3233 3233 3233 20 8 4 25 5960 5960 5960 5952 5952 5952 21 9 3 22 4129 4129 4129 4129 4129 4129 22 9 4 27 5643 5643 5643 5643 5643 5643 23 10 4 33 9904 9904 9904 10096 10058 10096 24 6 4 16 3250 3207 3043 3043 3043 3043 25 9 7 60 27506 27501 27506 27506 27501 27506 26 10 7 62 25708 25708 25708 25772 25600 25772 27 6 7 33 7702 7714 7702 7725 7724 7725 28 10 7 52 17234 17234 17234 17255 17196 17255 139 Table 8.10 (Continued) The results of the test problems for two machine problems by considering minimization of sum of the completion times -I 0 a Initial I Initial 2 2o TSI TS2 TS3 TSI TS2 TS3 29 6 7 32 8241 8055 8241 8630 8630 8241 30 10 7 49 17615 17187 17902 17192 17187 17192 31 8 9 41 12691 12693 12691 12689 12689 12689 32 8 10 51 18482 18480 18482 18482 18430 18482 33 10 10 71 31336 30700 30692 30717 30690 30718 34 6 8 29 6187 6116 6187 5986 5986 5986 35 9 9 58 21438 21441 21459 21428 21428 21428 36 6 10 31 8117 8117 8117 8124 8124 8124 37 11 4 29 8248 8234 8248 8100 8100 8100 38 13 4 40 14798 14246 14798 15066 15066 14500 39 16 4 46 15053 15028 15053 15026 15026 15026 40 13 4 40 11813 11813 11813 11677 11677 11677 41 14 4 40 15533 14386 15333 14461 14461 14211 42 15 4 46 21136 20581 20932 19352 19305 19352 43 16 7 79 49111 48949 49111 47642 47532 47642 44 13 7 63 30751 30646 30751 31680 31263 31263 45 16 7 66 27654 26966 27657 26318 26149 26318 46 12 7 65 28028 27071 27681 28776 27446 28776 47 15 7 76 43055 43055 43090 43889 43318 43889 48 14 7 69 39839 39839 39893 38080 38080 38080 49 11 10 75 35438 35438 35438 37090 36467 36915 50 15 10 99 66392 64642 66392 66455 65053 66455 51 12 10 83 39542 39392 39542 38140 38145 38140 52 16 10 106 69827 69103 69689 70317 70260 70309 53 15 10 79 45664 45403 45471 44358 43758 44428 54 16 10 108 75519 76717 77832 78802 76198 78802 The normal probability plot of the residuals confirms that the residuals have a normal distribution (Figure 8.7). Thus, there is evidence that the parametric statistics-based analysis of variance (ANOVA) can be used to further analyze the results. 140 1500 + 1000 ++ 500 II 0 0 -500 -1000 + -1500 .1 1 5 10 25 50 75 90 95 99 99.9 Ilornal Percentilen Figure 8.7 The normal probability plot of the experimental design of finding the best heuristic algorithm for two machine problem by considering minimization of sum of the completion times The result of ANOVA is presented in Table A. 10. The results of the experiment show that there is a significant difference among the objective function values of heuristic algorithms (the result of F test is equal to 0.03 94). To find the best heuristic algorithm, a Tukey test is performed. The result of Tukey' s test shows that TS2 is a better performer compared to the other two heuristic algorithms. The results of the experimental design also show that there is no difference between the initial solution generators for two machine problems (the result ofF test is equal to 0.0960). Among the interactions, only the interaction between group factor and algorithm factor (G*A) is significant. It means that by changing the size of the problem, the suitability of the heuristic algorithm can be changed. The significant factors and interactions are shown in bold in Table A.10. A test of effect slice is performed to obtain detailed information by considering the highest significant order interactions for the algorithm effect i.e., G *A based on 141 Tukey-Kramer adjustment. The results are shown in Table A. 11 in appendix. Based on the results, for large size group problems, TS2 has a better performance compared to the other heuristic algorithms. 8.2.1.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms The time spent to terminate the search algorithm and the time spent to find the best solution for each heuristic algorithm are shown in Table 8.11 for all test problems. Table 8.11 The time spent for the test problems of two machine problems (in seconds) by considering minimization of sum of the completion times Initial 1 Initial 2 TSI TS2 TS3 TS1 TS2 TS3 _ eD _I - - - - 2 2 2 2 2 B 1 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 11 0 0 0 0 0 0 0 0 0 0 0 0 12 0 0 0 0 0 0 0 0 0 0 0 0 13 0 0 0 0 0 0 0 0 0 0 0 0 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 0 0 0 0 0 0 0 0 0 0 0 0 142 Table 8.11 (Continued) The time spent for the test problems of two machine problems (in seconds) by considering minimization of sum of the completion times Initial 1 Initial 2 TSI TS2 TS3 TS1 TS2 TS3 _;;- - -;-- -;- -;;__ - ; - - - - 2 2 2 2 2 2 17 0 0 0 0 0 0 0 0 0 0 0 _Q_ 18 0 0 0 0 0 0 0 0 0 0 0 0 19 0 0 0 0 0 0 0 0 0 0 0 0 20 1 1 5 1 6 2 1 0 6 1 5 1 21 1 1 6 2 7 3 1 0 5 0 6 0 22 2 1 14 2 13 2 2 1 11 3 12 3 23 4 1 26 2 23 2 2 1 23 7 25 4 24 0 0 0 0 0 0 0 0 0 0 0 0 25 16 13 70 42 129 29 29 12 87 74 107 26 26 54 31 349 87 309 88 21 19 177 102 238 50 27 0 0 0 0 1 1 0 0 0 0 0 0 28 36 27 121 49 143 60 30 4 105 74 182 10 29 0 0 1 0 1 0 0 0 0 0 1 0 30 22 18 62 27 82 80 22 7 62 38 58 12 31 4 4 41 10 24 14 8 3 44 8 37 8 32 15 2 34 29 103 5 16 1 69 35 95 5 33 77 8 304 70 352 99 82 45 221 114 321 151 34 0 0 1 1 0 0 0 0 0 0 0 0 35 24 14 102 95 126 97 34 4 81 4 94 4 36 0 0 0 0 1 0 0 0 0 0 1 0 37 2 1 16 10 15 2 2 1 7 4 7 4 38 8 7 51 50 47 23 5 2 54 6 47 47 39 25 11 196 134 198 32 24 18 185 53 200 54 40 11 6 92 19 71 20 12 7 87 19 96 19 41 10 10 82 77 93 28 11 6 80 19 89 60 42 25 16 194 69 194 155 23 7 195 167 185 23 43 100 43 536 290 613 86 90 10 756 385 716 30 44 88 36 406 186 146 51 42 37 227 81 216 73 45 125 30 489 429 720 75 110 28 666 543 730 61 46 43 32 386 385 324 261 22 21 311 288 183 26 47 162 34 712 66 1078 68 164 29 749 736 775 40 48 115 107 534 251 835 315 110 94 570 222 860 260 49 37 6 187 8 146 11 14 2 161 147 195 8 50 228 76 1643 1457 1055 163 189 36 1385 1080 513 38 51 211 99 1167 621 1081 276 90 18 1478 43 1868 59 52 371 115 1069 891 1701 549 327 219 1763 454 1734 436 53 181 101 956 805 1399 495 186 74 1281 597 1329 492 54 471 342 1159 775 1981 311 432 86 1188 1115 1238 91 143 The normal probability plot of the residuals is shown in Figure 8.8. The normal probability plot of the residuals confirms that the residuals have a normal distribution. Thus, the ANOVA can be applied for detailed comparison. 400 -f + + 200 -1+ Fl a 0 -200 + +1- + -400 .1 1 5 10 25 50 75 90 95 99 99.9 Normal Percentiles Figure 8.8 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for two machine problem by considering minimization of sum of the completion times criterion The ANOVA table for comparison of time spent is presented in Table A.12 in appendix. The results of the experiment show that there is a significant difference among the time spent for heuristic algorithms (the result of F test is less than 0.000 1). The Tukey test is applied to find the difference. The result of Tukey's test shows that the time spent by TS 1 is less than the other two heuristic algorithms. The results of the experiment show that there is not a significant difference between the time spent by algorithms in applying different initial solution generators for two machine problems (the result ofF test is equal to 0.9 170). 144 Among the interactions, the interactions between the algorithm factor and all of the whole plot factors (G*A, J*A R1*A, G*J*A, G*R1*A, J*R1*A, and G*J*R1*A) are significant. A test of effect slice is performed to obtain detailed information by considering the highest significant order interactions for the algorithm effect i.e., G*J*R1 *A based on Tukey-Kramer adjustment. The results are shown in Table A.13 in appendix. Based on the results, for large size group, medium and large size job problems, there is a significant difference between the time spent by the algorithms. In all of these problems, TS 1 required less time compared to TS2 and TS3. This result was expected because by increasing the size of the problems, the difference between the time spent by TS1 compared to TS2 and TS3 will be increased. 8.2.1.3 Evaluating the Quality of Solutions The lower bounding technique based on B&P algorithm is applied to estimate the quality of solutions. If the B&P algorithm is applied to solve large size problems, it requires a considerable amount of time to get a lower bound. Thus, in the interest of time, only a few of test problems are considered to be solved for estimating the quality of solutions as a sample. The size of the sample is considered 10 for two machine problems. Thus, every other five problems (problems number 1, 6, 11, ..., 51 of test problems) are considered to be solved by the lower bounding technique. The results are shown in the table below. This comparison is performed by considering the result of the best tabu search. The results show that the average percentage error is equal to 14.40%. The results are shown in Table 8.12. 145 Table 8.12 The results of the lower bounding technique for two machine problems by considering minimization of sum of the completion times criterion w -u CD CD >C 1 3 :' G) 2. CD - CD 0 B D !. CD -' -. Cl) a. -. CD c C) U) 0 0 U) CD CD CD a C U) . B 1 1 4 4 13 1857.2 2 2011 0.0828 6 3 4 8 964.15 2 986 0.0227 11 5 7 31 8831 5 8876 0.0051 16 5 10 40 9274.3 200 10340 0.1149 21 9 3 22 3567.6 6687 4129 0.1574 26 10 7 62 23299 13282 25600 0.0988 31 8 9 41 12513 5186 12689 0.0141 36 6 10 31 7462.4 3600 8117 0.0877 41 14 4 40 12533 28800 14211 0.1339 46 12 7 65 23915 85991 27071 0.132 51 12 10 83 22054 28800 38140 0.729 Average: 0.144 8.2.2 The Results of Three-Machine Problems by Considering Minimization of Sum of the Completion Times Criterion All 162 test problems of three machine problems are solved by heuristic algorithms to find the algorithm with the best performance and the best initial solution. In the interest of time, the lower bounding technique is also applied for some of the selected test problems to evaluate the quality of solution. The results are presented in three sections. In the first section, the results of the heuristic algorithms, the result of the experimental design to find the algorithm with the best performance as well as for finding the best initial solution are presented. In the second section, an experimental design to compare the time spent for heuristic algorithms is performed and the results are presented. In the third section, the result of the lower bounding technique for a few test problems to estimate the quality of solution is presented. 146 8.2.2.1 Comparison among Heuristic Algorithms for Three Machine Problems by Considering Minimization of Sum of the Completion Times The result of performing the heuristic algorithms is shown in Table 8.13 by applying two different initial solution generators. In this table TS 1 stands for the tabu search algorithm with short term memory, TS2 stands for the LTM-Max, and TS3 stands for LTM-Min. Table 8.13 The results of the test problems for three machine problems by considering minimization of sum of the completion times criterion Initial 1 Initial 2 . 2 TSI TS2 TS3 TSI TS2 TS3 1 2 4 7 1021 1021 1021 1021 1021 1021 1021 2 5 4 16 4246 4246 4246 4246 4246 4246 4246 3 4 4 12 2297 2123 2100 2133 2133 2103 2100 4 3 4 10 1354 1354 1354 1354 1354 1354 1354 5 3 4 11 1605 1603 1603 1653 1597 1597 1597 6 5 4 16 3448 3448 3448 3327 3327 3327 3327 7 5 4 15 3952 3952 3704 3616 3616 3616 3616 8 3 3 7 1228 1228 1225 1225 1225 1225 1225 9 3 3 8 1200 1200 1200 1200 1200 1200 1200 10 4 4 13 2244 2244 2244 2244 2243 2244 2243 11 4 4 15 3120 3120 3120 3121 3121 3121 3120 12 5 4 13 2463 2463 2444 2463 2463 2463 2444 13 5 4 14 3893 3893 3893 3894 3894 3894 3893 14 2 4 7 1005 1005 1005 1005 1005 1005 1005 15 4 4 13 2956 2956 2956 2956 2956 2956 2956 16 5 4 17 4817 4807 4807 4826 4826 4815 4807 17 2 3 5 708 708 708 708 708 708 708 18 4 4 13 3008 3008 3008 3008 3008 3008 3008 19 3 6 18 3694 3694 3694 3717 3717 3717 3694 20 4 6 22 5534 5534 5534 5546 5546 5542 5534 21 2 7 13 1900 1900 1879 1832 1832 1832 1832 22 5 7 31 8489 8489 8367 8974 8562 8366 8366 23 5 6 23 5770 5770 5770 5770 5770 5770 5770 24 4 6 20 3892 3890 3890 3890 3890 3890 3890 25 3 6 17 3964 3964 3964 3964 3964 3964 3964 26 4 7 23 6236 6236 6236 6236 6236 6236 6236 27 5 6 26 7468 7468 7468 7468 7437 7468 7437 28 3 7 19 4039 4039 3983 4046 4046 4046 3983 147 Table 8.13 (Continued) The results of the test problems for three machine problems by considering minimization of sum of the completion times criterion Initial 1 Initial 2 0 2 TSI TS2 TS3 TSI TS2 TS3 29 5 5 22 5363 5363 5370 5418 5418 5418 5363 30 2 7 13 1804 1804 1804 1810 1810 1810 1804 31 3 7 15 3628 3628 3628 3625 3625 3625 3625 32 5 6 23 7084 7084 7084 7100 7100 7100 7084 33 4 6 19 4599 4599 4599 4788 4744 4615 4599 34 3 5 15 3248 3248 3248 3233 3233 3233 3233 35 2 7 12 2098 2098 2096 2094 2094 2094 2094 36 5 7 29 10190 10190 9936 9587 9587 9587 9587 37 4 10 34 11014 11014 10809 10612 10595 10612 10595 38 5 9 38 13597 13597 13597 13608 13608 13608 13597 39 4 10 36 9685 9685 9685 9719 9691 9719 9685 40 3 10 27 6406 6406 6406 6420 6420 6420 6406 41 5 10 49 17126 17126 17126 17624 17561 17440 17126 42 2 8 16 2432 2432 2432 2369 2341 2369 2341 43 2 10 18 3905 3905 3905 3813 3813 3813 3813 44 4 9 31 9706 9706 9706 9706 9706 9706 9706 45 5 10 49 19640 19646 19640 20184 19882 20184 19640 46 4 8 31 9055 9015 9015 8919 8919 8919 8919 47 3 9 24 5739 5739 5739 5758 5758 5758 5739 48 5 9 42 14240 14229 14220 14300 14174 14300 14174 49 5 9 43 18616 18579 18496 18032 17979 18008 17979 50 3 8 23 6684 6684 6684 6692 6690 6690 6684 51 5 8 37 14760 14732 14727 14724 14724 14724 14724 52 3 9 23 6325 6325 6274 6248 6248 6248 6248 53 5 8 36 13822 13822 13754 13409 13409 13017 13017 54 4 10 36 12566 12553 12566 12483 12483 12483 12483 55 6 4 19 6689 6689 6428 5986 5986 5986 5986 56 8 4 20 6242 6242 6242 6213 6200 6200 6200 57 6 4 19 4424 4424 4424 4424 4424 4424 4424 58 7 4 17 4164 4159 4164 4164 4159 4164 4159 59 7 4 19 4466 4316 4466 4316 4316 4316 4316 60 9 4 24 7862 7717 7862 7717 7717 7717 7717 61 8 4 23 7974 7974 7974 7974 7974 7974 7974 62 9 4 32 12397 12397 12397 12397 12397 12397 12397 63 8 4 25 7458 7458 7458 7494 7458 7494 7458 64 7 4 21 6139 6139 6139 6032 6032 6032 6032 65 10 4 34 12744 12744 12744 12810 12810 12810 12744 66 6 4 18 4456 4241 4411 4233 4233 4233 4233 67 7 3 18 6396 6396 6396 6396 6396 6396 6396 68 10 4 29 13658 13658 13658 13658 13603 13658 13603 148 Table 8.13 (Continued) The results of the test problems for three machine problems by considering minimization of sum of the completion times criterion I; Initial 1 Initial 2 0 - TSI TS2 TS3 TSI TS2 TS3 [ 69 7 4 21 6643 6643 6643 6643 6643 6643 6643 70 9 4 30 12424 12390 12424 12424 12390 12424 12390 71 9 4 24 9346 9346 9364 9364 9319 9364 9319 72 6 4 18 5359 5359 5359 5381 5381 5381 5359 73 7 7 41 16892 16828 16897 16897 16886 16897 16828 74 6 5 28 9300 9197 9300 9197 9197 9197 9197 75 9 7 50 21656 21656 21753 21794 21794 21713 21656 76 8 6 39 13460 13446 13588 13587 13441 13587 13441 77 8 7 46 15917 15873 15917 15883 15883 15883 15873 78 9 7 49 20513 20425 20666 20505 20504 20505 20425 79 9 7 51 27969 27969 27969 28012 28001 27969 27969 80 6 6 30 10467 10278 10278 10444 10444 10444 10278 81 8 5 36 12575 12536 12574 12542 12498 12501 12498 82 10 6 54 24911 24615 24612 24675 24598 24696 24598 83 6 6 29 8126 8126 8126 8145 8145 8130 8126 84 10 6 49 21542 21484 20523 20479 20443 20479 20443 85 9 6 42 22799 22799 22799 22826 22826 22794 22794 86 8 5 36 17443 17443 17453 17453 17453 17465 17443 87 8 5 26 9677 9677 9677 9677 9677 9677 9677 88 10 7 40 18546 18508 18546 18851 18539 18851 18508 89 9 5 27 10789 10785 10786 10788 10788 10785 10785 90 10 7 48 23838 23838 23838 23976 24235 24130 23838 91 9 10 56 27038 26997 26930 26965 26965 26965 26930 92 7 9 43 17556 17469 17556 17305 17134 17134 17134 93 8 9 49 19897 19650 19897 19464 19396 19382 19382 94 9 10 54 22634 22634 22634 22634 22703 22703 22634 95 7 8 33 8786 8769 8760 8764 8760 8764 8760 96 10 10 62 29187 27589 28374 28168 28172 29037 27589 97 8 9 48 22158 22158 22225 22384 22384 22150 22150 98 6 10 41 16700 16697 16700 16472 16472 16472 16472 99 10 10 63 31747 31485 32116 31396 31487 31388 31388 100 8 9 38 12586 12586 12595 12650 12599 12638 12586 101 7 10 48 17697 17568 17697 17579 17579 17579 17568 102 10 8 40 16630 16447 16429 16639 16821 16639 16429 103 10 10 66 42413 42343 42535 42497 42497 42097 42097 104 8 7 36 16330 16191 16660 16321 16172 16321 16172 105 6 10 38 16726 15865 16742 15856 15856 15856 15856 106 9 8 43 21287 21198 21216 21210 21351 21264 21198 107 10 7 50 27672 27672 27672 27651 27651 27651 27651 108 8 10 60 33022 32986 33010 32938 32806 32806 32806 149 Table 8.13 (Continued) The results of the test problems for three machine problems by considering minimization of sum of the completion times criterion Initial 1 Initial 2 . 0 TSI TS2 TS3 TSI TS2 TS3 109 13 4 37 20387 20387 20387 20248 20248 20248 20248 110 15 4 39 23144 23045 23415 23079 22954 22826 22826 111 16 4 46 21423 21423 21423 21300 21051 21488 21051 112 14 4 42 18551 18048 18551 18019 18019 18019 18019 113 13 4 39 14721 14381 14721 14111 13864 14111 13864 114 15 4 45 19123 19060 19123 19072 19072 19207 19060 115 15 4 43 26182 26182 26182 26674 26674 26642 26182 116 14 4 44 27603 27603 27603 27359 27359 27359 27359 117 11 4 35 13806 13806 13806 13663 13663 13843 13663 118 14 4 39 18298 18298 18298 18782 18336 18782 18298 119 11 4 32 11533 11300 11464 11496 11330 11492 11300 120 15 4 50 25956 25338 24917 24812 24812 24182 24182 121 11 4 31 16106 16106 16106 16165 16165 16165 16106 122 15 4 47 32646 32436 32244 32535 32535 32535 32244 123 13 4 36 20034 19712 20285 19480 19382 19480 19382 124 12 4 35 17402 17397 17622 17385 17385 17385 17385 125 14 4 47 28640 28640 28640 28362 28362 28355 28355 126 13 4 40 21239 21239 21239 20403 20403 20403 20403 127 12 7 50 26059 26059 26236 25696 25696 25696 25696 128 14 7 83 67862 67900 67862 66510 65900 66510 65900 129 11 7 64 33204 33002 33002 33146 33146 33146 33002 130 14 7 88 63232 62764 62487 61666 64297 61666 61666 131 11 7 69 37175 37650 36845 37400 37166 37171 36845 132 16 7 104 75681 74487 75254 73891 76455 75987 73891 133 16 7 69 55749 55749 55749 52023 51733 52023 51733 134 11 7 50 27425 27172 27545 27780 27346 27269 27172 135 12 7 50 23146 23128 23177 23276 23276 23230 23128 136 14 7 53 28362 28124 26641 27475 27475 27503 26641 137 13 7 62 34306 34195 34201 34706 34706 34706 34195 138 15 7 66 40345 40070 40345 39288 38987 39804 38987 139 14 7 75 60096 59211 60088 60171 60171 60161 59211 140 13 7 54 35196 34247 35667 32294 32294 32294 32294 141 15 7 59 41144 40731 41144 40687 40687 40687 40687 142 16 7 66 48443 47863 48443 47756 47756 48383 38987 143 13 7 54 35570 35570 32994 33015 32661 33019 32661 144 15 7 68 50778 50759 51083 50192 49594 50192 49594 145 12 9 62 39367 39367 39367 39438 39156 39438 39156 146 14 10 74 52196 52196 51793 50876 50893 51524 50876 147 13 10 81 49274 49274 49274 49988 48266 49951 48266 150 Table 8.13 (Continued) The results of the test problems for three machine problems by considering minimization of sum of the completion times criterion Initial 1 Initial 2 0 eD = 1. ______ ______ ______ ______ ______ B TSI TS2 TS3 TSI TS2 TS3 148 15 10 85 55711 55711 56284 55007 55007 54790 54790 149 16 9 93 64832 64823 65497 65233 62415 62418 62415 150 15 10 110 82943 81906 83125 83593 83593 83467 81906 151 15 10 104 92931 91688 91394 92332 91179 92332 91179 152 14 9 77 62829 59902 61977 58563 58103 58541 58103 153 16 10 96 70956 69338 71199 71054 69767 69434 69338 154 13 9 66 38719 38291 38174 38596 37624 37717 37624 155 14 10 84 57010 57000 57038 56479 55577 55775 55577 156 11 10 86 53356 54701 53149 53824 53655 54044 53149 157 13 10 89 76753 76904 76904 76241 77832 77346 76241 158 15 10 101 95207 95207 95461 97472 97472 96318 95207 159 15 10 97 83626 84271 84722 83022 83343 82976 82976 160 14 10 86 65526 65489 64731 64935 64917 64524 64524 161 11 10 64 38024 41018 37999 38646 37931 37391 37391 162 14 8 77 60730 60730 60730 61955 59746 61955 59746 The normal probability plot of the residuals is shown in Figure 8.9. The plot confirms that there is evidence to apply the parametric statistics-based analysis of variance (ANOVA) to further analyze of the results. 1500 -I- + 1000 tft+ 500 R 0 -500 -1000 + -f -1500 .01 .1 1 5 10 25 50 75 90 95 99 99.9 99.99 Normal Percent I 1cc Figure 8.9 The normal probability plot of the experimental design of finding the best heuristic algorithm for three machine problem by considering minimization of sum of the completion times 151 The result of ANOVA is presented in Table A. 14. The results of the experiment show that there is a significant difference among the objective function values of heuristic algorithms (the result of F test equal to 0.0429). The result of Tukey's test shows that TS2 is a better performer compared to the other two heuristic algorithms. The results of the experimental design show that there is a significant difference between the results of heuristic algorithms by applying different initial solution generators for three machine problems (the result of F test is less than 0.0001). A comparison between the average values of objective functions shows that applying the second initial solution generator in heuristic algorithms provides a better solution. Among the interactions, the interaction between group factor and the initial solution factor (G*I and G*J*I) are significant. The significant factors and interactions are shown in bold in Table A.14. The other significant interactions are J*R2*I, G*J*R1*I, G*J*R2*I, and G*R1*R2*I. A test of effect slice is performed to obtain detailed information by considering the highest significant order interactions for the initial solution generator effect i.e., G*J*R1 *R2*I based on Tukey-Kramer adjustment. The results are shown in Table A. 15 in appendix. Based on the results, for the initial solution generators comparison, there is a significant difference among the performance of initial solutions in a few cells. These cells are presented in the table below with the best initial solution of each cell. Table 8.14 The experimental cells of three machine problems by considering minimization of sum of the completion times criterion in which the initial solution generators do not have the same performance The level The level of The level The level The best heuristic of group job factor of Ri of R2 algorithm factor 3 2 2 1 Initial solution generator 2 3 2 3 1 Initial solution generator 2 3 2 3 3 Initial solution generator 2 3 3 2 1 Initial solution generator 2 152 8.2.2.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms Table 8.15 presents the time spent to terminate the search algorithms and the time spent to find the best solution for each heuristic algorithm. Table 8.15 The time spent for the test problems of three machine problems (in seconds) by considering minimization of sum of the completion times criterion Urn-nn rutn w AUHAHAHUHAHA 153 Table 8.15 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of sum of the completion times criterion rrmnv - U IuII 154 Table 8.15 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of sum of the completion times criterion Initial 1 Initial 2 TS1 TS2 TS3 TSI TS2 TS3 0 - - - Il) * C E - - 2 2 2 2 2 2 2 2 2 2 2 2 6561 10 2 34 3 6 6 14 6 33 15 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1 1 0 68 2 2 9 2 5 3 3 2 12 7 16 3 69 70 0 3 0 1 1 18 1 13 1 20 0 4 10 3 0 1 21 0 11 1 21 0 2 71 1 0 6 1 10 0 1 0 6 5 8 0 j_ 0 0 0 0 0 0 o o 0 0 0 0 73 4 0 10 7 13 2 5 5 14 10 14 8 0 0 0 0 0 0 0 0 0 0 75 35 25 141 43 139 6 27 165 3 169 116 76 77 9 22 5 3 32 34 17 4 37 66 13 5 82 4 1 3 26 14 15 5 32 17 5 5 78 10 6 74 34 78 8 23 6 98 81 61 9 79 21 3 62 4 54 3 21 7 52 15 69 14 0 0 0 0 0 0 0 0 0 1 1 81 9 7 44 25 60 20 8 8 35 32 39 16 82 63 55 324 170 290 72 61 55 326 164 327 3 0 0 0 1 0 0 0 0 0 0 0 84 30 0 207 36 138 39 30 6 187 77 129 16 85 15 13 49 22 27 12 14 4 46 6 80 30 86 4 1 25 2 49 20 7 2 34 8 46 4 87 1 0 5 0 6 0 1 0 2 0 6 0 88 1 0 37 18 19 1 3 2 54 32 24 5 89 0 0 8 4 5 2 2 1 10 1 11 2 90 275 106 13 117 18 29 26 146 26 175 58 91 30 7 164 68 134 31 38 3 133 6 178 6 92 0 0 2 2 1 0 4 3 14 10 7 7 93 4 4 78 77 65 9 27 22 155 122 141 48 94 49 23 130 27 235 23 49 47 225 26 272 41 95 2 2 10 5 17 2 3 1 16 9 17 1 96 25 12 128 99 287 284 80 46 182 23 242 241 97 8 7 46 12 105 47 4 2 28 4 74 16 98 0 0 2 2 1 0 0 0 1 0 0 0 99 100 38 405 379 65 19 74 39 324 164 252 10 100 9 2 54 4 47 13 9 4 59 33 58 22 101 6 2 26 10 51 5 3 1 29 3 19 4 155 Table 8.15 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of sum of the completion times criterion Initial 1 Initial 2 TSI TS2 TS3 TS1 TS2 TS3 TS1 TS2 TS3 TS1 TS2 TS3 -i---;- a a a a a a E - 102 11 9 64 58 26 23 12 5 40 6 49 8 103 57 46 324 137 405 24 69 69 410 199 444 438 104 1 0 15 8 15 1 1 1 14 7 12 1 105 0 0 1 1 0 0 0 0 0 0 0 0 106 16 4 76 52 55 19 16 9 85 16 93 19 10719 1 76 3 69 3 18 7 44 8 36 7 108 28 5 107 47 172 11 38 3 95 92 134 131 109 0 0 26 1 33 1 10 10 52 28 52 28 110 7 6 66 29 62 25 8 1 63 62 51 8 111 33 29 212 83 274 88 36 35 270 190 295 93 112 29 16 219 141 217 46 23 20 71 61 103 61 113 14 13 117 108 90 41 14 10 99 82 69 30 114 31 16 199 112 266 50 34 26 267 76 260 22 115 18 16 148 50 137 48 3 1 24 3 87 70 116 27 23 131 65 151 66 1 1 14 5 85 5 1176 6 60 16 52 17 104 52 9 71 62 118 3 2 101 8 41 9 18 12 117 83 85 36 1194 2 41 30 32 14 1 1 32 12 20 2 120 47 25 355 324 382 70 47 35 360 101 345 102 121 6 1 40 1 44 1 6 0 21 2 38 1 122 35 24 189 167 255 61 41 30 217 64 266 87 123 9 8 50 45 76 67 8 6 45 21 45 16 1249 7 61 19 58 18 5 1 21 4 46 4 125 29 27 166 81 242 81 34 11 243 32 149 3 1263 2 33 4 44 3 104 80 16 37 9 127 31 23 118 44 160 34 28 13 129 28 168 26 128 107 61 275 87 911 209 130 45 456 282 363 95 129 99 73 419 295 510 436 96 1 481 2 337 1 130 19 15 260 188 363 357 96 49 552 407 1228 96 131 102 96 252 149 526 37 109 22 321 320 444 64 132 349 349 1274 366 2236 518 396 329 1654 1455 2273 237 133 91 28 433 61 548 85 104 72 656 461 515 145 134 14 14 100 95 106 4 1 0 67 39 45 21 135 38 5 108 54 194 1 40 28 211 81 225 97 136 97 92 486 308 729 370 96 76 882 404 673 35 137 66 21 477 383 552 530 22 14 158 37 245 39 138 123 80 613 559 1027 237 133 119 954 742 1080 27 156 Table 8.15 (Continued) The time spent for the test problems of three machine problems (in seconds) by considering minimization of sum of the completion times criterion Initial 1 Initial 2 TS1 TS2 TS3 TS1 TS2 TS3 a a a a a a 2 2 3 2 2 2 139 100 83 961 950 684 257 196 115 594 245 881 216 140 94 41 635 600 278 239 142 48 767 132 503 132 141 126 63 995 521 687 127 113 111 1036 327 701 226 142 163 116 1083 591 1623 344 173 165 1349 477 1393 287 143 83 67 432 158 343 227 29 5 480 259 431 79 144 205 192 946 399 1245 1055 161 46 1008 947 858 93 145 56 3 146 3 265 4 50 28 329 172 306 68 146 83 31 449 64 508 392 147 120 568 290 532 217 147 142 39 554 67 983 80 232 135 768 514 1050 247 148 310 224 1738 442 1103 304 272 268 1266 422 1286 367 149 352 352 2181 1378 2834 508 320 293 1957 1155 2675 2673 150 614 477 1878 140 1960 798 459 421 1188 494 2039 748 151 330 312 1451 969 1503 341 233 207 1268 610 1004 405 152 94 11 540 171 550 549 140 33 300 206 380 136 153 527 110 2576 1810 3216 754 520 30 2254 783 2060 1930 154 89 87 667 300 638 149 33 23 402 169 709 327 155 340 183 1077 400 1494 131 133 23 1221 1205 2134 596 156 179 69 793 101 1112 286 207 167 688 442 551 35 157 355 337 1035 273 567 212 310 258 779 459 892 280 158 460 433 1907 1039 2997 803 454 31 3006 91 2677 1073 159 241 187 2129 1287 1897 620 281 152 2076 191 1848 161 160 60 51 587 87 845 210 94 66 295 118 400 167 161 89 72 89 86 289 132 34 27 207 171 198 110 162 126 104 319 123 1202 309 136 80 1094 1084 1106 239 The normal probability plot of the residuals is shown in Figure 8.10. The normal probability plot of the residuals confirms that the residuals have a normal distribution. Thus, the ANOVA can be applied for detailed comparison. 157 750 + 500 250 A 0 -250 -500 + -750 .01 .1 1 5 10 25 50 75 90 95 99 99.9 99.99 Normal Percent I lee Figure 8.10 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for the three machine problem by considering minimization of sum of the completion times criterion The ANOVA table for comparison of time spent is presented in Table A.16. The results of the experiment show that there is a significant difference among the time spent for heuristic algorithms (the result of F test is less than 0.000 1). The Tukey test is applied to find the difference. The result of Tukey's test shows that the time spent to solve TS1 is less than the other two heuristic algorithms. The results of the experiment show that there is not a significant difference between the time spent by algorithms in applying different initial solution generators for three machine problems (the result ofF test is equal to 0.8320). Among the interactions, the interactions between the algorithm factor and all of the whole plot factors (G*A, J*A p*A, G*J*A, G*R2*A, R1*R2*A, G*R1*R2*A, J*R1*1*A, and G*J*R1*R2*A) are significant. The effect slice test is performed for more detailed comparisons. Based on the results, the summary of significant differences is as follows: For any large size group, medium and large size job problems, there is a significant difference among the time spent by the algorithms. In all of these problems, TS1 required less time compared to TS2 and TS3. This result was expected because by increasing the size of the problems, the difference between the time spent by TS1 compared to TS2 and TS3 will be increased. 8.2.2.3 Evaluating the Quality of Solutions The lower bounding technique based on B&P algorithm is applied to estimate the quality of solutions. As discussed before, in the interest of time, only a few test problems are considered to be solved for estimating the quality of solutions as a sample. The size of the sample is considered 20 for three machine problems. Thus, every other eight problems (problems number 1, 9, 17, 25, ..., 161 of test problems) are considered to be solved by the lower bounding technique. The results are shown in the table below. This comparison is performed by considering the results of the best tabu search. The results show that the average percentage error is equal to 17.2%. Table 8.16 The results of the lower bounding technique for three machine problems by considering minimization of sum of the completion times criterion 0 . - - -. 0 1 2 4 7 1021 1 1021 0 9 3 3 8 1052 1 1200 0.141 17 2 3 5 705 5 708 0.004 25 3 6 17 3833.86 31 3964 0.033 33 4 6 19 4435 198 4599 0.037 41 5 10 49 14806 12000 17126 0.157 49 5 9 43 16434 3600 17979 0.094 57 6 4 19 3810 700 4424 0.161 65 10 4 34 11637 27859 12744 0.095 73 7 7 41 16150.27 7064 16828 0.042 81 8 5 36 10327.12 29385 12498 0.210 89 9 5 27 9635 15788 10785 0.119 97 8 9 48 20350.59 17125 22150 0.088 105 6 10 38 15380.8 28818 15856 0.031 113 13 4 39 11092 32273 13864 0.250 159 Table 8.16 (Continued) The results of the lower bounding technique for three machine problems by considering minimization of sum of the completion times criterion 0 o - o '-. 0© . eD 0 o ..-. 0 p- 121 11 4 31 14321.72 17171 16106 0.125 129 11 7 64 28921.3 22417 33002 0.141 137 13 7 62 19626.1 28800 34195 0.742 145 12 9 62 24770.7 29990 39156 0.581 153 16 10 96 52752 32000 69338 0.314 161 11 10 64 30250 28800 37931 0.254 Average: 0.172 8.2.3 The Results of Six-Machine Problems by Considering Minimization of Sum of the Completion Times Criterion All 54 test problems of six machine problems are solved by heuristic algorithms to find the algorithm with the best performance and the best initial solution generator. In the interest of time, the lower bounding technique is also applied for some of the selected test problems to evaluate the quality of solution. The results are presented in the sections below: In the first section, the results of the heuristic algorithms and the results of the experimental design to find the algorithm with the best performance as well as for finding the best initial solution are presented. In the second section, an experimental design to compare the time spent for heuristic algorithms is performed and the results are presented. In the third section, the results of the lower bounding technique for a few test problems to estimate the quality of solution are presented. 8.2.3.1 Comparison among Heuristic Algorithms for Six Machine Problems by Considering Minimization of Sum of the Completion Times The results of performing the heuristic algorithms are shown in Table 8.17 by applying two different initial solution generators. 160 Table 8.17 The heuristic algorithms results of the test problems for six machine problems by considering minimization of sum of the completion times criterion Initial 1 Initial 2 -. - TS1 TS2 TS3 TS1 TS2 TS3 1 5 3 11 10256 10256 10256 10256 10256 10256 2 3 4 9 5583 5583 5583 5583 5583 5583 3 4 3 10 1951 1787 1911 1824 1824 1824 4 2 4 7 879 879 879 886 886 886 5 4 4 15 12878 12878 12878 12891 12891 12891 6 2 4 8 4762 4762 4762 4781 4781 4770 7 5 7 29 30223 30223 30223 30223 30223 30223 8 4 6 21 18235 18235 18235 18235 18235 18235 9 2 7 11 1608 1608 1608 1581 1581 1581 10 5 7 25 5972 5972 5988 6014 6014 6014 11 4 6 15 11843 11843 11834 11791 11791 11791 12 5 7 20 17308 17308 17308 17349 17349 17349 13 2 10 3 6198 6198 6198 6198 6198 6198 14 3 10 29 20779 20779 20779 20779 20779 20779 15 4 10 28 8353 8201 8204 8355 8355 8355 16 4 9 21 5419 5419 5382 5359 5359 5359 17 4 9 28 26106 26106 26106 25902 25902 25902 18 3 9 21 15350 15350 15350 15257 15257 15257 19 9 4 31 47870 47870 47870 47870 47870 47870 20 8 4 19 25487 25487 25487 25487 25487 25487 21 6 4 22 5854 5854 5854 5854 5854 5854 22 9 4 25 7553 7553 7553 7553 7553 7553 23 8 4 24 34155 34155 34155 34128 34128 34128 24 6 4 16 17977 17977 17977 18096 18096 18027 25 6 7 28 29061 29061 29061 28874 28874 28874 26 7 7 29 34477 34477 34477 34477 34477 34477 27 8 6 33 10255 10135 10205 10262 10140 10187 28 9 7 38 13704 13701 13935 13703 13688 13779 29 7 7 34 43663 43663 43663 43506 43506 43469 30 6 7 31 35504 35504 35412 35412 35412 35412 31 9 9 50 74057 74057 74057 74057 74057 74057 32 8 10 49 66666 66666 66666 66666 66666 66666 33 7 10 46 17881 17645 17699 18273 17853 18156 34 10 10 65 34341 34275 34472 34502 34683 34738 35 10 10 56 101652 101652 101652 101493 101493 101493 36 8 10 46 62877 62871 62877 62987 62987 62987 37 16 4 45 114308 114308 114308 111771 111771 111771 38 12 4 32 60020 60020 60020 60084 60084 60084 161 Table 8.17 (Continued) The heuristic algorithms results of the test problems for six machine problems by considering minimization of sum of the completion times criterion Initial 1 Initial 2 - © - eD___ TS1 TS2 TS3 TS1 TS2 TS3 39 14 4 37 15933 15844 15846 15594 15531 15594 40 13 4 40 17388 17376 17239 16813 16813 16954 41 11 4 34 64472 64472 64472 64465 64465 64465 42 14 4 45 108018 108018 108010 107175 107175 107175 43 13 6 47 93655 93655 93655 93655 93655 93655 44 16 7 63 159887 159887 159887 145647 145647 145647 45 15 7 71 41136 40711 41319 42209 42054 41739 46 14 7 60 33607 31692 32829 32435 32435 32435 47 15 7 69 161705 161705 161668 161830 161830 161688 48 16 7 78 205140 205140 205125 207399 207025 207399 49 13 10 95 224192 223247 219865 218940 218940 218940 50 14 10 80 185849 178816 185849 178766 178766 178766 51 13 10 72 41884 41948 41896 43251 42550 42657 52 15 10 77 50526 48397 49821 48451 48140 48607 53 16 9 100 277569 277523 277569 276153 273784 276153 54 15 10 117 343065 343090 342575 338906 338954 338723 The normal probability plot of the residuals confirms that the residuals have a normal distribution (Figure 8.11). Thus, the ANOVA can be performed to find the best heuristic algorithm as well as the best initial solution generator. 162 150000 + 100000 50000 Fl + 0 -50000 -100000 + -150000 .1 1 5 10 25 50 75 90 95 99 99.9 N.,rn 1 Pere.-.t I 1 Figure 8.11 The normal probability plot of the experimental design of finding the best heuristic algorithm for six machine problem by considering minimization of sum of the completion times criterion The ANOVA table is shown in Table A. 17 in appendix. The results of the experiment show that there is not a significant difference among the objective function values of the heuristic algorithms (the result ofF test is equal to 0.6 189). The results of the experiment also show that there is a significant difference between the performance of the initial solution generators for six machine problems (the result of F test is less than 0.0001). The comparison between the average objective function values of the initial solutions shows that applying the second initial solution provides a better solution than the first one. Among the interactions, the interactions between the initial solution factor and the group factor and the ratio factors (G*I, Ri *J, G*R1 *J, and G*J*R1 *1) are significant. A test of effect slice is performed to obtain detailed information by considering the highest significant order interactions for the initial solution generator effect i.e., G*J*R1 *J based on Tukey-Kramer adjustment. The results are shown in Table A. 18 in appendix. Based on the results, the summary of significant differences are as follows: 163 For the initial solution generators comparison, there is a significant difference among the performance of initial solutions in a few cells. These cells are presented in the table below with the best initial solution of each cell. Table 8.18 The experimental cells of six machine problems by considering minimization of sum of the completion times criterion in which the initial solution generators do not have the same performance The level of The level of The level group factor The best heuristic algorithm job factor of Ri 3 2 1 Initial solution generator 2 3 3 1 Initial solution generator 2 3 3 3 Initial solution generator 2 8.2.3.2 The Experimental Design to Compare the Time Spent for Heuristic Algorithms by Considering Minimization of Sum of the Completion Times Criterion The time spent to terminate the search algorithm and the time spent to find the best solution for each heuristic algorithm are shown in Table 8.19 for all test problems. Table 8.19 The time spent for the test problems of six machine problems (in seconds) by considering minimization of sum of the completion times criterion Initial 1 Initial 2 TS1 TS2 TS3 TS1 TS2 TS3 - - 1 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 ___Q_ 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 00 0 0 0 0 6 7 0 0 0 0 0 1 0 1 0 0 0 0 00 00 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 164 Table 8.19 (Continued) The time spent for the test problems of six machine problems (in seconds) by considering minimization of sum of the completion times criterion nuwii' 1* 1* H HUHUHUHUHUHU 165 Table 8.19 (Continued) The time spent for the test problems of six machine problems (in seconds) by considering minimization of sum of the completion times criterion Initiall Initial2 TSI TS2 TS3 TSI TS2 TS3 j -zr _i -1 2. m CD 2. g. Cfl a 0 a w CD -I 1 -I -I f -I ! -I 3 CD 3 CD 3 3 CD 3 CD CD 3 CD 3 CD CD CD CD CD CD 45 378 291 2430 1475 2890 949 332 2 2512 2283 3007 631 46 135 95 1015 779 2956 2948 330 27 664 49 1495 59 47 52 51 635 168 1002 168 47 35 894 114 768 613 48 388 268 4930 1827 2759 641 872 137 9174 7186 6233 609 49 16 14 65 64 128 51 50 14 68 14 239 15 50 13 12 125 62 118 23 37 15 207 44 536 43 51 268 43 2925 303 4052 853 140 134 2838 2381 3015 851 52 530 167 3021 981 4318 1197 719 191 5467 912 4579 1344 53 125 66 315 76 674 79 415 61 803 749 1887 97 54 415 321 2286 682 1318 720 504 304 3668 327 3064 3015 The normal probability plot of the residuals confirms that the residuals have a normal distribution (Figure 8.12). Thus, the ANOVA can be applied to find the best heuristic algorithm as well as the best initial solution generator. 3000 -I- 2000 + -F 1000 El 0 -1000 -I- + -I- -2000 + -3000 .1 1 10 2 50 75 90 95 99 99.9 P.cr.tl 1 Figure 8.12 The normal probability plot of the experimental design of finding the most efficient heuristic algorithm for six machine problem by considering minimization of sum of the completion times criterion 166 The ANOVA table for comparison of time spent is presented in Table A. 19. The results of the experiment show that there is a significant difference among the time spent for heuristic algorithms (the result of F test is less than 0.000 1). The Tukey test is applied to find the difference. The result of Tukey' s test shows that the time spent by TS 1 is less than the other two heuristic algorithms. The results of the experiment show that there is not a significant difference between the time spent by algorithms in applying different initial solution generators for six machine problems (the result ofF test is equal to 0.1152). Among the interactions, the interactions between the algorithm factor and all of the whole plot factors (G*A, J*A, R1*A, G*J*A, G*R1*A, J*R1*A, and G*J*R1*A) are significant. The effect slice test is performed for more detailed comparisons. The results are shown in Table A.20 in appendix. Based on the results, the summary of significant differences are as follows: For the heuristic algorithms comparison, there is a significant difference among the time spent by the heuristic algorithms in a few cells. These cells are presented in the table below with the best heuristic algorithm for each cell. Table 8.20 The experimental cells of six machine problems by considering minimization of sum of the completion times criterion in which the heuristic algorithms do not have the same time spent The level The level of The level The most efficient of group job factor of Ri heuristic algorithm factor 3 2 2 TS1 3 2 3 TS1 3 3 2 TS1 3 3 3 TS1 167 8.2.3.3 Evaluating the Quality of Solutions The lower bounding technique based on B&P algorithm is applied to estimate the quality of solutions. As discussed before, in the interest of time, only a few of test problems are considered to be solved for estimating the quality of solutions as a sample. The size of the sample is considered 10 for six machine problems. Thus, every other five problems (problems number 1, 6, 11, 21, ..., 51 of test problems) are considered to be solved by the lower bounding technique. The results are shown in the table below. This comparison is performed by considering the result of the best tabu search. The results show that the average percentage error is equal to 14%. Table 8.21 The result of the lower bounding technique for six machine problems by considering minimization of sum of the completion times criterion 0 0 - o - 0 © _. D 0 1 5 3 11 10256 1 10256 0.000 6 2 4 8 4502 4 4762 0.058 11 4 6 15 11678 36 11791 0.010 16 4 9 21 4256.45 23764 5359 0.259 21 6 4 22 4728.26 28800 5854 0.238 26 7 7 29 34477 41 34477 0.000 31 9 9 50 74057 1899 74057 0.000 36 8 10 46 60370 24924 62871 0.041 41 11 4 34 63360 29226 64465 0.017 46 14 7 60 19517 28800 31692 0.624 51 13 10 72 32389 28800 41884 0.293 Average: 0.140 168 CHAPTER 9: DISCUSSION The heuristic algorithms are applied to solve problems by using different initial solution generators for proposed criteria. The lower bounding technique for each criterion is also applied for the test problems to estimate the quality of solutions. The analysis of the results of the experiment for each criterion is as follows: 9.1 Analyzing the Results of Minimization of Makespan Criterion As discussed in chapter two, Schaller et al. (2000) investigated SDGS problems by considering minimization of makespan. They developed a heuristic algorithm to solve the problem and noted that their algorithm may not provide a good quality solution, but the result worth to be used as the initial solution of a heuristic search algorithm such as tabu search. In this research, all test problems of two, three, and six machine problems are solved by three versions of heuristic algorithm (tabu search) by applying two different initial solution generators. The first initial solution generator is a random sequence generator and the second one is developed based on the result of Schaller et al. (2000) algorithm. The lower bounding technique is also applied to get a lower bound for each test problem. The results of the experiment show that TS2 (LTM_Max) has the best performance compared to the other heuristic algorithms in all problems. In other words, it provides a better sequence for groups as well as jobs in each group. This result is expected because LTM_Max is more capable of obtaining a better solution than the other heuristic algorithms, because it is extensively searching around the areas (neighborhoods) that are historically found good (intensification). Based on the results, there is no significant difference between the objective function values of the heuristic algorithms by applying different initial solutions. This means 169 that applying Schaller et al. (2000) algorithm as the initial solution generator does not help to improve the quality of solution. The results also show that the heuristic algorithms (tabu search) provide a much better solution than Schaller et al. (2000) algorithm. These results are shown in Table 9.1. The feasible solution space of the problem has too many local optimum points. Thus, starting with a good quality local optimal solution as the initial solution does not guarantee of obtaining a better final solution by the heuristic algorithm. This may be the reason for not improving the quality of solutions by applying Schaller et al. (2000) algorithm as an initial solution generator. Table 9.1 The results of test problems for minimization of makespan criterion Percentage Percentage Is TS better The best The best Problem heuristic error for error for than initial Schaller et the best Schaller et algorithm solution al. (2000) tabu search al. (2000)? Two machine TS2 9.14% 0.68% Yes Three machine TS2 8.76% 1.00% Yes Six Machine TS2 7.08% 1.64% Yes The results of comparing the heuristic algorithms based on their efficiency (the time spent to perform the search) is shown in Table 9.2. Based on the results, for two machine problem, the random initial solution generator is better than that based on Schaller et al. (2000) algorithm. On the other hand, for the six machine problem, the Schaller et al. (2000) initial solution generator is a better performer. Table 9.2 The result of the most efficient initial solution generator by considering minimization of makespan criterion Problem The efficient initial solution The first initial solution Two machine generator Three machine The second initial solution Six Machine generator 170 9.2 Analyzing the Results of Minimization of Sum of the Completion Times Criterion There is no readily available algorithm from previous research for minimization of sum of the completion times criterion, and compare its performance with that of the heuristic algorithm. For this criterion, two different initial solution generators are developed. The fist is a random sequence generator, and the second is developed based on relaxing the problem to a single machine SDGS problem. All test problems of two, three, and six machine problems are solved by three versions of heuristic algorithm (tabu search) by applying these two initial solution generators. The lower bounding approach is also applied to get a lower bound for some test problems. The summary of the results are shown in Table 9.3. The results of the experiment show that TS2 (LTM_Max) has the best performance compared to the other heuristic algorithms for two and three machine problems. None of the heuristic algorithms show a superior performance compared to the others for six machine problems. The reason is that all possible combination of problems are not tested in this research because of interest of time. As mentioned before, only six machine problems in which the ratio of set-up time between consecutive machines are increased, equal, or decreased are considered in this research. The results also show that there is not a significant difference between the objective function values of the heuristic algorithms by applying different initial solution generators for two and three machine problems, but the second initial solution generator provides better results for six machine problem. The efficiency of TS1 (the time spent to perform the search by heuristic algorithm) is better than the other heuristic algorithms in all problems. 171 The percentage error of the problems is not satisfactory for large size problems in all two, three, and six machine problems. The reason is that for large size problems, the sub-problems cannot be solved optimally during their time limitation (two hours) and the lower bound obtained for sub-problems after two hours is not of good quality. Table 9.3 The results of test problems for minimization of sum of the completion times criterion The best The most The most The best initial efficient efficient initial Percentage Problem heuristic algorithm solution heuristic solution error generator algorithm generator Two TS2 TS1 14.4% machine Three TS2 TS1 17.2% machine Six ---- Initial 2 TS1 14.0% Machine As it is shown, the percentage errors of the problems are 14.4%, 17.2%, and 14.0% for two, three, and six machine problems. At each of these problem instances, there are a few problems with high percentage error (more than 50%). If these problems with high percentage error are removed from the sample, the percentage error of the problems is improved drastically. Table 9.4 shows the percentage errors by removing the problems with more than 50% error. Table 9.4 The percentage error of the test problems for minimization of sum of the completion times by removing problems with more than 50% percentage error Percentage Percentage error (by removing problems with more Problem error than 50% percentage error) Two 14.4% 8.5% (by removing problem 51) machine Three 17.2% 12.1% (by removing problems 137, 145) machine Six 14.0% 9.2% (by removing problem 46) Machine___________ 172 CHPATER 10: CONCLUSIONS AND SUGGESTIONS FOR FUTHURE RESEARCH Manufacturing companies need to improve their efficiency in order to survive in current competitive world. One way of improving the efficiency is reducing the production cost by producing the products as quickly as possible. It is clear that the longer time the products stay on the shop floor, the higher they cost the company. Cellular Manufacturing (CM), is known as a technique to improve the efficiency in batch type production by reducing the production time. In this approach, all machines of the production line of the company are assigned to several independent cells. Then, parts based on their similarity in shape or production requirements are set in different groups. Finally, the groups are assigned to cells according to the capability of available machines in each cell. This decomposition of machines and grouping parts leads to significant reduction in set-up time, work-in-progress inventories, and simplified flow of parts and tools which generally increase the production efficiency. The efficiency of production can be further improved, if the best sequence of processing groups in a cell as well as jobs that belong to a group are found based on maximizing or minimizing some measure of effectiveness. This subject is called Group Scheduling. Two relevant objectives in the investigation of group scheduling problems, minimization of makespan and minimization of the sum of the completion times, were considered in this research. The goal of these objectives is to process parts as quickly as possible and deliver them to the customer. In group scheduling problems, each group requires a major set-up on every machine. The separable set-up time scheduling problems are divided into two major categories: sequence dependent, and sequence independent scheduling. If the set-up time of a group for each machine depends on the immediately preceding group that is processed on that machine, the problem is classified as "sequence dependent group scheduling," Otherwise, it is called "sequence independent group scheduling". 173 In chapter two, it is shown that although a considerable body of literature on sequence dependent and sequence independent group scheduling has been created, there still exist several potential areas worthy of further research on sequence dependent and sequence independent group scheduling (Cheng et al., 2000). In this research, sequence dependent group scheduling problems are discussed by considering minimization of makespan and minimization of the sum of the completion times criteria. A mathematical model is developed to solve the problems optimally, but it is proved that the mathematical model is NP-hard. Thus, it is required to develop a heuristic algorithm to solve industry size problems in a reasonable time. Based on previous research, tabu search has produced a better performance compared to other heuristic algorithms in similar problems. Thus, a few versions of tabu search are developed to solve problems heuristically and generate solutions with good quality. Two different initial solution generators for the heuristic algorithms are developed for each criterion as well. To support the quality of solutions, a lower bound is required to estimate the quality of solutions. For each criterion, a different lower bounding mechanism is created to get lower bounds for problems. For the minimization of makespan, a lower bounding technique is developed by relaxing the mathematical model of the problem from sequence dependent group scheduling problem to sequence dependent job scheduling problem and adding a few constraints to get tighter lower bound. The results show that the average percentage error of the heuristic algorithm for this criterion is 0.68%, 1.00%, and 1.60% for two, three and six machine problems, respectively. 174 For minimization of the sum of completion times criterion, a lower bounding technique based on Branch-and-Price (B&P) is developed. In this model, the mathematical model is decomposed to a master problem and one or more sub-problems. The number of sub- problems is equal to the number of machines. The results show that the average percentage error of the heuristic algorithm for this criterion is 14.4%, 17.2%, and 14.0% for two, three and six machine problems, respectively. The experimental design techniques are applied to find the best heuristic algorithm and the best initial solution generator for each criterion. To compare the performance of the heuristic algorithms, for each machine size problem, the random test problems are generated and solved by the heuristic algorithms. Then the experimental design techniques are applied to find the best heuristic algorithm with the best performance as well as the best initial solution generator. 10.1 Suggestions for Future Research The suggestions for future research can be categorized as follows in the following sections: Defining new research problems related to the one discussed in this dissertation Applying new techniques to solve the proposed problems in this dissertation Each of the above items are discussed below 10.1.1 Defining Related Research Problems As Cheng et al. (2000) mentioned, there is still room for more research in the area of sequence dependent group scheduling problems. As mentioned, the research problem of this dissertation is constructed based on some assumptions. These assumptions are explained in chapter three. By relaxing any of 175 these assumptions, a new research problem can be defined. These research problems are as follows: The first assumption was based on permutation scheduling. In other words, in this research it was assumed that all jobs and groups are processed in the same sequence on all machines (permutation scheduling). If a company can relax this assumption in its production line, there are possibilities that the company can further reduce the production time and work-in-progress inventories. To solve the new research problem, the tools applied in this research can be applied as follows: o The mathematical model proposed in this research can be applied by making a few changes. o The heuristic algorithm proposed in this research can be applied by making minor changes in the section of calculating the objective function value of neighborhoods. o The lower bounding technique proposed for minimization of makespan criterion can be applied by making minor changes. o The lower bounding technique proposed based on B&P algorithm can be applied to get a lower bound to estimate the quality of solutions of heuristic algorithms. The second assumption was based on static job releases. In other words, in this research it is assumed that all jobs in each group are available at the beginning of the schedule. If this assumption is relaxed, the mathematical models, the heuristic algorithm techniques, and the lower bounding algorithm (B&P) can be applied by making changes for both criteria, but the changes can be substantial. The third assumption was about the priority of jobs as well as group. In this research it was assumed that all jobs and groups have the same importance (weight). In real world problems, there are cases that a company has orders in which they are more important than the other orders. This problem can be solved by the tools presented in this research by making the following minor changes: o The mathematical model proposed in this research can be applied by making some minor changes. 176 o The heuristic algorithm (tabu search) proposed in this research can be applied by making minor changes in the section of calculating the objective function value of neighborhoods. o The lower bounding technique proposed based on B&P algorithm can be applied to get a lower bound to estimate the quality of solutions of heuristic algorithms for both criteria. The last assumption was based on machine availabilities. In this research it was assumed that all machines are available at the beginning of planning horizon. There are situations in real world that some of the machines in the production line may not be available for a while. This problem can be solved by the tools presented in this research by making the following minor changes: o The mathematical model proposed in this research can be applied by making some minor changes. o The heuristic algorithm (tabu search) proposed in this research can be applied by making minor changes in the section of calculating the objective function value of neighborhoods. o The lower bounding technique proposed based on B&P algorithm can be applied to get a lower bound to estimate the quality of solutions of heuristic algorithms for both criteria. In this research, minimization of makespan and minimization of sum of the completion times are considered as a criterion. There are other criteria which may be more suitable for companies to apply. For instance, if a company has some stringent deadlines for its orders, then minimizing the sum of the tardiness of all job orders would be a better performance measure for the company. 10.2 Applying New Techniques (tools) to Solve Proposed Problems There are tools other than the ones applied in this research to solve problems. Some of these tools may have better performance than the ones applied in this research. It is valuable if their performance is examined with the ones proposed in this research. Some of these techniques are as follows: 177 Heuristic algorithm: Tabu search is applied to get good quality solutions in this research because of its better performance in previous research compared to genetic algorithm and simulated annealing. It may be worth to apply other heuristic algorithms such as ant colony algorithm to compare its performance with tabu search result. Lower bounding technique: The B&P algorithm is applied to get a lower bound for minimization of sum of the completion times criterion. The performance of the algorithm was not good enough. There is still room to improve the quality of the lower bound in one of areas below: o Applying other approaches for branching. The branching rule applied in this research (branch on the original variables i.e., AS variables) may not necessarily be the best way to branch. o Trying other stopping criteria. 178 BIBLIOAGRAPHY Allahverdi, A., Gupta, J.N.D., Aldowaisian, T., 1999, A Review of Scheduling Research Involving Setup Considerations, Omega, International Journal of Management Science, 27, 219-239. Allahverdi, A., 2000, Minimizing Mean Flowtime in a Two-machine Flowshop with Sequence Independent Setup Times, Computers and Operations Research, 27, 111- 127. Amini, M.M., Barr, R.S., 1993, Network Reoptimization Algorithms, A Statistically Design Compariosion, Orsa Journal of Computing, 5,4, 385-408. Bagga, P.C., and Khurana, K. 1986, Two Machine Flowshop with Separated Sequence- Independent Set-up Times: Mean Completion Time Criterion, Indian Journal of Management and Systems, 2, 1, 47-57. Baker, K.R., 1990, Scheduling Groups of Jobs in the Two-machine Flowshop, Mthematical and Computer Modeling, 13, 3, 29-3 6. Barnhart, C., Hane, C.A., Johnson, E.L., and Sigimondi, G., 1995A, a Column Generation and Partitioning Approach for Multi-commodity Flow Problem, Telecommunication Systems, 3, 239-258. Bamhart, C., Johnson, E.L., Nemhauser, G.L., Savelsbergh, M.W.P., and Vance, P.11., 1998, Branch and Price: Column Generation for Solving Huge Integer Programs, Operations Research, 46, 3, 3 16-329. Bellman, R., Esogbue, A.O., and Nabeshima, I., 1982, Mathematical Aspects of Scheduling and Applications, Pergamon Press,New York. Campbell, H.G., Dudek, R.A., and Smith, M.L., 1970, A Heuristic Algorithm for the n- Job and rn-Machine Sequencing Problem, Management Science, 16, 10, 630-637. Cheng, T.C.E., Gupta, J.N.D., and Wang, G., 2000, A Review of Flowshop Scheduling Research with Set-up Times, Production and Operations Management, 9, 3, 262- 282. Corwin, B.D., and Esogbue, A.O., 1974, Two-Machine Flowshop Scheduling Problems with Sequence Dependent Set-up Times: A Dynamic Programming Approach, Naval Research Logistics Quarterly, 21, 3, 5 15-524. Desrosiers, J., Dumas, Y., Solomon, M.M., Soumis, F., 1995, Time Constrained Routing and Scheduling. hi Handbooks in Operations Research and Management 179 Science. Ball, M. E., Magnanti, T.L., Monma, C., Nenthauser, G.L., Elsevier, Amesterdam. Flynn, B.B., 1987, The effects of Set-up time on output capacity in cellular manufacturing, International Journal of Production Research, 25, 1761-1772. Garey, M.D., Johnson, D.S., and Sethi, R., 1976, The Complexity of Flowshop and Jobshop Scheduling, Mathematics of Operations Research, 1, 2, 117-129. Glover, F., 1986, Future Paths for Integer Programming and Links to Artificial Intelligence, Computers and Operations Research, 13, 5 33-549. Glover, F., 1989, Tabu-Search-Part I, ORSA Journal of Computing, 1, 190-206. Glover, F., 1990a, Tabu-Search-Part II, ORSA Journal of Computing, 1, 4-32. Glover, F., 1990b, Tabu-Search: A Tutorial, Interfaces, 20, 74-94. Gupta, J.N.D., 1972, Optimal Scheduling in Multi-stage Flowshop, AIIE Transaction, 4, 238-243. Gupta, J.N.D., 1988, Flowshop Scheduling with Sequence Dependent Set-up Times, Proceeding of the ORSA/TIMS National Meeting, Washington DC. Gupta, J.N.D., and Darrow, W.P., 1986, The Two-Machine Sequence Dependent Flowshop Scheduling Problem, European Journal of Operational Research, 439- 446. Gupta, J.N.D., Das, S.R., and Ghosh, S., 1995, Flowshop Scheduling with Sequence Dependent Set-up Times, Working Paper, Department of Management, Ball State University, Muncie, IN. Ham, I., Hitomi, K., and Yoshida, T., 1985, Group Technology, Kluwer-Nijhoff Publishing, Boston, MA. Hansen, P., 1986, The Steepest Ascent Mildest Decent Heuristic for Combinatorial Programming, Conference on Numerical Methods in Combinatorial Optimization, Capri, Italy. Helal, M., and Rabelo, L., 2004, Investigating Group-Scheduling Heuristics in the Context of the Two-phase Nature of the Model in a Flow Cell, Proceedings(CD- ROM), 12th Annual Industrial Engineering Research Conference (IERC), Houston, TX, May 16-19. Hitomi, K., and Ham, I., 1976, Operations Scheduling for Group Technology Applications, CIRP Annals, 25, 419-422. Johnson, S.M., 1954, Optimal Two and Three Stage Production Scheduling with Set-up times Included, Naval Research Logistic Quarterly, 1, 1, 6 1-68. Jordan, C., 1996, Batching and Scheduling: Models and Methods for Several Problem Classes, Springer, Berlin, Germany. Lageweg, B.J, Lenstra, J.K, Rinnooy Kan, A.H.G, 1978, A General Bounding Scheme for the Permutation Flowshop Problem, Operations Research, 26, 1, 53-67. Laguna, M., Barnes, J.W., and Glover, F., 1991, Tabu Search Methods for a Single Machine Scheduling Problem, Journal of International Manufacturing, 2, 63-74. Logendran, R., 2002, Class Notes for Design and Scheduling of Cellular Manufacturing Systems (1E564), Oregon State University. Logendran, R, Salmasi, N., and Sriskandarajah, C., 2006, Two-Machine Group Scheduling Problems in Discrete Parts Manufacturing with Sequence-Dependent Setups, Journal of Computers and Operations Research, , 33, 158-180. Logendran, R, and Sonthinen, A., 1997, A Tabu search-based Approach for Scheduling Job-shop Type Flexible Manufacturing Systems, Journal of the Operational Research Society, 48, 264-277. Logendran, R, and Sriskandarajah, C., 1993, Two Machine Group Scheduling Problem with Blocking and Anticipatory Set-ups, European Journal of Operational Research, 69, 3, 467-481. Logendran, R, and Subur, F., 2004, Unrelated Parallel Machine Scheduling with Job Splitting, TIE Transaction, 36, 3, 359-372. Lubbecke, M.E., and Desrosiers, J, 2004, Selected Topics in Column Generation, Les Cahiers de GERAD G-2002-64, Group for Research in Decision Analysis, Montreal, Canada,. To appear in Operations Research. Montgomery, D.C., 2001, Design and Analysis of Experiments, New York, John Wiley & Sons. Nawaz, M, Enscore, E, and Ham, I, 1983, A Heuristic Algorithm for the rn-Machine, n- Job Flow-shop Sequencing Problem, Omega, International Journal of Management Science, 11, 1,91-95. Nowicki, E., and Smutnicki C., 1996, A Fast Tabu Search Algorithm for the permutation Flow-shop Problem, European Journal of Operations Research, 91, 160-175. 181 Panwalker, SS., Dudek, R.A., and Smith M.L., 1973, Sequencing Research and the Industrial Scheduling Problem, Symposium on the Theory of Scheduling and its Applications, 29-38. Parthasarathy, S., and Rajendran, C., 1997, A Simulated Annealing Heuristic for Scheduling to Minimize Weighted Tardiness in a Flowshop with Sequence Dependent Set-up Time of Jobs- A Case Study, Production Planning and Control, 8, 5, 475-483. Petrov, V.A., 1968, Flowline Group Production Planning, Business Publications, London, United Kingdom. Pinedo, M., 2002, Scheduling Theory Algorithms and Systems, Prentice Hall. Pham, D.T., and Karaboga, D., 1998, Intelligent Optimization Techniques, Springer. Proust, C., Gupta, J.N.D., and Deschamps, V., 1991, Flowshop scheduling with Set-up, Processing and Removal Time Separated, International Journal of Production Research, 29,3, 479-493. Reddy, V., and Narendran, T.T, 2003, Heuristics for Scheduling Sequence Dependent Set-up Jobs in Flow Line Cells, International Journal of Production Research, 41, 1, 193-206. Reeves, C.R., 1993, Modern Heuristic Techniques for Combinatorial Problems, John Wiley & Sons. Rios-Mercado, R.Z., and Bard, J.F., 1998, Computational Experience with a Branch- and-Cut Algorithm for Flowshop Scheduling with Set-ups, Computers and Operations Research, 25, 5, 35 1-366. Rios-Mercado, R.Z., and Bard, J.F., 1999, A Branch and Bound Algorithm for Flowshop Scheduling with Set-up Times, TIE Transaction, 31, 8, 721-731. Schaller, J.E., 2000, A Compansion of Heuristic for Family and Job Scheduling in a Flow-line Manufacturing Cell, International Journal of Production Research, 28, 2, 287-308. Schaller, J.E., Gupta, J.N.D., and Vakharia, A.J., 1997, Group Scheduling with Sequence Dependent Set-ups, Proceedings of the Annual Decision Science Institute Meeting, San Diego, CA, 1141-1143. Schaller, J.E., Gupta, J.N.D., and Vakharia, A.J., 2000, Scheduling a Flowline Manufacturing Cell with Sequence Dependent Family Setup Times, European Journal of Operational Research, 125, 324-339. 182 Simons, J.V, 1992, Heuristic in Flowshop Scheduling with Sequence Dependent Setup Time, Omega, 20, 2, 215-225. Skorin-Kapov, J., and Vakharia, A.J., 1993, Scheduling a Flow-Line Manufacturing Cell: A Tabu Search Approach, International Journal of Production Research, 31, 7, 1721-1734. Sridhar, J and Rajendran, C., 1994, A Genetic Algorithm for Family and Job Scheduling a Flow-Line Based Manufacturing Cell, Computers and Industrial Engineering, 27, 1-4, 469-472. Srikar, B.N., and Ghosh, S., 1986, A MLP Model for the n-job, M-stage Flowshop, with Sequence Dependent Set-up Times, International Journal of Production Research, 24, 6, 1459-1472. Stafford, E.F., and Tseng, F.T, 1990, On the Srikar-Ghosh MILP Model for the N*M SDST Flowshop Problem, International Journal of Production Research, 28, 10, 18 17-1830. Taillard, E., 1990, Some Efficient Heuristic Methods for the Flowshop Sequencing Problem, European Journal of Operational Research, 47, 65-74. Vakharia, A.J., and Chang, Y.L., 1990, a Simulated Annealing Approach to scheduling a Manufacturing Cell, Naval Research Logistics, 37, 6, 559-577. Vakharia, A.J., Schaller, J.E., and Gupta, J.N.D., 1995, Designing and Scheduling Manufacturing Cells, Proceeding of the iNFORMS National Meeting, New Orleans, LA. Widmar, M., and Hertz, A., 1989, A New Heuristic Method for the Flowshop Sequencing Problem, European Journal of Operational Research, 41, 186-193. Wilhelm, W.E., 2001, A Technical Review of Colunm Generation in Integer Programming, Optimization and Engineering, 2, 159-200. Wilhelm, W.E., Damodaran, P., and LI, J., 2003, Prescribing the Content and Timing of Product Upgrade, TIE Transactions, 35,647-663. Wortman, D.B., 1992, Managing Capacity: Getting the most from your Firm's Assets, md Eng, 24, 47-49. Yoshida, T., and Hitomi, K., 1979, Optimal Two-Stage Production Scheduling with Setup Times Separated, AITE Transactions, 11, 261-263. 183 APPENDICES 184 APPENDIX A: THE ANOVA AND TEST OF EFFECT SLICES TABLES FOR THE RESULT CHAPTER Table A. 1 The ANOVA table for two machine problem by considering minimization of makespan for time spent comparison Type 3 Tests of Fixed Effects I'Jum Den Effect DF DF F Value Pr > F G 2 0 106.94 <.0001 J 2 27 0.78 0.4701 Il 2 27 1.02 0.3735 A 2 135 203.99 <.0001 I 1 135 4.69 0.0321 G*J 4 0 13.99 0.5503 G*Rl 4 0 18.41 0.4140 G*A 4 135 203.99 <.0001 G*I 2 135 4.69 0.0107 J*R1 4 27 0.30 0.876]. J*A 4 135 1.55 0.1912 2 135 2.60 0.0782 R1*A 4 135 5.79 0.0002 R1*I 2 135 0.50 0.6053 A*I 2 135 6.09 0.0029 G*J*R1 8 27 0.30 0.9600 G*J*A 8 135 1.55 0.1457 G*J*I 4 135 2.60 0.0391 G*R1*A 8 135 5.79 <.0001 G*R1*I 4 135 0.50 0.7329 G*A*I 4 135 6.09 0.0002 J*R1*A 8 135 1.93 0.0606 J*R1*I 4 135 0.16 0.9604 J*A*I 4 135 0.85 0.4943 R1*A*I 4 135 0.68 0.6082 G*J*R1*A 16 135 1.93 0.0227 G*J*R1*I 8 135 0.16 0.9960 G*J*A*I 8 135 0.85 0.5582 G*R1*A*I 8 135 0.68 0.7100 J*R1*A*I 8 135 0.59 0.7839 G*J*R1*A*I 16 135 0.59 0.8863 185 Table A.2 Test of effect slices for two machine problem by considering minimization of makespan for time spent comparison The Mixed Procedure Differences of Least Squares Means Effect G J Ri A _G J Ri _A _Pr > Jt Adjustment G*J*Ri*A 1 1 1 1 1 1 1 2 1.0000 Tukey-Kramer G*J*R1*A 1 1 1 1 1 1 1 3 1.0000 Tukey-Kramer G*J*R1*A 1 1 1 2 1 1 1 3 1.0000 Tukey-Kramer G*J*Ri*A 1 1 2 1 1 1 2 2 1.0000 Tukey-Kramer G*J*R1*A 1 1 2 1 1 1 2 3 1.0000 Tukey-Kramer G*J*R1*A 1 1 2 2 1 1 2 3 1.0000 Tukey-Kramer G*J*R1*A 1 1 3 1 1 1 3 2 1.0000 Tukey-Kramer G*J*R1*A 1 1 3 1 1 1 3 3 1.0000 Tukey-Kramer G*J*R1*A 1 1 3 2 1 1 3 3 1.0000 Tukey-Kramer G*J*R1*A 1 2 1 1 1 2 1 2 1.0000 Tukey-Kramer G*J*R1*A 1 2 1 1 1 2 1 3 1.0000 Tukey-Kramer G*J*R1*A 1 2 1 2 1 2 1 3 1.0000 Tukey-Kramer G*J*R1*A 1 2 2 1 1 2 2 2 1.0000 Tukey-Kramer G*J*R1*A 1 2 2 1 1 2 2 3 1.0000 Tukey-Kramer G*J*R1*A 1 2 2 2 1 2 2 3 1.0000 Tukey-Kramer G*J*R1*A 1 2 3 1 1 2 3 2 1.0000 Tukey-Kramer G*J*Ri*A 1 2 3 1 1 2 3 3 1.0000 Tukey-Kramer G*J*R1*A 1 2 3 2 1 2 3 3 1.0000 Tukey-Kramer G*J*R1*A 1 3 1 1 1 3 1 2 1.0000 Tukey-Kramer G*J*R1*A 1 3 1 1 1 3 1 3 1.0000 Tukey-Kramer G*J*R1*A 1 3 1 2 1 3 1 3 1.0000 Tukey-Kramer G*J*Ri*A 1 3 2 1 1 3 2 2 1.0000 Tukey-Kramer G*J*R1*A 1 3 2 1 1 3 2 3 1.0000 Tukey-Kramer G*J*R1*A 1 3 2 2 1 3 2 3 1.0000 Tukey-Kramer G*J*R1*A 1 3 3 1 1 3 3 2 1.0000 Tukey-Kramer G*J*Ri*A 1 3 3 1 1 3 3 3 1.0000 Tukey-Kramer G*J*R1*A 1 3 3 2 1 3 3 3 1.0000 Tukey-Kramer G*J*R1*A 2 1 1 1 2 1 1 2 1.0000 Tukey-Kramer G*J*R1*A 2 1 1 1 2 1 1 3 1.0000 Tukey-Kramer G*J*R1*A 2 1 1 2 2 1 1 3 1.0000 Tukey-Kranier G*J*R1*A 2 1 2 1 2 1 2 2 1.0000 Tukey-Kramer G*J*R1*A 2 1 2 1 2 1 2 3 1.0000 Tukey-Kramer Q*J*R1*A 2 1 2 2 2 1 2 3 1.0000 Tukey-Kramer G*J*R1*A 2 1 3 1 2 1 3 2 1.0000 Tukey-Kramer G*J*Ri*A 2 1 3 1 2 1 3 3 1.0000 Tukey-Kramer G*J*R1*A 2 1 3 2 2 1 3 3 1.0000 Tukey-Kramer G*J*Ri*A 2 2 1 1 2 2 1 2 1.0000 Tukey-Kramer G*J*R1*A 2 2 1 1 2 2 1 3 1.0000 Tukey-Kramer G*J*Ri*A 2 2 1 2 2 2 1 3 1.0000 Tukey-Kramer G*J*R1*A 2 2 2 1 2 2 2 2 1.0000 Tukey-Kramer G*J*R1*A 2 2 2 1 2 2 2 3 1.0000 Tukey-Kramer G*J*R1*A 2 2 2 2 2 2 2 3 1.0000 Tukey-Kramer G*J*R1*A 2 2 3 1 2 2 3 2 1.0000 Tukey-Kramer G*J*R1*A 2 2 3 1 2 2 3 3 1.0000 Tukey-Kramer G*J*Ri*A 2 2 3 2 2 2 3 3 1.0000 Tukey-Kramer Ils Table A.2 (Continued) Test of effect slices for two machine problem by considering minimization of makespan for time spent comparison The Mixed Procedure Differences of Least Squares Means Effect G J Ri A G J Ri A Pr > ti Adjustment G*J*R1*A 2 3 1 1 2 3 1 2 1.0000 Tukey-Kramer G*J*Ri*A 2 3 1 1 2 3 1 3 1.0000 Tukey-Kramer G*J*R1*A 2 3 1 2 2 3 1 3 1.0000 Tukey-Kramer G*J*Ri*A 2 3 2 1 2 3 2 2 1.0000 Tukey-Kramer G*J*Ri*A 2 3 2 1 2 3 2 3 1.0000 Tukey-Kramer G*J*R1*A 2 3 2 2 2 3 2 3 1.0000 Tukey-Kramer G*J*Ri*A 2 3 3 1 2 3 3 2 1.0000 Tukey-Kramer G*J*Ri*A 2 3 3 1 2 3 3 3 1.0000 Tukey-Kramer G*J*Ri*A 2 3 3 2 2 3 3 3 1.0000 Tukey-Kramer G*J*R1*A 3 1 1 1 3 1 1 2 <.0001 Tukey-Xramer G*J*R1*A 3 1 1 1 3 1 1 3 <.000]. Tukey-Xramer G*J*R1*A 3 1 1 2 3 1 1 3 0.2124 Tukey-Kramer G*J*R1*A 3 1 2 1 3 1 2 2 <.0001 Tukey-Iraiuer G*J*R1*A 3 1 2 1 3 1 2 3 <.0001 Tukey-Krainer G*J*R1*A 3 1 2 2 3 1 2 3 0.2124 Tukey-Kramer G*J*R1*A 3 1 3 1 3 1 3 2 <.0001 Tukey-Xramer G*J*R1*A 3 1 3 1 3 1 3 3 <.0001 Tukey-Kramer G*J*R1*A 3 1 3 2 3 1 3 3 0.0134 Tukey-Kramer G*J*R1*A 3 2 1 1 3 2 1 2 <.0001 Tukey-Kramer G*J*R1*A 3 2 1 1 3 2 1 3 <.0001 Tukey-Krainer G*J*R1*A 3 2 1 2 3 2 1 3 0.0134 Tukey-Kramer G*J*R1*A 3 2 2 1 3 2 2 2 <.0001 Tukey-Kramer G*.J*R1*A 3 2 2 1 3 2 2 3 <.0001 Tukey-Kraiuer G*J*R1*A 3 2 2 2 3 2 2 3 1.0000 Tukey-Kramer G*J*R1*A 3 2 3 1 3 2 3 2 <.0001 Tukey-ICramer G*J*R1*A 3 2 3 1 3 2 3 3 <.0001 Tukey-Kramer G*J*R1*A 3 2 3 2 3 2 3 3 0.0003 Tukey-Kramer G**R1*I 3 3 1 1 3 3 1 2 <.0001 Tukey-Kraiuer G*J*R1*A 3 3 1 1 3 3 1 3 <.000]. Tukey-Kramer G*J*R1*A 3 3 1 2 3 3 1 3 0.2124 Tukey-Kramer G*J*R1*A 3 3 2 1 3 3 2 2 <.0001 Tukey-Kramer G*J*Ri*A 3 3 2 1 3 3 2 3 <.0001 Tukey-ICraiuer G*J*R1*A 3 3 2 2 3 3 2 3 1.0000 Tukey-Kramer G*J*R1*A 3 3 3 1 3 3 3 2 <.0001 Tukey-Kramer G*J*R1*A 3 3 3 1 3 3 3 3 <.0001 Tukey-raiuer G*J*R1*A 3 3 3 2 3 3 3 3 0.2124 Tukey-Kramer 187 Table A.2 (Continued) Test of effect slices for two machine problem by considering minimization of makespan for time spent comparison The Mixed Procedure Differences of Least Squares Means Effect G J I _G J I Pr > ti Adjustment G*J*I 1 1 1 1 1 2 1.0000 Tukey-Kramer G*J*I 1 2 1 1 2 2 1.0000 Tukey-Kramer G*J*I 1 3 1 1 3 2 1.0000 Tukey-Kramer G*J*I 2 1 1 2 1 2 1.0000 Tukey-Kramer G*J*I 2 2 1 2 2 2 1.0000 Tukey-Kramer G*J*I 2 3 1 2 3 2 1.0000 Tukey-Kramer G*J*I 3 1 1 3 1 2 0.2396 Tukey-Kramer G*J*I 3 2 1 3 2 2 1.0000 Tukey-Kramer G*J*I 3 3 1 3 3 2 <.0001 Tukey-Kramer Table A.3 The ANOVA table for three machine problem by considering minimization of makespan for algorithm comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 360.42 <.0001 J 2 0 57.00 <.0001 Rl 2 0 16.96 <.0001 R2 2 0 10.54 <.0001 A 2 405 14.97 <.0001 I 1 405 1.20 0.2732 G*J 4 0 3.87 0.0063 G*Rl 4 0 1.74 0.1490 G*R2 4 0 2.56 0.0444 G*A 4 405 2.77 0.0270 G*I 2 405 4.58 0.0108 J*R1 4 0 0.46 0.7651 J*R2 4 0 0.16 0.9576 J*A 4 405 0.67 0.6122 2 405 0.94 0.3911 R1*R2 4 0 0.58 0.6813 R1*A 4 405 1.14 0.3354 R1*I 2 405 4.71 0.0095 R2*A 4 405 0.90 0.4623 R2*I 2 405 1.04 0.3546 A*I 2 405 0.15 0.8649 G*J*Rl 8 0 0.28 0.9720 G*J*R2 8 0 0.13 0.9979 G*J*A 8 405 0.33 0.9539 G*J*I 4 405 6.37 <.0001 G*R1*R2 8 0 0.94 0.4906 G*R1*A 8 405 0.21 0.9898 G*R1*I 4 405 0.45 0.7722 G*R2*A 8 405 0.90 0.5130 G*R2*I 4 405 5.01 0.0006 G*A*I 4 405 0.20 0.9399 J*R1*R2 8 0 0.39 0.9253 J*R1*A 8 405 0.74 0.6580 J*1fl*I 4 405 1.63 0.1654 J*R2*A 8 405 0.48 0.8678 J*R2*I 4 405 5.85 0.0001 J*A*I 4 405 0.45 0.7754 Rl*R2*A 8 405 0.36 0.9430 R1*R2*I 4 405 0.92 0.4542 R1*A*I 4 405 0.26 0.9014 R2*A*I 4 405 0.37 0.8271 G*J*R1*R2 16 0 0.81 0.6686 G*J*R1*A 16 405 0.69 0.8090 G*J*R1*I 8 405 1.60 0.1242 G*J*R2*A 16 405 0.17 0.9999 G*J*R2*I 8 405 3.36 0.0010 Table A.3 (Continued) The ANOVA table for three machine problem by considering minimization of makespan for algorithm comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G*J*A*I 8 405 0.78 0.6205 G*Rl*R2*A 16 405 0.51 0.9402 G*R1*R2*I 8 405 3.73 0.0003 G*Rl*A*I 8 405 0.17 0.9951 G*R2*A*I 8 405 0.32 0.9589 J*R1*R2*J 16 405 0.31 0.9953 J*Rl*R2*I 8 405 1.65 0.1078 J*Rl*A*I 8 405 0.13 0.9979 J*R2*A*I 8 405 0.53 0.8365 R1*R2*A*I 8 405 0.11 0.9990 G*J*R1*R2*A 32 405 0.33 0.9998 G*J*R].*R2*I 16 405 2.14 0.0063 G*J*Rl*A*I 16 405 0.33 0.9943 G*J*R2*A*I 16 405 0.22 0.9995 G*R1*R2*A*I 16 405 0.48 0.9569 J*R1*R2*A*I 16 405 0.67 0.8243 G*J*R1*R2*A*I 32 405 0.53 0.9837 190 Table A.4 Test of effect slices for three machine problem by considering minimization of makespan for algorithm comparison Differences of Least Squares Means Effect G A G A Pr > t Adjustment Adj P G*A 1 1 1 2 0.0400 Tukey-Kramer 0.5024 G*A 1 1 1 3 0.1668 Tukey-Kramer 0.9031 G*A 1 2 1 3 0.5000 Tukey-Kramer 0.9991 G*A 2 1 2 2 0.0992 Tukey-Kramer 0.7748 G*A 2 1 2 3 0.7011 Tukey-Kramer 1.0000 G*A 2 2 2 3 0.2053 Tukey-Kramer 0.9398 G*A 3 1 3 2 <.0001 Tukey-Kramer <.0001 G*A 3 1 3 3 0.0684 Tukey-Kramer 0.6640 G*A 3 2 3 3 0.0001 Tukey-Kramer 0.0043 191 Table A.4 (Continued) Test of effect slices for three machine problem by considering minimization of makespan for algorithm comparison Effect G J Ri R2 I G J R1R2 I Pr > ti Adju Adj P G*J*R1*R2*I 1 1 1 1 1 1 1 1 1 2 0.4593 T- K 1.0000 G*J*Ri*R2*I 1 1 1 2 1 1 1 1 2 2 0.7672 T- K 1.0000 G*J*R1*R2*I 1 1 1 3 1 1 1 1 3 2 1.0000 T- K 1.0000 G*J*R1*R2*I 1 1 2 1 1 1 1 2 1 2 0.3746 T- K 1.0000 G*J*Ri*R2*I 1 1 2 2 1 1 1 2 2 2 1.0000 T- K 1.0000 G*J*R1*R2*I 1 1 2 3 1 1 1 2 3 2 0.7672 T- K 1.0000 G*J*R1*R2*I 1 1 3 1 1 1 1 3 1 2 0.4593 T- K 1.0000 G*J*R1*R2*I 1 1 3 2 1 1 1 3 2 2 1.0000 T- K 1.0000 G*J*Ri*R2*I 1 1 3 3 1 1 1 3 3 2 1.0000 T- K 1.0000 G*J*R1*R2*I 1 2 1 1 1 1 2 1 1 2 0.6570 T- K 1.0000 G*J*R1*R2*I 1 2 1 2 1 1 2 1 2 2 0.7672 T- K 1.0000 G*J*R1*R2*I 1 2 1 3 1 1 2 2 1 2 0.7734 T- K 1.0000 G*J*Ri*R2*I 1 2 2 1 1 1 2 2 2 1 0.9694 T- K 1.0000 G*J*R1*R2*I 1 2 2 2 1 1 2 2 2 2 0.0004 T- K 0.6903 G*J*R1*R2*I 1 2 2 3 1 1 2 2 3 2 0.4898 T- K 1.0000 G*J*R1*R2*I 1 2 3 1 1 1 2 3 1 2 0.6570 T- K 1.0000 G*J*Ri*R2*I 1 2 3 2 1 1 2 3 2 2 0.3746 T- K 1.0000 G*J*R1*R2*I 1 2 3 3 1 1 2 3 3 2 0.5873 T- K 1.0000 G*J*Ri*R2*I 1 3 1 1 1 1 3 1 1 2 0.8823 T- K 1.0000 G*J*R1*R2*I 1 3 1 2 1 1 3 1 2 2 0.4593 T- K 1.0000 G*J*R1*R2*I 1 3 1 3 1 1 3 1 3 2 0.1999 T- K 1.0000 G*J*R1*R2*I 1 3 2 1 1 1 3 2 1 2 0.3004 T- K 1.0000 G*J*R1*R2*I 1 3 2 2 1 1 3 2 2 2 0.0940 T- K 1.0000 G*J*R1*R2*I 1 3 2 3 1 1 3 2 3 2 <.0001 T- K 0.0742 G*J*R1*R2*I 1 3 3 1 1 1 3 3 1 2 0.1676 T- K 1.0000 G*J*R1*R2*I 1 3 3 2 1 1 3 3 2 2 0.1529 T- K 1.0000 G*J*R1*R2*I 1 3 3 3 1 1 3 3 3 2 <.0001 T- K <.0001 G*J*R1*R2*I 2 1 1 1 1 2 1 1 1 2 1.0000 T- K 1.0000 G*J*R1*R2*I 2 1 1 2 1 2 1 1 2 2 1.0000 T- K 1.0000 G*J*R1*R2*I 2 1 1 3 1 2 1 1 3 2 1.0000 T- K 1.0000 G*J*Ri*R2*I 2 1 2 1 1 2 1 2 1 2 0.0684 T- K 1.0000 G*J*R1*R2*I 2 1 2 2 1 2 1 2 2 2 0.0304 T- K 1.0000 G*J*R1*R2*I 2 1 2 3 1 2 1 2 3 2 0.8823 T- K 1.0000 G*J*R1*R2*I 2 1 3 1 1 2 1 3 1 2 0.6930 T- K 1.0000 G*J*Ri*R2*I 2 1 3 2 1 2 1 3 2 2 0.1040 T- K 1.0000 G*J*R1*R2*I 2 1 3 3 1 2 1 3 3 2 0.2780 T- K 1.0000 G*J*R1*R2*I 2 2 1 1 1 2 2 1 1 2 0.5538 T- K 1.0000 G*J*R1*R2*I 2 2 1 2 1 2 2 1 2 2 0.8051 T- K 1.0000 G*J*Ri*R2*I 2 2 1 3 1 2 2 1 3 2 0.0847 T- K 1.0000 G*J*R1*R2*I 2 2 2 1 1 2 2 2 1 2 0.8051 T- K 1.0000 G*J*Ri*R2*I 2 2 2 2 1 2 2 2 2 2 0.7298 T- K 1.0000 G*J*R1*R2*I 2 2 2 3 1 2 2 2 3 2 0.3004 T- K 1.0000 G*J*R1*R2*I 2 2 3 1 1 2 2 3 1 2 0.6570 T- K 1.0000 G*J*Ri*R2*I 2 2 3 2 1 2 2 3 2 2 0.0684 T- K 1.0000 G*J*R1*R2*I 2 2 3 3 1 2 2 3 3 2 0.5873 T- K 1.0000 G*J*R1*R2*I 2 3 1 1 1 2 3 1 1 2 0.5538 T- K 1.0000 G*J*R1*R2*I 2 3 1 2 1 2 3 1 2 2 0.8435 T- K 1.0000 G*J*R1*R2*I 2 3 1 3 1 2 3 1 3 2 0.8823 T- K 1.0000 G*J*Ri*R2*I 2 3 2 1 1 2 3 2 1 2 1.0000 T- K 1.0000 192 Table A.4 (Continued) Test of effect slices for three machine problem by considering minimization of makespan for algorithm comparison Effect G J Ri R2 I G J R1R2 I Pr > ti Adju Adj P Nurn Den G*J*R1*R2*I 2 3 2 2 1 2 3 2 2 2 0.8823 T- K 1.0000 G*J*R1*R2*I 2 3 2 3 1 2 3 2 3 2 1.0000 T- K 1.0000 G*J*R1*R2*I 2 3 3 1 1 2 3 3 1 2 0.1266 T- K 1.0000 G*J*R1*R2*I 2 3 3 2 1 2 3 3 2 2 0.0023 T- K 0.9721 G*J*R1*R2*I 2 3 3 3 1 2 3 3 3 2 0.7672 T- K 1.0000 G*J*R1*R2*I 3 1 1 1 1 3 1 1 1 2 0.4593 T- K 1.0000 G*J*R1*R2*I 3 1 1 2 1 3 1 1 2 2 0.0020 T- K 0.9605 G*J*R1*R2*I 3 1 1 3 1 3 1 1 3 2 0.0023 T- K 0.9721 G*J*R1*R2*I 3 1 2 1 1 3 1 2 1 2 0.0023 T- K 0.9721 G*J*R1*R2*I 3 1 2 2 1 3 1 2 2 2 0.0140 T- K 1.0000 G*J*R1*R2*I 3 1 2 3 1 3 1 3 1 1 0.0256 T- K 1.0000 G*J*R1*R2*I 3 1 3 1 1 3 1 3 1 2 0.9214 T- K 1.0000 G*J*R1*R2*I 3 1 3 2 1 3 1 3 2 2 0.9606 T- K 1.0000 G*J*R1*R2*I 3 1 3 3 1 3 1 3 3 2 0.0940 T- K 1.0000 G*J*R1*R2*I 3 2 1 1 1 3 2 1 1 2 0.0236 T- K 1.0000 G*J*R1*R2*I 3 2 1 2 1 3 2 1 2 2 0.0762 T- K 1.0000 G*J*R1*R2*I 3 2 1 3 1 3 2 1 3 2 0.4300 T- K 1.0000 G*J*R1*R2*I 3 2 2 1 1 3 2 2 1 2 0.1832 T- K 1.0000 G*J*R1*R2*I 3 2 2 2 1 3 2 2 2 2 0.6930 T- K 1.0000 G*J*R1*R2*I 3 2 2 3 1 3 2 2 3 2 0.6570 T- K 1.0000 G*J*R1*R2*I 3 2 3 1 1 3 2 3 1 2 0.0940 T- K 1.0000 G*J*R1*R2*I 3 2 3 2 1 3 2 3 2 2 0.0489 T- K 1.0000 G*J*R1*R2*I 3 2 3 3 1 3 2 3 3 2 0.7298 T- K 1.0000 G*J*R1*R2*I 3 3 1 1 1 3 3 1 1 2 <.0001 T- K 0.1936 G*J*R1*R2*I 3 3 1 2 1 3 3 1 2 2 0.0038 T- K 0.9918 G*J*R1*R2*I 3 3 1 3 1 3 3 1 3 2 0.6217 T- K 1.0000 G*J*R1*R2*I 3 3 2 1 1 3 3 2 1 2 0.3746 T- K 1.0000 G*J*R1*R2*I 3 3 2 2 1 3 3 2 2 2 0.5873 T- K 1.0000 G*J*R1*R2*I 3 3 2 3 1 3 3 2 3 2 0.9214 T- K 1.0000 G*J*R1*R2*I 3 3 3 1 1 3 3 3 1 2 0.1393 T- K 1.0000 G*J*R1*R2*I 3 3 3 2 1 3 3 3 2 2 0.0343 T- K 1.0000 G*J*R1*R2*I 3 3 3 3 1 3 3 3 3 2 0.8823 T- K 1.0000 193 Table A.5 The ANOVA table for three machine problem by considering minimization of makespan for time spent comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 81 84.46 <.0001 J 2 81 20.64 <.0001 Ri 2 81 0.26 0.7734 R2 2 81 0.15 0.8626 A 2 405 157.48 <.0001 I 1 405 1.70 0.1929 Q*J 4 81 17.94 <.0001 G*R1 4 81 0.31 0.8737 G*R2 4 81 0.12 0.9743 G*A 4 405 140.03 <.0001 G*I 2 405 1.46 0.2328 J*R1 4 81 0.56 0.6902 J*R2 4 81 0.32 0.8670 J*A 4 405 32.80 <.0001 J* 2 405 0.80 0.4516 R1*R2 4 81 1.79 0.1388 Ri*A 4 405 0.54 0.7097 Ri*I 2 405 0.19 0.8281 R2*A 4 405 0.38 0.8244 R2*I 2 405 1.17 0.3128 A*I 2 405 0.44 0.6432 G*J*R1 8 81 0.55 0.8131 G*J*R2 8 81 0.25 0.9799 G*J*A 8 405 28.73 <.0001 G*J*I 4 405 0.68 0.6047 G*R1*R2 8 81 1.73 0.1050 G*R1*A 8 405 0.50 0.8572 G*R1*I 4 405 0.21 0.9321 G*R2*A 8 405 0.35 0.9436 G*R2*I 4 405 1.19 0.3144 G*A*I 4 405 0.36 0.8366 J*R1*R2 8 81 0.95 0.4833 J*R1*A 8 405 1.33 0.2247 J*R1*I 4 405 0.15 0.9637 J*R2*A 8 405 0.89 0.5217 J*R2*I 4 405 0.54 0.7078 J** 4 405 0.18 0.9481 R1*R2*A 8 405 2.93 0.0034 R1*R2*I 4 405 0.26 0.9009 Ri*A*I 4 405 0.18 0.9464 R2*A*I 4 405 0.46 0.7622 G*J*R1*R2 16 81 0.81 0.6690 G*J*Ri*A 16 405 1.23 0.2392 G*J*R1*I 8 405 0.15 0.9964 G*J*R2*A 16 405 0.73 0.7624 G*J*R2*I 8 405 0.55 0.8175 194 Table A.5 (Continued) The ANOVA table for three machine problem by considering minimization of makespan for time spent comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G*J*A*I 8 405 0.16 0.9959 G*R1*R2*A 16 405 2.82 0.0002 G*R1*R2*I 8 405 0.28 0.9721 G*R1*A*I 8 405 0.17 0.9942 G*R2*A*I 8 405 0.47 0.8759 J*R1*R2*A 16 405 1.63 0.0572 J*R1*R2*I 8 405 0.23 0.9856 J*R1*A*I 8 405 0.14 0.9970 J*R2*A*I 8 405 0.23 0.9860 R1*R2*A*I 8 405 0.15 0.9967 G*J*R1*R2*A 32 405 1.43 0.0650 G*J*R1*R2*I 16 405 0.23 0.9994 G*J*R1*A*I 16 405 0.14 1.0000 195 Table A.6 The ANOVA table for six machine problem by considering minimization of makespan for algorithm comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 253.77 <.0001 J 2 0 6.97 0.0036 Rl 2 0 270.48 <.0001 A 2 135 8.41 0.0004 I 1 135 0.94 0.3344 G*J 4 0 3.99 0.0114 G*R1 4 0 37.48 <.0001 G*A 4 135 4.02 0.0041 G*I 2 135 5.57 0.0047 J*Jj 4 0 2.34 0.0804 J*A 4 135 0.35 0.8408 J*] 2 135 0.82 0.4443 Rl*A 4 135 1.02 0.3992 R1*I 2 135 0.37 0.6915 A*I 2 135 0.75 0.4763 G*J*R1 8 0 0.83 0.5810 G*J*A 8 135 0.75 0.6474 G*J*I 4 135 2.00 0.0987 G*R1*A 8 135 0.54 0.8264 G*R].*I 4 135 4.09 0.0037 G*A*I 4 135 0.46 0.7668 J*R1*A 8 135 0.31 0.9610 J*R1*I 4 135 3.42 0.0107 J*A*I 4 135 0.08 0.9873 R1*A*I 4 135 0.51 0.7270 G*J*R1*A 16 135 0.45 0.9662 G*J*R1*I 8 135 3.08 0.0032 G*J*A*I 8 135 0.18 0.9935 G*R1*A*I 8 135 0.58 0.7968 J*Rl*A*I 8 135 0.37 0.9336 G*J*Rl*A*I 16 135 0.21 09995 196 Table A.7 Test of effect slices for six machine problem by considering minimization of makespan for the algorithm comparison Differences of Least Squares Means Effect G J Ri A I G J Ri A I Adj Adj P G*A 1 1 1 2 T-K 0.9974 G*A 1 1 1 3 T-K 1.0000 G*A 1 2 1 3 T-K 0.9999 G*A 2 1 2 2 T-K 0.9990 G*A 2 1 2 3 T-K 1.0000 G*A 2 2 2 3 T-K 0.9995 G*A 3 1 3 2 T-K 0.0004 G*A 3 1 3 3 T-K 0.9995 G*A 3 2 3 3 T-K <.0001 G*J*Ri*I 1 1 1 1 1 1 1 2 T-K 1.0000 G*J*Ri*I 1 1 2 1 1 1 2 2 T-K 1.0000 G*J*R1*I 1 1 3 1 1 1 3 2 T-K 1.0000 G*J*Ri*I 1 2 1 1 1 2 1 2 T-K 0.9960 G*J*R1*I 1 2 2 1 1 2 2 2 T-K 1.0000 G*J*Ri*I 1 2 3 1 1 2 3 2 T-K 1.0000 G*J*Ri*I 1 3 1 1 1 3 1 2 T-K 1.0000 G*J*Ri*I 1 3 2 1 1 3 2 2 T-K 1.0000 G*J*R1*I 1 3 3 1 1 3 3 2 T-K 1.0000 G*J*R1*I 2 1 1 1 2 1 1 2 T-K 1.0000 G*J*R1*I 2 1 2 1 2 1 2 2 T-K 1.0000 G*J*R1*I 2 1 3 1 2 1 3 2 T-K 0.7765 G*J*Ri*I 2 2 1 1 2 2 1 2 T-K 1.0000 G*J*R1*I 2 2 2 1 2 2 2 2 T-K 1.0000 G*J*Ri*I 2 2 3 1 2 2 3 2 T-K 1.0000 G*J*R1*I 2 3 1 1 2 3 1 2 T-K 1.0000 G*J*R1*I 2 3 2 1 2 3 2 2 T-K 1.0000 G*J*R1*I 2 3 3 1 2 3 3 2 T-K 0.9870 G*J*R1*I 3 1 1 1 3 1 1 2 T-K 0.4396 G*J*R1*I 3 1 2 1 3 1 2 2 T-K 1.0000 G*J*R1*I 3 1 3 1 3 1 3 2 T-K 1.0000 G*J*Ri*I 3 2 1 1 3 2 1 2 T-K 0.9939 G*J*Ri*I 3 2 2 1 3 2 2 2 T-K 0.9990 G*J*Ri*I 3 2 3 1 3 2 3 2 T-K 1.0000 G*J*Ri*I 3 3 1 1 3 3 1 2 T-K 1.0000 G*J*R1*I 3 3 2 1 3 3 2 2 T-K 0.9910 G*J*R1*I 3 3 3 1 3 3 3 2 T-K 0.0021 197 Table A.8 The ANOVA table for six machine problem by considering minimization of makespan for time spent comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 153.58 <.0001 J 2 0 43.91 <.0001 Ri 2 0 10.50 0.0004 A 2 135 84.52 <.0001 I 1 135 5.09 0.0256 G*J 4 0 36.80 <.0001 G*R1 4 0 10.27 <.0001 G*A 4 135 75.91 <.0001 G*I 2 135 4.86 0.0092 J*R1 4 0 4.97 0.0039 J*A 4 135 20.72 <.0001 J*I 2 135 5.99 0.0032 R1*A 4 135 5.03 0.0008 R1*I 2 135 2.25 0.1090 A*I 2 135 1.30 0.2761 G*J*R1 8 0 4.91 0.0008 G*J*A 8 135 17.67 <.0001 G*J*I 4 135 5.76 0.0003 G*R1*A 8 135 4.92 <.0001 G*R1*I 4 135 2.06 0.0891 G*I*I 4 135 1.23 0.3009 J*R1*A 8 135 2.33 0.0223 J*R1*I 4 135 2.19 0.0729 4 135 1.61 0.1749 R1*A*I 4 135 0.54 0.7062 G*J*R1*A 16 135 2.28 0.0055 G*J*R1*I 8 135 2.01 0.0493 G*J*A*I 8 135 1.56 0.1430 G*R1*A*I 8 135 0.49 0.8595 8 135 0.65 0.7322 G*J*R1*A*I 16 135 0.62 0.8636 Table A.9 Test of effect slices for six machine problem by considering minimization of makespan for time spent comparison Differences of Least Squares Means Effect G J Ri Al G J Ri_A_I Adju AdjP G*J*Ri*A 1 1 1 1 1 1 1 2 T-K 1.0000 G*J*R1*A 1 1 1 1 1 1 1 3 T-K 1.0000 G*J*R1*A 1 1 1 2 1 1 1 3 T-K 1.0000 G*J*R1*A 1 1 2 1 1 1 2 2 T-K 1.0000 G*J*R1*A 1 1 2 1 1 1 2 3 T-K 1.0000 G*J*Ri*A 1 1 2 2 1 1 2 3 T-K 1.0000 G*J*R1*A 1 1 3 1 1 1 3 2 T-K 1.0000 G*J*Ri*A 1 1 3 1 1 1 3 3 T-K 1.0000 G*J*R1*A 1 1 3 2 1 1 3 3 T-K 1.0000 G*J*R1*A 1 2 1 1 1 2 1 2 T-K 1.0000 G*J*R1*A 1 2 1 1 1 2 1 3 T-K 1.0000 G*J*R1*A 1 2 1 2 1 2 1 3 T-K 1.0000 G*J*R1*A 1 2 2 1 1 2 2 2 T-K 1.0000 G*J*R1*A 1 2 2 1 1 2 2 3 T-K 1.0000 G*J*R1*A 1 2 2 2 1 2 2 3 T-K 1.0000 G*J*R1*A 1 2 3 1 1 2 3 2 T-K 1.0000 G*J*R1*A 1 2 3 1 1 2 3 3 T-K 1.0000 G*J*R1*A 1 2 3 2 1 2 3 3 T-K 1.0000 G*J*R1*A 1 3 1 1 1 3 1 2 T-K 1.0000 G*J*R1*A 1 3 1 1 1 3 1 3 T-K 1.0000 G*J*R1*A 1 3 1 2 1 3 1 3 T-K 1.0000 G*J*R1*A 1 3 2 1 1 3 2 2 T-K 1.0000 G*J*R1*A 1 3 2 1 1 3 2 3 T-K 1.0000 G*J*R1*A 1 3 2 2 1 3 2 3 T-K 1.0000 G*J*R1*A 1 3 3 1 1 3 3 2 T-K 1.0000 G*J*R1*A 1 3 3 1 1 3 3 3 T-K 1.0000 G*J*R1*A 1 3 3 2 1 3 3 3 T-K 1.0000 G*J*R1*A 2 1 1 1 2 1 1 2 T-K 1.0000 G*J*Ri*A 2 1 1 1 2 1 1 3 T-K 1.0000 G*J*R1*A 2 1 1 2 2 1 1 3 T-K 1.0000 G*J*R1*A 2 1 2 1 2 1 2 2 T-K 1.0000 G*J*R1*A 2 1 2 1 2 1 2 3 T-K 1.0000 G*J*R1*A 2 1 2 2 2 1 2 3 T-K 1.0000 G*J*R1*A 2 1 3 1 2 1 3 2 T-K 1.0000 G*J*R1*A 2 1 3 1 2 1 3 3 T-K 1.0000 G*J*R1*A 2 1 3 2 2 1 3 3 T-K 1.0000 G*J*R1*A 2 2 1 1 2 2 1 2 T-K 1.0000 G*J*R1*A 2 2 1 1 2 2 1 3 T-K 1.0000 G*J*R1*A 2 2 1 2 2 2 1 3 T-K 1.0000 G*J*R1*A 2 2 2 1 2 2 2 2 T-K 1.0000 G*J*R1*A 2 2 2 1 2 2 2 3 T-K 1.0000 G*J*R1*A 2 2 2 2 2 2 2 3 T-K 1.0000 G*J*R1*A 2 2 3 1 2 2 3 2 T-K 1.0000 G*J*R1*A 2 2 3 1 2 2 3 3 T-K 1.0000 199 Table A.9 (Continued) Test of effect slices for six machine problem by considering minimization of makespan for time spent comparison Differences of Least Squares Means Effect G J Ri A I G J Ri A I Adju Adj P G*J*Ri*A 2 2 3 2 2 2 3 3 T-K 1.0000 G*J*R1*A 2 3 1 1 2 3 1 2 T-K 1.0000 G*J*R1*A 2 3 1 1 2 3 1 3 T-K 1.0000 G*J*R1*A 2 3 1 2 2 3 1 3 T-K 1.0000 G*J*R1*A 2 3 2 1 2 3 2 2 T-K 1.0000 G*J*R1*A 2 3 2 1 2 3 2 3 T-K 1.0000 G*J*R1*A 2 3 2 2 2 3 2 3 T-K 1.0000 G*J*R1*A 2 3 3 1 2 3 3 2 T-K 1.0000 G*J*R1*A 2 3 3 1 2 3 3 3 T-K 1.0000 G*J*R1*A 2 3 3 2 2 3 3 3 T-K 1.0000 G*J*R1*A 3 1 1 1 3 1 1 2 T-K 1.0000 G*J*R1*A 3 1 1 1 3 1 1 3 T-K 1.0000 G*J*R1*A 3 1 1 2 3 1 1 3 T-K 1.0000 G*J*Ri*A 3 1 2 1 3 1 2 2 T-K 1.0000 G*J*R1*A 3 1 2 1 3 1 2 3 T-K 1.0000 G*J*Ri*A 3 1 2 2 3 1 2 3 T-K 1.0000 G*J*Ri*A 3 1 3 1 3 1 3 2 T-K 1.0000 G*J*R1*A 3 1 3 1 3 1 3 3 T-K 1.0000 G*J*R1*A 3 1 3 2 3 1 3 3 T-K 1.0000 G*J*R1*A 3 2 1 1 3 2 1 2 T-K 0.1564 G*J*R1*A 3 2 1 1 3 2 1 3 T-K 0.1576 G*J*R1*A 3 2 1 2 3 2 1 3 T-K 1.0000 G*J*R1*A 3 2 2 1 3 2 2 2 T-K <.0001 G*J*R1*A 3 2 2 1 3 2 2 3 T-K <.0001 G*J*R1*A 3 2 2 2 3 2 2 3 T-K 1.0000 G*J*R1*A 3 2 3 1 3 2 3 2 T-K <.0001 G*J*R1*A 3 2 3 1 3 2 3 3 T-K <.0001 G*J*R1*A 3 2 3 2 3 2 3 3 T-K 1.0000 G*J*R1*A 3 3 1 1 3 3 1 2 T-K <.0001 G*J*R1*A 3 3 1 1 3 3 1 3 T-K <.0001 G*J*R1*A 3 3 1 2 3 3 1 3 T-K 1.0000 G*J*R1*A 3 3 2 1 3 3 2 2 T-K <.0001 G*J*R1*A 3 3 2 1 3 3 2 3 T-K <.0001 G*J*R1*A 3 3 2 2 3 3 2 3 T-K 1.0000 G*J*R1*A 3 3 3 1 3 3 3 2 T-K <.0001 G*J*R1*A 3 3 3 1 3 3 3 3 T-K <.0001 G*J*R1*A 3 3 3 2 3 3 3 3 T-K 0.9995 200 Table A.9 (Continued) Test of effect slices for six machine problem by considering minimization of makespan for time spent comparison Effect G J Ri I G _J Ri -I adju Adj P G*J*R1*I 1 1 1 1 1 1 1 2 T-K 1.0000 G*J*R1*I 1 1 2 1 1 1 2 2 T-K 1.0000 G*J*R1*I 1 1 3 1 1 1 3 2 T-K 1.0000 G*J*R1*I 1 2 1 1 1 2 1 2 T-K 1.0000 G*J*R1*I 1 2 2 1 1 2 2 2 T-K 1.0000 G*J*R1*I 1 2 3 1 1 2 3 2 T-K 1.0000 G*J*R1*I 1 3 1 1 1 3 1 2 T-K 1.0000 G*J*R1*I 1 3 2 1 1 3 2 2 T-K 1.0000 G*J*R1*I 1 3 3 1 1 3 3 2 T-K 1.0000 G*J*R1*I 2 1 1 1 2 1 1 2 T-K 1.0000 G*J*R1*I 2 1 2 1 2 1 2 2 T-K 1.0000 G*J*R1*I 2 1 3 1 2 1 3 2 T-K 1.0000 G*J*R1*I 2 2 1 1 2 2 1 2 T-K 1.0000 G*J*R1*I 2 2 2 1 2 2 2 2 T-K 1.0000 G*J*R1*I 2 2 3 1 2 2 3 2 T-K 1.0000 G*J*R1*I 2 3 1 1 2 3 1 2 T-K 1.0000 G*J*R1*I 2 3 2 1 2 3 2 2 T-K 1.0000 G*J*R1*I 2 3 3 1 2 3 3 2 T-K 1.0000 G*J*R1*I 3 1 1 1 3 1 1 2 T-K 1.0000 G*J*R1*I 3 1 2 1 3 1 2 2 T-K 1.0000 G*J*R1*I 3 1 3 1 3 1 3 2 T-K 1.0000 G*J*R1*I 3 2 1 1 3 2 1 2 T-K 1.0000 G*J*R1*I 3 2 2 1 3 2 2 2 T-K 1.0000 G*J*R1*I 3 2 3 1 3 2 3 2 T-K 1.0000 G*J*R1*I 3 3 1 1 3 3 1 2 T-K 0.0013 G*J*R1*I 3 3 2 1 3 3 2 2 T-K 1.0000 G*J*R1*I 3 3 3 1 3 3 3 2 T-K <.0001 201 Table A. 10 The ANOVA table for two machine problem by considering minimization of sum of the completion times for algorithm comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 55.66 <.0001 J 2 0 20.87 <.0001 Rl 2 0 0.49 0.6189 A 2 135 3.31 0.0394 I 1 135 2.81 0.0960 G*J 4 0 7.13 0.0005 G*Rl 4 0 0.61 0.6584 G*A 4 135 2.87 0.0256 G*I 2 135 1.57 0.2110 J*R1 4 0 0.90 0.4798 J*A 4 135 0.48 0.7475 2 135 1.87 0.1579 R1*A 4 135 0.11 0.9803 Rl*I 2 135 1.97 0.1439 A*I 2 135 0.08 0.9231 G*J*Rl 8 0 0.19 0.9901 G*J*A 8 135 0.60 0.7776 G*J*I 4 135 1.33 0.2606 G*Rl*A 8 135 0.10 0.9991 G*Rl*I 4 135 2.26 0.0661 G*A*I 4 135 0.12 0.9751 J*R1*A 8 135 0.41 0.9125 J*R1*I 4 135 0.90 0.4653 J*A*I 4 135 0.27 0.8998 R1*A*I 4 135 0.16 0.9581 G*J*R1*A 16 135 0.42 0.9750 G*J*R1*I 8 135 1.04 0.4063 G*J*A*I 8 135 0.41 0.9149 G*R1*A*I 8 135 0.22 0.9873 J*R1*A*I 8 135 0.42 0.9098 G*J*R1*A*I 16 135 0.35 0.9910 202 Table A. 11 Test of effect slices for two machine problem by considering minimization of sum of the completion times for algorithm comparison Differences of Least Squares Means Standard Effect G A _G _A Estimate Error DF Pr > ti Adju Adj G*A 1 1 1 2 8.6944 105.74 135 0.9346 T-K 1.0000 G*A 1 1 1 3 55.0833 105.74 135 0.6033 T-K 0.9999 G*A 1 2 1 3 46.3889 105.74 135 0.6616 T-K 1.0000 G*A 2 1 2 2 51.8056 105.74 135 0.6250 T-K 0.9999 G*A 2 1 2 3 30.0278 105.74 135 0.7769 T-K 1.0000 G*A 2 2 2 3 -21.7778 105.74 135 0.8371 T-K 1.0000 G*A 3 1 3 2 383.81 105.74 135 0.0004 T-K 0.0117 G*A 3 1 3 3 0.6389 105.74 135 0.9952 T-K 1.0000 G*A 3 2 3 3 -383.17 105.74 135 0.0004 T-K 0.0119 203 Table A.12 The ANOVA table for two machine problem by considering minimization of sum of the completion times for time spent comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 69.11 <.0001 J 2 0 22.41 <.0001 Ri 2 0 2.32 0.1176 A 2 135 106.04 <.0001 I 1 135 0.01 0.9170 G*J 4 0 18.24 <.0001 G*Ri 4 0 3.02 0.0351 G*A 4 135 79.70 <.0001 G*I 2 135 0.07 0.9298 J*R1 4 0 1.69 0.1821 J*A 4 135 23.87 <.0001 J*J 2 135 0.08 0.9242 R1*A 4 135 4.94 0.0009 R1*I 2 135 1.53 0.2194 A*I 2 135 0.78 0.4589 G*J*Ri 8 0 1.14 0.3714 G*J*A 8 135 19.71 <.0001 G*J*I 4 135 0.05 0.9951 G*R1*A 8 135 5.96 <.0001 G*Ri*I 4 135 1.45 0.2213 G*A*I 4 135 1.01 0.4040 J*R1*A 8 135 3.12 0.0029 J*R1*I 4 135 1.29 0.2784 J*A*I 4 135 0.49 0.7455 Ri*A*I 4 135 0.92 0.4539 G*J*R1*A 16 135 2.51 0.0022 G*J*R1*I 8 135 1.77 0.0876 G*J*A*I 8 135 0.49 0.8603 G*R1*A*I 8 135 0.84 0.5725 J*R1*A*I 8 135 0.62 0.7599 G*J*R1*A*I 16 135 0.73 0.7551 204 Table A. 13 Test of effect slices for two machine problem by considering minimization of sum of the completion times for time spent comparison Effect G J Ri A G J Ri A Estimate DF t Value Pr >ItI Adjus G*J*R1*A 1 1 1 1 1 1 1 2 -367E-14 135 -0.00 1.0000 T-K G*J*Rl*A 1 1 1 1 1 1 1 3 -368E-14 135 -0.00 1.0000 T-K G*J*R1*A 1 1 1 2 1 1 1 3 -711E-17 135 -0.00 1.0000 T-K G*J*Rl*A 1 1 2 1 1 1 2 2 -305E-14 135 -0.00 1.0000 T-K G*J*Rl*A 1 1 2 1 1 1 2 3 -466E-14 135 -0.00 1.0000 T--K G*J*Rl*A 1 1 2 2 1 1 2 3 -161E-14 135 -0.00 1.0000 T-K G*J*Rl*A 1 1 3 1 1 1 3 2 -269E-14 135 -0.00 1.0000 T-K G*J*Rl*A 1 1 3 1 1 1 3 3 -318E-14 135 -0.00 1.0000 T-K G*J*R1*A 1 1 3 2 1 1 3 3 -49E-14 135 -0.00 1.0000 T-K G*J*R1*A 1 2 1 1 1 2 1 2 -195E-14 135 -0.00 1.0000 T-K G*J*Rl*A 1 2 1 1 1 2 1 3 -237E-14 135 -0.00 1.0000 T-K G*J*R1*A 1 2 1 2 1 2 1 3 -419E-15 135 -0.00 1.0000 T-K G*J*R1*A 1 2 2 1 1 2 2 2 -127E-14 135 -0.00 1.0000 T-K G*J*Rl*A 1 2 2 1 1 2 2 3 -269E-14 135 -0.00 1.0000 T-K G*J*Rl*A 1 2 2 2 1 2 2 3 -141E-14 135 -0.00 1.0000 T-K G*J*R1*A 1 2 3 1 1 2 3 2 -419E-15 135 -0.00 1.0000 T-K G*J*R1*A 1 2 3 1 1 2 3 3 8.95E-13 135 0.00 1.0000 T-K G*J*R1*A 1 2 3 2 1 2 3 3 1.31E-12 135 0.00 1.0000 T-K G*J*Ri*A 1 3 1 1 1 3 1 2 -249E-15 135 -0.00 1.0000 T-K G*J*R1*A 1 3 1 1 1 3 1 3 -711E-15 135 -0.00 1.0000 T-K G*J*R1*A 1 3 1 2 1 3 1 3 -462E-15 135 -0.00 1.0000 T-K G*J*R1*A 1 3 2 1 1 3 2 2 -249E-15 135 -0.00 1.0000 T-K G*J*R1*A 1 3 2 1 1 3 2 3 3.75E-12 135 0.00 1.0000 T-K G*J*R1*A 1 3 2 2 1 3 2 3 4E-12 135 0.00 1.0000 T-K G*J*R1*A 1 3 3 1 1 3 3 2 -201E-14 135 -0.00 1.0000 T-K G*J*R1*A 1 3 3 1 1 3 3 3 -585E-14 135 -0.00 1.0000 T-K G*J*R1*A 1 3 3 2 1 3 3 3 -384E-14 135 -0.00 1.0000 T-K G*J*Rl*A 2 1 1 1 2 1 1 2 -2.2500 135 -0.03 0.9761 T-K G*J*Rl*A 2 1 1 1 2 1 1 3 -2.2500 135 -0.03 0.9761 T-K G*J*R1*A 2 1 1 2 2 1 1 3 4.69E-13 135 0.00 1.0000 T-K G*J*R1*A 2 1 2 1 2 1 2 2 -7.5000 135 -0.10 0.9204 T-K G*J*Rl*A 2 1 2 1 2 1 2 3 -8.0000 135 -0.11 0.9151 T-K G*J*R1*A 2 1 2 2 2 1 2 3 -0.5000 135 -0.01 0.9947 T-K G*J*R1*A 2 1 3 1 2 1 3 2 -10.7500 135 -0.14 0.8861 T-K G*J*R1*A 2 1 3 1 2 1 3 3 -10.5000 135 -0.14 0.8887 T-K G*J*R1*A 2 1 3 2 2 1 3 3 0.2500 135 0.00 0.9973 T-K G*J*R1*A 2 2 1 1 2 2 1 2 -140.75 135 -1.88 0.0623 T-K G*J*R].*A 2 2 1 1 2 2 1 3 -165.75 135 -2.2]. 0.0286 T-K G*J*Rl*A 2 2 1 2 2 2 1 3 -25.0000 135 -0.33 0.7390 T-K G*J*Rl*A 2 2 2 1 2 2 2 2 -40.0000 135 -0.53 0.5941 T-K G*J*R1*A 2 2 2 1 2 2 2 3 -65.0000 135 -0.87 0.3869 T-K G*J*R1*A 2 2 2 2 2 2 2 3 -25.0000 135 -0.33 0.7390 T-K G*J*Rl*A 2 2 3 1 2 2 3 2 -20.2500 135 -0.27 0.7873 T-K G*J*R1*A 2 2 3 1 2 2 3 3 -24.5000 135 -0.33 0.7441 T-K G*J*R1*A 2 2 3 2 2 2 3 3 -4.2500 135 -0.06 0.9548 T-K G*J*Rl*A 2 3 1 1 2 3 1 2 -36.2500 135 -0.48 0.6291 T-K 205 Table A. 13 (Continued) Test of effect slices for two machine problem by considering minimization of sum of the completion times for time spent comparison Effect G J Ri A G J Ri A Estimate DF t Value Pr >t Adjus G*J*Rl*A 2 3 1 1 2 3 1 3 -54.0000 135 -0.72 0.4721 T-K G*J*Ri*A 2 3 1 2 2 3 1 3 -17.7500 135 -0.24 0.8130 T-K G*J*Ri*A 2 3 2 1 2 3 2 2 -91.7500 135 -1.23 0.2226 T-K G*J*Ri*A 2 3 2 1 2 3 2 3 -128.50 135 -1.72 0.0885 T-K G*J*Rl*A 2 3 2 2 2 3 2 3 -36.7500 135 -0.49 0.6244 T-K G*J*Ri*A 2 3 3 1 2 3 3 2 -31.2500 135 -0.42 0.6771 T-K G*J*Rl*A 2 3 3 1 2 3 3 3 -41.0000 135 -0.55 0.5849 T-K G*J*Ri*A 2 3 3 2 2 3 3 3 -9.7500 135 -0.13 0.8966 T-K G*J*R1*A 3 1 1 1 3 1 1 2 -27.7500 135 -0.37 0.7115 T-K G*J*R1*A 3 1 1 1 3 1 1 3 -24.7500 135 -0.33 0.7415 T-K G*J*R1*A 3 1 1 2 3 1 1 3 3.0000 135 0.04 0.9681 T-K G*J*R1*A 3 1 2 1 3 1 2 2 -122.00 135 -1.63 0.1056 T-K G*J*R1*A 3 1 2 1 3 1 2 3 -123.25 135 -1.65 0.1021 T-K G*J*R1*A 3 1 2 2 3 1 2 3 -1.2500 135 -0.02 0.9867 T-K G*J*R1*A 3 1 3 1 3 1 3 2 -120.50 135 -1.61 0.1099 T-K G*J*R1*A 3 1 3 1 3 1 3 3 -123.00 135 -1.64 0.1028 T-K G*J*R1*A 3 1 3 2 3 1 3 3 -2.5000 135 -0.03 0.9734 T-K G*J*R1*A 3 2 1 1 3 2 1. 2 -401.25 135 -5.36 <.0001 T-K G*J*R1*A 3 2 1 1 3 2 1 3 -342.75 135 -4.58 <.0001 T-K G*J*R1*A 3 2 1 2 3 2 1 3 58.5000 135 0.78 0.4361 T-K G*J*R1*A 3 2 2 1 3 2 2 2 -388.00 135 -5.18 <.0001 T-K G*J*R1*A 3 2 2 1 3 2 2 3 -414.25 135 -5.53 <.0001 T-K G*J*R1*A 3 2 2 2 3 2 2 3 -26.2500 135 -0.35 0.7265 T-K G*J*R1*A 3 2 3 1 3 2 3 2 -503.50 135 -6.72 <.0001 T-K G*J*R1*A 3 2 3 1 3 2 3 3 -749.25 135 -10.01 <.0001 T-K G*J*R1*A 3 2 3 2 3 2 3 3 -245.75 135 -3.28 0.0013 T-I( G*J*R1*A 3 3 1 1 3 3 1 2 -727.00 135 -9.71 <.0001 T-K G*J*R1*A 3 3 1 1 3 3 1 3 -360.25 135 -4.81 <.0001 T-K G*J*R1*A 3 3 1 2 3 3 1 3 366.75 135 4.90 <.0001 T-K G*J*R1*A 3 3 2 1 3 3 2 2 -1119.50 135 -14.95 <.0001 T-K G*J*R1*A 3 3 2 1 3 3 2 3 -1346.25 135 -17.98 <.0001 T-K G*J*R1*A 3 3 2 2 3 3 2 3 -226.75 135 -3.03 0.0030 T-K G*J*R1*A 3 3 3 1 3 3 3 2 -828.50 135 -11.06 <.0001 T-K G*J*R1*A 3 3 3 1 3 3 3 3 -1169.25 135 -15.61 <.0001 T-K G*J*R1*A 3 3 3 2 3 3 3 3 -340.75 135 -4.55 <.0001 T-K 206 Table A.14 The ANOVA table for three machine problem by considering minimization of sum of the completion times for algorithm comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 221.45 <.0001 J 2 0 73.91 <.0001 Ri 2 0 1.88 0.1591 R2 2 0 1.35 0.2653 A 2 405 3.17 0.0429 I 1 405 38.41 <.0001 G*J 4 0 17.39 <.0001 G*Rl 4 0 0.61 0.6534 G*R2 4 0 0.70 0.5933 G*A 4 405 1.19 0.3157 G*I 2 405 21.90 <.0001 J*Rl 4 0 1.25 0.2975 J*R2 4 0 0.16 0.9563 J*A 4 405 0.75 0.5577 J* 2 405 2.27 0.1046 Rl*R2 4 0 1.22 0.3093 R1*A 4 405 0.55 0.6973 R1*I 2 405 0.41 0.6665 R2*A 4 405 0.15 0.9607 R2*I 2 405 1.88 0.1534 A*I 2 405 0.07 0.9323 G*J*Rl 8 0 0.83 0.5772 G*J*R2 8 0 0.18 0.9934 G*J*A 8 405 0.37 0.9367 G*J*I 4 405 3.17 0.0139 G*Ri*R2 8 0 1.52 0.1635 G*R1*A 8 405 0.58 0.7947 G*Rl*I 4 405 0.87 0.4803 G*R2*A 8 405 0.22 0.9873 G*R2*I 4 405 0.80 0.5265 G*A*I 4 405 0.14 0.9691 J*Rl*R2 8 0 0.52 0.8390 J*R1*A 8 405 0.88 0.5349 J*R1*I 4 405 1.45 0.2163 J*R2*A 8 405 0.29 0.9687 J*R2*I 4 405 2.75 0.0281 J*A*I 4 405 0.79 0.5299 R1*R2*A 8 405 0.42 0.9082 R1*R2*I 4 405 3.46 0.0085 Rl*A*I 4 405 0.19 0.9434 R2*A*I 4 405 0.65 0.6248 G*J*Rl*R2 16 0 1.03 0.4335 G*J*Rl*A 16 405 0.98 0.4770 G*J*R1*I 8 405 2.87 0.0041 G*J*R2*P 16 405 0.25 0.9988 207 Table A.14 (Continued) The ANOVA table for three machine problem by considering minimization of sum of the completion times for algorithm comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G*J*R2*I 8 405 3.62 0.0004 G*J*A*I 8 405 1.18 0.3087 G*Rl*R2*A 16 405 0.28 0.9978 G*R1*R2*I 8 405 3.23 0.0014 G*Rl*A*I 8 405 0.14 0.9973 G*R2*A*I 8 405 0.78 0.6249 J*Rl*R2*A 16 405 0.53 0.9292 J*R1*R2*I 8 405 1.07 0.3820 J*R1*A*I 8 405 0.21 0.9898 J*R2*A*I 8 405 0.88 0.5319 R1*R2*A*I 8 405 0.88 0.5346 G*J*R1*R2*A 32 405 0.52 0.9870 G*J*R1*R2*I 16 405 1.19 0.2738 G*J*R1*A*I 16 405 0.36 0.9894 G*J*R2*A*I 16 405 1.12 0.3372 G*R1*R2*A*I 16 405 0.68 0.8116 J*R1*R2*A*I 16 405 0.47 0.9594 G*J*R1*R2*A*I 32 405 0.48 0.9927 Table A. 15 Test of effect slices for three machine problem by considering minimization of sum of the completion times for the algorithm comparison Effect G J El R2 I _G J El _R2 I Adju Adj G*J*R1*R2*I 1 1 1 1. 1 1 1 1 1 T-K 2 1.0000 G*J*R1*R2*I 1 1 1 2 1 1 1. 1 2 T-K 2 1.0000 G*J*R1*R2*I 1 1 1 3 1 1 1 1 3 T-K 2 1.0000 G*J*R1*R2*I 1 1 2 1 1 1 1 2 1 T-K 2 1.0000 G*J*R1*R2*I 1 1 2 2 1 1 1 2 2 T-K 2 1.0000 G*J*R1*R2*I 1 1 2 3 1 1 1 2 3 T-K 2 1.0000 G*J*R1*R2*I 1 1 3 1 1 1 1 3 1 T-K 2 1.0000 G*J*R1*R2*I 1 1 3 2 1 1 1 3 2 T-K 2 1.0000 G*J*R1*R2*I 1 1 3 3 1 1 1 3 3 T-K 2 1.0000 G*J*R1*R2*I 1 2 1 1 1 1 2 1 1 T-K 2 1.0000 G*J*R1*R2*I 1 2 1 2 1 1 2 1 2 T-K 2 1.0000 G*J*R1*R2*I 1 2 1 3 1 1 2 1 3 T-K 2 1.0000 G*J*R1*R2*I 1 2 2 1 1 1 2 2 1 T-K 2 1.0000 G*J*R1*R2*I 1 2 2 2 1 1 2 2 2 2 T-K 1.0000 G*J*R1*R2*I 1 2 2 3 1 1 2 2 3 2 T-K 1.0000 G*J*R1*R2*I 1 2 3 1 1 1 2 3 1 2 T-K 1.0000 G*J*R1*R2*I 1 2 3 2 1 1 2 3 2 2 T-K 1.0000 G*J*R1*R2*I 1 2 3 3 1 1 2 3 3 2 T-K 1.0000 G*J*R1*R2*I 1 3 1 1 1 1 3 1 1 2 T-K 1.0000 G*J*R1*R2*I 1 3 1 2 1 1 3 1 2 2 T-K 1.0000 G*J*R1*R2*I 1 3 1 3 1 1 3 1 3 2 T-K 1.0000 G*J*R1*R2*I 1 3 2 1 1 1 3 2 1 2 T-K 1.0000 G*J*R1*R2*I 1 3 2 2 1 1 3 2 2 2 T-K 1.0000 G*J*R1*R2*I 1 3 2 3 1 1 3 2 3 2 T-K 1.0000 G*J*R1*R2*I 1 3 3 1 1 1 3 3 1 2 T-K 1.0000 G*J*R1*R2*I 1 3 3 2 1 1 3 3 2 2 T-K 1.0000 G*J*R1*R2*I 1 3 3 3 1 1 3 3 3 2 T-K 1.0000 G*J*R1*R2*I 2 1 1 1 1 2 1 1 1 2 T-K 1.0000 G*J*R1*R2*I 2 1 1 2 1 2 1 1 2 2 T-K 1.0000 G*J*R1*R2*I 2 1 1 3 1 2 1 1 3 2 T-K 1.0000 G*J*R1*R2*I 2 1 2 1 1 2 1 2 1 2 T-K 1.0000 G*J*R1*R2*I 2 1 2 2 1 2 1 2 2 2 T-K 1.0000 G*J*R1*R2*I 2 1 2 3 1 2 1 2 3 2 T-K 1.0000 G*J*R1*R2*I 2 1 3 1 1 2 1 3 1 2 T-K 1.0000 G*J*R1*R2*I 2 1 3 2 1 2 1 3 2 2 T-K 1.0000 G*J*R1*R2*I 2 1 3 3 1 2 1 3 3 2 T-K 1.0000 G*J*R1*R2*I 2 2 1 1 1 2 2 1 1 2 T-K 1.0000 G*J*R1*R2*I 2 2 1 2 1 2 2 1 2 2 T-K 1.0000 G*J*R1*R2*I 2 2 1 3 1 2 2 1 3 2 T-K 1.0000 G*J*R1*R2*I 2 2 2 1 1 2 2 2 1 2 T-K 1.0000 G*J*R1*R2*I 2 2 2 2 1 2 2 2 2 2 T-K 1.0000 G*J*R1*R2*I 2 2 2 3 1 2 2 2 3 2 T-K 1.0000 G*J*R1*R2*I 2 3 1 1 1 2 3 1 1 2 T-K 1.0000 G*J*R1*R2*I 2 3 1 2 1 2 3 1 2 2 T-K 1.0000 G*J*R1*R2*I 2 3 1 3 1 2 3 1 3 2 T-K 1.0000 G*J*R1*R2*I 2 3 2 1 1 2 3 2 1 2 T-K 1.0000 G*J*R1*R2*I 2 3 2 2 1 2 3 2 2 2 T-K 1.0000 G*J*R1*R2*I 2 3 2 3 1 2 3 2 3 2 T-K 1.0000 G*J*R1*R2*I 2 3 3 1 1 2 3 3 1 2 T-K 1.0000 G*J*R1*R2*I 2 3 3 2 1 2 3 3 2 2 T-K 1.0000 G*J*R1*R2*I 2 3 3 3 1 2 3 3 3 2 T-K 1.0000 209 Table A. 15 (Continued) Test of effect slices for three machine problem by considering minimization of sum of the completion times for the algorithm comparison Effect G J Ri R2 I _G J Ri R2 I Adju Adj G*J*R1*R2*I 3 1 1 1 1 3 1 1 1 2 T-K 1.0000 G*J*R1*R2*I 3 1 1 2 1 3 1 1 2 2 T-K 1.0000 G*J*R1*R2*I 3 1 1 3 1 3 1 1 3 2 T-K 1.0000 G*J*R1*R2*I 3 1 2 1 1 3 1 2 1 2 T-K 1.0000 G*J*R1*R2*I 3 1 2 2 1 3 1 2 2 2 T-K 1.0000 G*J*R1*R2*I 3 1 2 3 1 3 1 2 3 2 T-K 1.0000 G*J*Ri*R2*I 3 1 3 1 1 3 1 3 1 2 T-K 1.0000 G*J*Ri*R2*I 3 1 3 2 1 3 1 3 2 2 T-K 1.0000 G*J*R1*R2*I 3 1 3 3 1 3 1 3 3 2 T-K 1.0000 G*J*R1*R2*I 3 2 1 1 1 3 2 1 1 2 T-K 0.2505 G*J*R1*R2*I 3 2 1 2 1 3 2 1 2 2 T-K 1.0000 G*J*R1*R2*I 3 2 1 3 1 3 2 1 3 2 T-K 1.0000 G*J*R1*R2*I 3 2 2 1 1 3 2 2 1 2 T-K <.0001 G*J*R1*R2*I 3 2 2 2 1 3 2 2 2 2 T-K 1.0000 G*J*R1*R2*I 3 2 2 3 1 3 2 2 3 2 T-K 1.0000 G*J*R1*R2*I 3 2 3 1 1 3 2 3 1 2 T-K 0.0161 G*J*Ri*R2*I 3 2 3 2 1 3 2 3 2 2 T-K 1.0000 G*J*R1*R2*I 3 2 3 3 1 3 2 3 3 2 T-K 0.0008 G*J*R1*R2*I 3 3 1 1 1 3 3 1 1 2 T-K 1.0000 G*J*R1*R2*I 3 3 1 2 1 3 3 1 2 2 T-K 1.0000 G*J*Ri*R2*I 3 3 1 3 1 3 3 1 3 2 T-K 1.0000 G*J*R1*R2*I 3 3 2 1 1 3 3 2 1 2 T-K <.0001 G*J*R1*R2*I 3 3 2 2 1 3 3 2 2 2 T-K 1.0000 G*J*Ri*R2*I 3 3 2 3 1 3 3 2 3 2 T-K 1.0000 G*J*R1*R2*I 3 3 3 1 1 3 3 3 1 2 T-K 0.1452 G*J*R1*R2*I 3 3 3 2 1 3 3 3 2 2 T-K 0.9465 G*J*Ri*R2*I 3 3 3 2 1 3 3 3 3 2 T-K 0.9991 210 Table A. 16 The ANOVA table for three machine problem by considering minimization of sum of the completion times for time spent comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 73.71 <.0001 J 2 0 21.50 <.0001 Ri 2 0 0.18 0.8377 R2 2 0 1.09 0.3412 A 2 405 140.35 <.0001 I 1 405 0.05 0.8320 G*J 4 0 16.69 <.0001 G*Rl 4 0 0.22 0.9249 G*R2 4 0 0.91 0.4613 G*A 4 405 110.12 <.0001 G*I 2 405 0.17 0.8425 J*R1 4 0 0.18 0.9502 J*R2 4 0 0.25 0.9088 J*A 4 405 29.84 <.0001 2 405 0.49 0.6125 R1*R2 4 0 2.45 0.0523 R1*A 4 405 0.89 0.4722 R1*I 2 405 0.55 0.5765 R2*A 4 405 3.24 0.0124 R2*I 2 405 0.02 0.9840 A*I 2 405 0.41 0.6632 G*J*R1 8 0 0.31 0.9596 G*J*R2 8 0 0.18 0.9932 G*J*A 8 405 23.36 <.0001 G*J*I 4 405 0.62 0.6460 G*R1*R2 8 0 2.18 0.0371 G*R1*A 8 405 0.86 0.5533 G*Rl*I 4 405 0.42 0.7910 G*R2*A 8 405 2.85 0.0043 G*R2*I 4 405 0.03 0.9981 G*A*I 4 405 0.48 0.7487 J*R1*R2 8 0 1.27 0.2728 J*R1*A 8 405 0.53 0.8336 J*R1*I 4 405 0.35 0.8435 J*R2*A 8 405 0.94 0.4813 J*R2*I 4 405 0.35 0.8445 4 405 0.14 0.9670 R1*R2*A 8 405 3.27 0.0013 R1*R2*I 4 405 0.37 0.8325 R1*A*I 4 405 0.18 0.9486 R2*A*I 4 405 0.05 0.9951 G*J*R1*R2 16 0 1.12 0.3546 G*J*Rl*A 16 405 0.75 0.7426 G*J*Ri*I 8 405 0.38 0.9291 G*J*R2*A 16 405 0.77 0.7201 G*J*R2*I 8 405 0.47 0.8778 211 Table A. 16 (Continued) The ANOVA table for three machine problem by considering minimization of sum of the completion times for time spent comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G*J*A*I 8 405 0.16 0.9961 G*Rl*R2*A 16 405 2.97 0.0001 G*Rl*R2*I 8 405 0.27 0.9744 G*Rl*A*I 8 405 0.22 0.9877 G*R2*A*I 8 405 0.06 0.9999 J*R1*R2*A 16 405 1.71 0.0424 J*Rl*R2*I 8 405 0.36 0.9417 J*Rl*A*I 8 405 0.31 0.9628 J*R2*A*I 8 405 0.19 0.9916 R1*R2*A*I 8 405 0.23 0.9859 G*J*R1*R2*A 32 405 1.56 0.0287 G*J*R1*R2*I 16 405 0.36 0.9902 G*J*R1*A*I 16 405 0.32 0.9947 G*J*R2*A*I 16 405 0.24 0.9991 G*R1*R2*A*I 16 405 0.26 0.9985 J*R1*R2*A*I 16 405 0.19 0.9998 G*J*Rl*R2*A*I 32 405 0.19 1.0000 212 Table A.17 The ANOVA table for six machine problem by considering minimization of sum of the completion times criterion for the algorithm comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 165.81 <.0001 J 2 0 39.29 <.0001 Ri 2 0 62.32 <.0001 A 2 135 0.48 0.6189 I 1 135 17.09 <.0001 T(G*J*Ri) 27 0 15.37 <.0001 G*J 4 0 27.44 <.0001 G*Ri 4 0 0.39 0.8135 G*A 4 135 0.39 0.8135 G*I 2 135 16.95 <.0001 J*Ri 4 0 6.75 0.0007 J*A 4 135 0.25 0.9104 2 135 1.62 0.2008 R1*A 4 135 0.04 0.9970 R1*I 2 135 9.93 <.0001 A*I 2 135 0.13 0.8811 G*J*R1 8 0 3.75 0.0045 G*J*A 8 135 0.22 0.9869 G*J*I 4 135 1.70 0.1545 G*Ri*A 8 135 0.04 1.0000 G*R1*I 4 135 9.43 <.0001 G*A*I 4 135 0.11 0.9782 J*R1*A 8 135 0.06 0.9999 J*Ri*I 4 135 3.85 0.0054 4 135 0.07 0.9910 Ri*A*I 4 135 0.14 0.9659 G*J*R1*A 16 135 0.07 1.0000 G*J*Rl*I 8 135 3.59 0.0008 G*J*A*I 8 135 0.06 0.9999 G*Rl*A*I 8 135 0.14 0.9970 J*R1*A*I 8 135 0.14 0.9970 G*J*Rl*A*I 16 135 0.15 1.0000 213 Table A. 18. Test of effect slices for six machine problem by considering minimization of sum of the completion times for the algorithm comparison Effect G J Ri I _G J Ri I Estimate DF t Value Pr>ItlAdju G*J*Rl*I 1 1 1 1 1 1 1 2 8.73E-12 135 0.00 1.0000 T-K G*J*Ri*I 1 1 2 1 1 1 2 2 26.0000 135 0.04 0.9706 T-K G*J*Ri*I 1 1 3 1 1 1 3 2 -14.1667 135 -0.02 0.9840 T-K G*J*Ri*I 1 2 1 1 1 2 1 2 2.57E-il 135 0.00 1.0000 T-K G*J*Ri*I 1 2 2 1 1 2 2 2 -4.8333 135 -0.01 0.9945 T-K G*J*Ri*I 1 2 3 1 1 2 3 2 4.0000 135 0.01 0.9955 T-K G*J*Ri*I 1 3 1 1 1 3 1 2 -822E-13 135 -0.00 1.0000 T-K G*J*R1*I 1 3 2 1 1 3 2 2 -27.3333 135 -0.04 0.9691 T-K G*J*R1*I 1 3 3 1 1 3 3 2 148.50 135 0.21 0.8334 T-K G*J*R1*I 2 1 1 1 2 1 1 2 2E-i1 135 0.00 1.0000 T-K G*J*R1*I 2 1 2 1 2 1 2 2 1.GGE-il 135 0.00 1.0000 T-K G*J*R1*I 2 1 3 1 2 1 3 2 -34.5000 135 -0.05 0.9610 T-K G*J*R1*I 2 2 1 1 2 2 1 2 93.5000 135 0.13 0.8946 T-K G*J*R1*I 2 2 2 1 2 2 2 2 29.3333 135 0.04 0.9668 T-K G*J*R1*I 2 2 3 1 2 2 3 2 115.33 135 0.16 0.8702 T-K G*J*R1*I 2 3 1 1 2 3 1 2 -768E-13 135 -0.00 1.0000 T-K G*J*R1*I 2 3 2 1 2 3 2 2 -315.33 135 -0.45 0.6551 T-K G*J*R1*I 2 3 3 1 2 3 3 2 23.5000 135 0.03 0.9734 T-K G*J*R1*I 3 1 1 1 3 1 1 2 1236.50 135 1.76 0.0815 T-K G*J*R1*I 3 1 2 1 3 1 2 2 387.83 135 0.55 0.5829 T-K G*J*R1*I 3 1 3 1 3 1 3 2 423.67 135 0.60 0.5486 T-K G*J*R1*I 3 2 1 1 3 2 1 2 7120.00 135 10.11 <.0001 T-K G*J*R1*I 3 2 2 1 3 2 2 2 -335.50 135 -0.48 0.6347 T-K G*J*R1*I 3 3 1 1 3 3 1 2 4116.67 135 5.84 <.0001 T-K G*J*R1*I 3 3 2 1 3 3 2 2 136.00 135 0.19 0.8472 T-K G*J*R1*I 3 3 3 1 3 3 3 2 3119.67 135 4.43 <.0001 T-K 214 Table A. 19 The ANOVA table for six machine problem by considering minimization of sum of the completion times criterion for time spent comparison Type 3 Tests of Fixed Effects Num Den Effect DF DF F Value Pr > F G 2 0 16.59 <.0001 J 2 0 4.22 0.0255 Ri 2 0 3.93 0.0317 A 2 135 24.83 <.0001 I 1 135 2.51 0.1152 G*J 4 0 3.38 0.0229 G*Rl 4 0 3.30 0.0253 G*A 4 135 20.72 <.0001 G*I 2 135 2.66 0.0737 J*R1 4 0 1.64 0.1925 J*A 4 135 5.07 0.0008 2 135 0.60 0.5485 R1*A 4 135 5.73 0.0003 Rl*I 2 135 2.19 0.1157 A*I 2 135 0.53 0.5897 G*J*R1 8 0 1.54 0.1913 G*J*A 8 135 4.32 0.0001 G*J*I 4 135 0.64 0.6359 G*R1*A 8 135 4.93 <.0001 G*R1*I 4 135 2.28 0.0642 G*A*I 4 135 0.48 0.7514 J*R1*A 8 135 2.80 0.0066 J*Ri*I 4 135 0.91 0.4622 J*A*I 4 135 0.14 0.9663 R1*A*I 4 135 0.59 0.6696 G*J*R1*A 16 135 2.68 0.0010 G*J*R1*I 8 135 0.91 0.5113 G*J*A*I 8 135 0.13 0.9977 G*R1*A*I 8 135 0.62 0.7585 J*R1*A*I 8 135 0.32 0.9570 G*J*R1*A*I 16 135 0.36 0.9896 215 Table A.20 Test of effect slices for six machine problem by considering minimization of sum of the completion times for time spent comparison Effect G J Ri A _G J Ri A Estimate DF t Value Pr>ItI adju G*J*Ri*A 1 1 1 1 1 1 1 2 8.35E-14 135 0.00 1.0000 T-K G*J*Rl*A 1 1 1 1 1 1 1 3 -341E-15 135 0.00 1.0000 T-K G*J*Rl*A 1 1 1 2 1 1 1 3 -425E-15 135 0.00 1.0000 T-K G*J*Rl*A 1 1 2 1 1 1 2 2 2.87E-12 135 0.00 1.0000 T-K G*J*Rl*A 1 1 2 1 1 1 2 3 l.25E-12 135 0.00 1.0000 T-K G*J*R1*A 1 1 2 2 1 1 2 3 -162E-14 135 0.00 1.0000 T-K G*J*Rl*A 1 1 3 1 1 1 3 2 -853E-16 135 0.00 1.0000 T-K G*J*R1*A 1 1 3 1 1 1 3 3 -102E-14 135 0.00 1.0000 T-K G*J*R1*A 1 1 3 2 1 1 3 3 -938E-15 135 0.00 1.0000 T-K G*J*R1*A 1 2 1 1 1 2 1 2 -0.2500 135 0.00 0.9994 T-K G*J*R1*A 1 2 1 1 1 2 1 3 1.58E-12 135 0.00 1.0000 T-K G*J*R1*A 1 2 1 2 1 2 1 3 0.2500 135 0.00 0.9994 T-K G*J*R1*A 1 2 2 1 1 2 2 2 -0.2500 135 0.00 0.9994 T-K G*J*R1*A 1 2 2 1 1 2 2 3 6.21E-12 135 0.00 1.0000 T-K G*J*R1*A 1 2 2 2 1 2 2 3 0.2500 135 0.00 0.9994 T-K G*J*R1*A 1 2 3 1 1 2 3 2 4.16E-12 135 0.00 1.0000 T-K G*J*R1*A 1 2 3 1 1 2 3 3 -135E-14 135 0.00 1.0000 T-K G*J*R1*A 1 2 3 2 1 2 3 3 -551E-14 135 0.00 1.0000 T-K G*J*R1*A 1 3 1 1 1 3 1 2 1.17E-12 135 0.00 1.0000 T-K G*J*R1*A 1 3 1 1 1 3 1 3 -432E-14 135 0.00 1.0000 T-K G*J*R1*A 1 3 1 2 1 3 1 3 -549E-14 135 0.00 1.0000 T-K G*J*R1*A 1 3 2 1 1 3 2 2 4.38E-12 135 0.00 1.0000 T-K G*J*R1*A 1 3 2 1 1 3 2 3 1.25E-12 135 0.00 1.0000 T-K G*J*R1*A 1 3 2 2 1 3 2 3 -313E-14 135 0.00 1.0000 T-K G*J*R1*A 1 3 3 1 1 3 3 2 1.08E-12 135 0.00 1.0000 T-K G*J*R1*A 1 3 3 1 1 3 3 3 1.8E-11 135 0.00 1.0000 T-K G*J*R1*A 1 3 3 2 1 3 3 3 1.69E-11 135 0.00 1.0000 T-K G*J*Ri*A 2 1 1 1 2 1 1 2 -13.2500 135 0.04 0.9691 T-K G*J*R1*A 2 1 1 1 2 1 1 3 -15.0000 135 -0.04 0.9650 T-K G*J*R1*A 2 1 1 2 2 1 1 3 -1.7500 135 -0.01 0.9959 T-K G*J*R1*A 2 1 2 1 2 1 2 2 -13.0000 135 -0.04 0.9696 T-K G*J*R1*A 2 1 2 1 2 1 2 3 -12.2500 135 -0.04 0.9714 T-K G*J*R1*A 2 1 2 2 2 1 2 3 0.7500 135 0.00 0.9982 T-K G*J*R1*A 2 1 3 1 2 1 3 2 -1.7500 135 -0.01 0.9959 T-K G*J*R1*A 2 1 3 1 2 1 3 3 -2.7500 135 -0.01 0.9936 T-K G*J*R1*A 2 1 3 2 2 1 3 3 -1.0000 135 -0.00 0.9977 T-K G*J*R1*A 2 2 1 1 2 2 1 2 -1.5000 135 -0.00 0.9965 T-K G*J*R1*A 2 2 1 1 2 2 1 3 -2.5000 135 -0.01 0.9942 T-K G*J*R1*A 2 2 1 2 2 2 1 3 -1.0000 135 -0.00 0.9977 T-K G*J*R1*A 2 2 2 1 2 2 2 2 -82.5000 135 -0.24 0.8092 T-K G*J*R1*A 2 2 2 1 2 2 2 3 -92.7500 135 -0.27 0.7860 T-K G*J*R1*A 2 2 2 2 2 2 2 3 -10.2500 135 -0.03 0.9761 T-K G*J*R1*A 2 2 3 1 2 2 3 2 -4.5000 135 -0.01 0.9895 T-K G*J*R1*A 2 2 3 1 2 2 3 3 -3.5000 135 -0.01 0.9918 T-K G*J*Ri*A 2 2 3 2 2 2 3 3 1.0000 135 0.00 0.9977 T-K G*J*R1*A 2 3 1 1 2 3 1 2 -57.5000 135 -0.17 0.8663 T-K G*J*R1*A 2 3 1 1 2 3 1 3 -58.7500 135 -0.17 0.8634 T-K G*J*R1*A 2 3 1 2 2 3 1 3 -1.2500 135 -0.00 0.9971 T-K 216 Table A.20 (Continued) Test of effect slices for six machine problem by considering minimization of sum of the completion times for time spent comparison Effect G J Ri AGJ Ri A Estimate DF t Value Pr>t adju G*J*R1*A 2 3 2 1 23 2 2 -255.50 135 -0.75 0.4549 T-K G*J*R1*A 2 3 2 1 23 2 3 -264.50 135 -0.78 0.4392 T-K G*J*Ri*A 2 3 2 2 23 2 3 -9.0000 135 -0.03 0.9790 T-K G*J*R1*A 2 3 3 1 23 3 2 -260.75 135 -0.76 0.4457 T-K G*J*Ri*A 2 3 3 1 23 3 3 -132.00 135 -0.39 0.6992 T-K G*J*R1*A 2 3 3 2 23 33 128.75 135 0.38 0.7063 T-K G*J*R1*A 3 1 1 1 31 1 2 -116.75 135 -0.34 0.7326 T-K G*J*R1*A 3 1 1 1 31 1 3 -136.00 135 -0.40 0.6906 T-K G*J*R1*A 3 1 1 2 31 1 3 -19.250 135 -0.06 0.9551 T-K G*J*R1*A 3 1 2 1 31 2 2 -154.00 135 -0.45 0.6522 T-K G*J*Ri*A 3 1 2 1 31 2 3 -160.00 135 -0.47 0.6396 T-K G*J*Ri*A 3 1 2 2 31 2 3 -6.0000 135 -0.02 0.9860 T-K G*J*R1*A 3 1 3 1 31 3 2 -59.000 135 -0.17 0.8629 T-K G*J*Ri*A 3 1 3 1 31 3 3 -55.500 135 -0.16 0.8709 T-K G*J*R1*A 3 1 3 2 31 33 3.5000 135 0.01 0.9918 T-K G*J*R1*A 3 2 1 1 3 2 1 2 -115.25 135 -0.34 0.7359 T-K G*J*R1*A 3 2 1 1 3 2 1 3 -163.75 135 -0.48 0.6318 T-K G*J*R1*A 3 2 1 2 32 1 3 -48.500 135 -0.14 0.8871 T-K G*J*R1*A 3 2 2 1 32 2 2 -1361.5 135 -3.99 0.0001 T-K G*J*R1*A 3 2 2 1 32 2 3 -2293.3 135 -6.73 <.0001 T-K G*J*R1*A 3 2 2 2 32 2 3 -931.75 135 -2.73 0.0071 T-K G*J*R1*A 3 2 3 1 32 3 2 -3568.5 135-10.47 <.0001 T-K G*J*R1*A 3 2 3 1 32 3 3 -2350.8 135 -6.90 <.0001 T-K G*J*R1*A 3 2 3 2 32 3 3 1217.75 135 3.57 0.0005 T-K G*J*R1*A 3 3 1 1 3 3 1 2 -87.250 135 -0.26 0.7984 T-K G*J*R1*A 3 3 1 1 3 3 1 3 -226.25 135 -0.66 0.5081 T-K G*J*R1*A 3 3 1 2 3 3 1 3 -139.00 135 -0.41 0.6841 T-K G*J*R1*A 3 3 2 1 33 2 2 -3148.5 135 -9.23 <.0001 T-K G*J*R1*A 3 3 2 1 33 2 3 -3576.8 135 -10.5 <.0001 T-K G*J*R1*A 3 3 2 2 33 2 3 -428.25 135 -1.26 0.2112 T-K G*J*R1*A 3 3 3 1 33 3 2 -1403.3 135 -4.12 <.0001 T-K G*J*IU*A 3 3 3 1 33 3 3 -1371.0 135 -4.02 <.0001 T-K G*J*R1*A 3 3 3 2 33 3 3 32.2500 135 0.09 0.9248 T-K 217 Appendix B. The Percentage Errors of Schaller et al. (2000) Algorithm Table B.l The percentage error of Schaller et al. (2000) algorithm for two machine problem rJ) eD - eD - 1. 0 0 0 e 0 0 - - -t eD 1 280 288 0.029 28 659 740 0.123 2 237 237 0.000 29 495 530 0.071 3 171 171 0.000 30 678 781 0.152 4 130 130 0.000 31 657 680 0.035 5 321 321 0.000 32 733 780 0.064 6 209 209 0.000 33 859 936 0.090 7 403 411 0.020 34 383 397 0.037 8 354 354 0.000 35 768 813 0.059 9 264 282 0.068 36 471 506 0.074 10 152 152 0.000 37 521 624 0.198 11 527 550 0.044 38 667 818 0.226 12 405 405 0.000 39 605 772 0.276 13 491 515 0.049 40 547 641 0.172 14 249 249 0.000 41 682 835 0.224 15 437 445 0.018 42 786 985 0.253 16 490 496 0.012 43 1176 1314 0.117 17 397 397 0.000 44 965 1051 0.089 18 396 420 0.061 45 766 932 0.217 19 335 335 0.000 46 817 916 0.121 20 457 488 0.068 47 1064 1276 0.199 21 346 430 0.243 48 1066 1300 0.220 22 383 463 0.209 49 947 1047 0.106 23 583 647 0.110 50 1343 1483 0.104 24 330 358 0.085 51 923 1063 0.152 25 880 953 0.083 52 1286 1435 0.116 26 815 937 0.150 53 1115 1237 0.109 27 440 445 0.011 54 1374 1478 0.076 218 Table B.2 The percentage error of Schaller et al.(2000) algorithm for three machine problem 1 221 221 0 31 398 405 0.018 2 481 494 0.027 32 532 545 0.024 3 303 312 0.030 33 397 458 0.154 4 226 236 0.044 34 350 373 0.066 5 239 242 0.013 35 266 270 0.015 6 351 379 0.080 36 571 638 0.117 7 436 474 0.087 37 547 550 0.005 8 268 268 0.000 38 656 666 0.015 9 242 244 0.008 39 494 511 0.034 10 291 332 0.141 40 413 437 0.058 11 335 367 0.096 41 652 671 0.029 12 305 352 0.154 42 241 261 0.083 13 471 514 0.091 43 345 357 0.035 14 203 210 0.034 44 554 554 0.000 15 389 408 0.049 45 734 769 0.048 16 478 492 0.029 46 518 531 0.025 17 194 200 0.031 47 417 422 0.012 18 408 413 0.012 48 617 636 0.031 19 334 344 0.030 49 737 763 0.035 20 429 435 0.014 50 466 475 0.019 21 207 214 0.034 51 689 728 0.057 22 498 505 0.014 52 412 429 0.041 23 430 448 0.042 53 634 662 0.044 24 335 342 0.021 54 615 626 0.018 25 409 415 0.015 55 576 629 0.092 26 456 459 0.007 56 573 719 0.255 27 506 546 0.079 57 403 416 0.032 28 345 352 0.020 58 424 475 0.120 29 428 453 0.058 59 408 466 0.142 30 214 214 0.000 60 533 570 0.069 219 Table B.2 (Continued) The percentage error of Schaller et al.(2000) algorithm for three machine problem Cl) Cl) Cl) - - - - - 1 ,) 0 0 © 0 D © - 0 - - 61 647 724 0.119 91 961 1057 0.100 62 717 778 0.085 92 777 836 0.076 63 529 585 0.106 93 764 864 0.131 64 495 555 0.121 94 820 933 0.138 65 694 735 0.059 95 527 552 0.047 66 384 433 0.128 96 878 969 0.104 67 625 637 0.019 97 897 1018 0.135 68 897 1017 0.134 98 774 856 0.106 69 596 666 0.117 99 964 1047 0.086 70 785 890 0.134 100 640 676 0.056 71 716 776 0.084 101 695 771 0.109 72 562 620 0.103 102 754 901 0.195 73 784 787 0.004 103 1260 1351 0.072 74 596 609 0.022 104 839 889 0.060 75 825 876 0.062 105 747 776 0.039 76 636 704 0.107 106 953 1044 0.095 77 667 739 0.108 107 1061 1183 0.115 78 800 859 0.074 108 1086 1184 0.090 79 1031 1087 0.054 109 1021 1177 0.153 80 643 684 0.064 110 1166 1288 0.105 81 609 676 0.110 111 847 1004 0.185 82 839 897 0.069 112 791 925 0.169 83 509 537 0.055 113 663 741 0.118 84 779 946 0.214 114 796 961 0.207 85 979 1087 0.110 115 1237 1394 0.127 86 873 925 0.060 116 1205 1316 0.092 87 697 816 0.171 117 693 800 0.154 88 902 990 0.098 118 865 1003 0.160 89 754 830 0.101 119 641 743 0.159 90 951 1074 0.129 120 920 1093 0.188 220 Table B.2 (Continued) The percentage error of Schaller et al.(2000) algorithm for three machine problem = - rD 0 - 0 D - - C, 0 1. Cl) . 0 z 0 r 0 0 0 0 - - 121 968 1090 0.126 151 1763 1927 0.093 122 1285 1478 0.150 152 1526 1646 0.079 123 1083 1210 0.117 153 1411 1654 0.172 124 941 1047 0.113 154 1080 1261 0.168 125 1162 1378 0.186 155 1293 1452 0.123 126 1028 1127 0.096 156 1206 1310 0.086 127 1088 1174 0.079 157 1676 1788 0.067 128 1537 1673 0.088 158 1955 2149 0.099 129 966 1060 0.097 159 1697 1815 0.070 130 1307 1456 0.114 160 1558 1688 0.083 131 996 1114 0.118 161 1188 1274 0.072 132 1369 1584 0.157 162 1530 1695 0.108 133 1545 1706 0.104 134 1077 1167 0.084 135 888 960 0.081 136 973 1131 0.162 137 1028 1154 0.123 138 1106 1213 0.097 139 1568 1721 0.098 140 1255 1370 0.092 141 1369 1568 0.145 142 1393 1545 0.109 143 1242 1376 0.108 144 1423 1722 0.210 145 1252 1402 0.120 146 1415 1620 0.145 147 1170 1324 0.132 148 1284 1460 0.137 149 1296 1448 0.117 150 1503 1677 0.116 221 Table B.3 The Percentage error of Schaller et al.(2002) algorithm for six machine problem 0 - 0 = - 0 - ri; 0 1. - 0 r 0 0 - 0 t'D 0 0 1 1666 1688 0.013 28 619 720 0.163 2 1086 1086 0.000 29 2500 2584 0.034 3 262 287 0.095 30 2189 2257 0.031 4 156 204 0.308 31 3266 3376 0.034 5 1391 1427 0.026 32 2945 2967 0.007 6 753 797 0.058 33 667 726 0.088 7 1863 1871 0.004 34 949 1061 0.118 8 1477 1477 0.000 35 3730 3868 0.037 9 195 231 0.185 36 2898 2978 0.028 10 390 450 0.154 37 5255 5498 0.046 11 1442 1481 0.027 38 3927 4042 0.029 12 1765 1803 0.022 39 694_ 866 0.248 13 786 786 0.000 40 721 830 0.151 14 1179 1184 0.004 41 3763 3834 0.019 15 460 520 0.130 42 4726 4884 0.033 16 383 440 0.149 43 4455 4557 0.023 17 1530 1598 0.044 44 5396 5558 0.030 18 1159 1215 0.048 45 1033 1272 0.231 19 3062 3121 0.019 46 957 1123 0.173 20 2631 2667 0.014 47 5225 5412 0.036 21 430 488 0.135 48 5572 5780 0.037 22 506 582 0.150 49 4911 5130 0.045 23 2748 2854 0.039 50 5027 5215 0.037 24 2084 2097 0.006 51 1047 1180 0.127 25 2172 2195 0.011 52 1136 1297 0.142 26 2459 2564 0.043 53 5829 6072 0.042 27 553 608 0.099 54 5802 5999 0.034