European J. Industrial Engineering, Vol. 13, No. 1, 2019, pp.54–80

Flowshop sequence-dependent group scheduling with minimisation of weighted earliness and tardiness

Taha Keshavarz*
Department of Industrial Engineering, Yazd University, P.O. Box 89195-741, Yazd, Iran
Email: [email protected]
*Corresponding author

Nasser Salmasi
Corning Incorporated, 310 N College RD, Wilmington, NC 28405, USA
Email: [email protected]

Mohsen Varmazyar
Department of Industrial Engineering, Sharif University of Technology, Tehran, Iran
Email: [email protected]
Abstract: In this research, we approach the flowshop sequence-dependent group scheduling problem with minimisation of total weighted earliness and tardiness as the objective for the first time. A mixed integer linear programming model is developed to solve the problem optimally. Since the proposed research problem is proven to be NP-hard, a hybrid meta-heuristic algorithm based on the particle swarm optimisation (PSO) algorithm, enhanced with neighbourhood search, is developed to solve the problem heuristically. Since the objective is non-regular, a timing algorithm is developed to find the best schedule for each sequence provided by the meta-heuristic algorithm. A lower bounding method is also developed by reformulating the problem as a Dantzig-Wolfe decomposition model in order to evaluate the performance of the proposed PSO algorithm. The computational results, based on the available test problems in the literature, demonstrate that the proposed PSO algorithm and the lower bounding method are quite effective, especially on instances with loose due dates. [Received: 17 February 2018; Revised: 27 July 2018; Accepted: 25 August 2018]

Keywords: earliness; tardiness; sequence-dependent setup time; group scheduling; Dantzig-Wolfe decomposition; particle swarm optimisation; PSO.

Reference to this paper should be made as follows: Keshavarz, T., Salmasi, N. and Varmazyar, M. (2019) ‘Flowshop sequence-dependent group scheduling with minimisation of weighted earliness and tardiness’, European J. Industrial Engineering, Vol. 13, No. 1, pp.54–80.

Copyright © 2019 Inderscience Enterprises Ltd.

Biographical notes: Taha Keshavarz is currently an Assistant Professor at the Department of Industrial Engineering at Yazd University, Yazd, Iran. He received his PhD from the Department of Industrial Engineering at Sharif University of Technology.
His primary areas of research interest are applied operations research and planning and scheduling in manufacturing and services.

Nasser Salmasi received his PhD in Industrial Engineering from Oregon State University, USA. He worked in the Department of Industrial Engineering at Sharif University of Technology, Tehran, Iran as an Associate Professor for nine years. Since October 2015 he has been with Corning Incorporated in the USA as an operations research analyst. His primary areas of research interest are applied operations research, sequencing and scheduling, and simulation.

Mohsen Varmazyar is a PhD candidate in Industrial Engineering at Sharif University of Technology in Iran. He works on the operating room scheduling problem under time uncertainty and on modelling the duration of surgical procedures by continuous phase-type distributions. He received his BSc and MSc degrees, both in Industrial Engineering. His research interests focus on applied operations research, scheduling, and stochastic processes.

1 Introduction

Flowshop group scheduling problems with sequence-dependent setup times (FGSD) have been among the interests of scheduling researchers for more than a decade. In FGSD, it is assumed that several groups of jobs (say N groups), with a different number of jobs in each group, are assigned to a flowshop cell with m machines. Each machine needs a setup time to start processing the first job of each group. The setup time depends on the preceding group processed on the machine. This situation is usually referred to as sequence-dependent group scheduling. The challenge is to find the best sequence of groups and jobs in order to optimise some measure of effectiveness.

Schaller et al. (2000) propose several heuristic algorithms to solve the Fm | fmls, shgi, prmu | Cmax problem. They also propose a lower bounding method to estimate the quality of the proposed heuristic algorithms.
Two solution methods based on a genetic algorithm (GA) and a memetic algorithm (MA) are proposed by Franca et al. (2005) for the Fm | fmls, shgi, prmu | Cmax problem. Hendizadeh et al. (2008) propose a meta-heuristic algorithm based on tabu search (TS) to solve the Fm | fmls, shgi, prmu | Cmax problem. Salmasi et al. (2011) provide a meta-heuristic algorithm based on hybrid ant colony optimisation (ACO) and show that their algorithm has superior performance to the MA developed by Franca et al. (2005). They also propose a lower bounding method for the research problem.

Salmasi et al. (2010) investigate the Fm | fmls, shgi, prmu | ΣCj problem for the first time and propose a mathematical model for the research problem. Then, they use the mathematical model to develop a lower bound based on the branch-and-price algorithm. They also propose two meta-heuristic algorithms based on TS and ACO to solve the problem. A meta-heuristic algorithm based on particle swarm optimisation (PSO) is developed by Hajinejad et al. (2011) for the Fm | fmls, shgi, prmu | ΣCj problem. They compare their proposed PSO with the ACO proposed by Salmasi et al. (2010) and show its superiority. Naderi and Salmasi (2012) develop two efficient mathematical models for the Fm | fmls, shgi, prmu | ΣCj problem. Keshavarz and Salmasi (2014) approach the Fm | fmls, shgi, prmu | ΣCj problem and propose a hybrid genetic algorithm. They also enhance the lower bounding mechanism proposed by Salmasi et al. (2010). Costa et al. (2014) propose a GA-based algorithm for the Fm | fmls, shgi, prmu | ΣCj problem. Two detailed literature reviews in the area of FGSD can be found in Neufeld et al. (2016) and Allahverdi (2015).

In practice, the orders (which can be considered as jobs) usually have predetermined due dates, and the supplier should pay a penalty for any delay. Thus, approaching scheduling problems with objectives related to due dates makes the problems closer to reality.
There are cases, such as perishable goods manufacturing and catering, in which delivering products either before or after the due date causes expenses for the manufacturer. To the best of our knowledge, the FGSD problem has been approached only with minimisation of makespan and minimisation of total completion time as the objective. Approaching the problem with objectives related to due dates significantly adds to its complexity. This is our motivation to approach the problem by considering one of the most complex objective functions in the scheduling environment, i.e., minimisation of weighted earliness and tardiness. This performance measure is non-regular because it is not a non-decreasing function in terms of job completion times. By introducing earliness penalties, it becomes necessary to decide about the required idleness between the processing of jobs, which makes the problem more complex than approaching it with regular objectives. This objective function also has many real world applications, since it can reduce total costs significantly. Minimising earliness and tardiness simultaneously is also very critical in just-in-time production systems.

The rest of the paper is organised as follows. In Section 2, the proposed research problem is described and a mathematical model is proposed to solve the problem optimally. The details of the proposed meta-heuristic algorithm are discussed in Section 3. The lower bounding method is presented in Section 4. The similarities and differences of the proposed techniques with the available ones in the literature for the research problem with other objectives are discussed as well. Computational results are reported in Section 5. Finally, conclusions and directions for future research are discussed in Section 6.

2 Problem description and mathematical model

The research problem can be denoted as Fm | fmls, shgi, prmu | Σ(w′j Ej + w″j Tj) based on Pinedo (2012).
The reasons for this notational choice and the assumptions of this research are:

• It is assumed that N groups are assigned to a flowshop cell with m machines (Fm).
• Each group (g = 1, …, N) has a different number of jobs (fmls).
• The setup time of a group (say group g) on each machine (say machine i) depends on the immediately preceding group (say group h) that is processed on that machine (shgi).
• All the jobs and groups are processed in the same sequence on all the machines [permutation scheduling (prmu)].
• The objective is to minimise the total weighted earliness and tardiness, i.e., Σ(w′j Ej + w″j Tj).

The other assumptions of this research are as follows:

• Jobs do not need separate setup times on machines. If a job does, the required setup time is assumed to be included in its processing time.
• The setups are assumed to be anticipatory. This means that the machine setup for a group can be started before any job that belongs to the group is available.
• Once the processing of the jobs of a group is started, all of the jobs belonging to that group should be processed sequentially, and jobs of other groups cannot be processed in between. This assumption is usually called the group technology assumption.

To the best of our knowledge, Naderi and Salmasi (2012) propose the best mixed integer programming model for the Fm | fmls, shgi, prmu | ΣCj problem. Motivated by this, a mathematical model is proposed for the research problem. It is assumed that there is a dummy group that is processed as the first group on each machine.
The parameters, decision variables, and the mathematical model are as follows:

Parameters
N	Number of groups
g, h	Indices used for groups, g, h = 0, …, N
m	Number of machines
i	Index used for machines, i = 1, …, m
ng	Number of jobs in group g, g = 1, …, N
j, l	Indices used for jobs in group g, j, l = 1, …, ng
pgji	Processing time of job j of group g on machine i, g = 1, …, N; j = 1, …, ng; i = 1, …, m
dgj	Due date of job j of group g, g = 1, …, N; j = 1, …, ng
w′gj	Earliness penalty of job j of group g, g = 1, …, N; j = 1, …, ng
w″gj	Tardiness penalty of job j of group g, g = 1, …, N; j = 1, …, ng
shgi	Setup time to process group g on machine i if group h is the preceding group, h = 0, …, N; g = 1, …, N; i = 1, …, m
M	A large positive number

Decision variables
Xgjl	1 if job l is processed after job j in group g, and 0 otherwise; g = 1, …, N; j, l = 1, …, ng; l ≠ j
Uhg	1 if group g is processed immediately after group h, and 0 otherwise; h = 0, …, N; g = 1, …, N; g ≠ h
Cgji	Completion time of job j of group g on machine i, g = 1, …, N; j = 1, …, ng; i = 1, …, m
FTgi	Completion time of group g on machine i, g = 1, …, N; i = 1, …, m
STgi	Starting time of group g on machine i, g = 1, …, N; i = 1, …, m
Egj	Earliness of job j of group g, g = 1, …, N; j = 1, …, ng
Tgj	Tardiness of job j of group g, g = 1, …, N; j = 1, …, ng

The model:

Min Σ_{g=1}^{N} Σ_{j=1}^{ng} (w′gj Egj + w″gj Tgj)	(1)

Subject to:

Cgji ≥ Cgj(i−1) + pgji	g = 1, …, N; j = 1, …, ng; i = 1, …, m	(2)

Cgji ≥ Cgli + pgji − M(1 − Xglj)	g = 1, …, N; j = 1, …, ng; l = 1, …, ng; l ≠ j; i = 1, …, m	(3)

Cgli ≥ Cgji + pgli − M Xglj	g = 1, …, N; j = 1, …, ng; l = 1, …, ng; l ≠ j; i = 1, …, m	(4)

Cgji ≥ STgi + pgji	g = 1, …, N; j = 1, …, ng; i = 1, …, m	(5)

FTgi ≥ Cgji	g = 1, …, N; j = 1, …, ng; i = 1, …, m	(6)

STgi ≥ FThi + shgi − M(1 − Uhg)	h = 0, …, N; g = 1, …, N; g ≠ h; i = 1, …, m	(7)

Σ_{h=0, h≠g}^{N} Uhg = 1	g = 1, …, N	(8)
Σ_{g=1, g≠h}^{N} Uhg ≤ 1	h = 1, …, N	(9)

Σ_{g=1}^{N} U0g = 1	(10)

Uhg + Ugh ≤ 1	h = 1, …, N − 1; g = h + 1, …, N	(11)

Cgjm = dgj + Tgj − Egj	g = 1, …, N; j = 1, …, ng	(12)

Cgji, STgi, FTgi, Egj, Tgj ≥ 0	(13)

Uhg, Xgjl ∈ {0, 1}	(14)

The objective function is minimisation of total weighted earliness and tardiness, as presented in equation (1). The completion time of a job on a machine (say machine i) should be greater than or equal to its completion time on the preceding machine (say machine i − 1) plus its processing time on machine i. Constraint set (2) is incorporated into the model to support this fact. At most one job can be processed on each machine at each period of time. Constraint sets (3) and (4) are incorporated into the model for this reason. The completion time of a job on a machine should be greater than or equal to the starting time of its group on that machine plus the processing time of the job on that machine. Constraint set (5) is incorporated into the model to support this fact. Constraint set (6) is incorporated into the model to support the fact that the completion time of a group on a machine should be greater than or equal to the completion times of all the jobs belonging to that group. The start time of processing the jobs belonging to a group on a machine should be greater than or equal to the completion time of the preceding group plus the required setup time for preparing the machine to process the jobs belonging to the group. This constraint is presented as equation (7). Each group has a preceding group on each machine. Constraint set (8) is incorporated into the model for this reason. Constraint set (9) is incorporated into the model to make sure that each group has at most one successor group. The dummy group is processed on all the machines as the first group. Constraint set (10) is incorporated into the model to support this fact.
Each group can be processed either before or after another group. This is expressed as constraint set (11). Constraint set (12) is incorporated into the model to calculate the earliness and tardiness of each job. Constraint sets (13) and (14) define the types of the variables.

Salmasi et al. (2010) show that the Fm | fmls, shgi, prmu | ΣCj problem is NP-hard. Since the objective function approached in this research is more general than minimisation of total completion time, the proposed research problem is NP-hard as well. Thus, meta-heuristic algorithms are required to solve industry-sized instances.

3 Particle swarm optimisation

PSO, introduced by Eberhart and Kennedy (1995), is a population-based search algorithm. Recently, researchers have developed several variants of PSO to solve scheduling problems (Kuo et al., 2009; Hajinejad et al., 2011; Arabameri and Salmasi, 2013). PSO is an iterative algorithm that starts with a number of initial feasible solutions known as particles. The number of initial particles is called p-size. Each particle is represented by two different vectors: its current position and its velocity. The position and the velocity of the ith particle in a d-dimensional search space at iteration t are represented as X_i^t = [x_i1^t, x_i2^t, …, x_id^t] and V_i^t = [v_i1^t, v_i2^t, …, v_id^t], respectively. In this research, the dimension of the search space, i.e., d, is the sum of the number of groups and the total number of jobs in all the groups. All the particles move through the d-dimensional search space by learning from the movement of the swarm population. Therefore, particles move toward areas with better objective function values. The position with the best objective function value obtained so far by the ith particle up to the tth iteration is considered as the p-best (P_i^t). The best position reached by all the particles during the t iterations is called the g-best (G^t).
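Each particle is scored by the objective of equation (1), with earliness and tardiness obtained from last-machine completion times as in constraint set (12): at the optimum of that constraint, Egj = max(dgj − Cgjm, 0) and Tgj = max(Cgjm − dgj, 0). A minimal sketch of this evaluation (function and variable names are illustrative, not the authors' code):

```python
def earliness_tardiness(completion, due, w_early, w_tardy):
    """Weighted earliness/tardiness of one job, per constraint set (12):
    C = d + T - E with E, T >= 0 and both minimised, which at the optimum
    gives E = max(d - C, 0) and T = max(C - d, 0)."""
    E = max(due - completion, 0)
    T = max(completion - due, 0)
    return w_early * E + w_tardy * T

def total_objective(jobs):
    """Objective (1): sum of weighted earliness/tardiness over all jobs.
    jobs: list of (completion_time, due_date, w_early, w_tardy) tuples."""
    return sum(earliness_tardiness(c, d, we, wt) for c, d, we, wt in jobs)
```

For instance, a job completing at time 5 with due date 8 and earliness weight 2 incurs a penalty of 6, the same as a job completing at 10 with the same due date and tardiness weight 3.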
The new velocity of the ith particle at the (t + 1)th iteration is updated based on the current velocity, the p-best, and the g-best, and is calculated by the following formula:

V_i^{t+1} = w V_i^t + c1 r1 (P_i^t − X_i^t) + c2 r2 (G^t − X_i^t)	(15)

where w is the inertia weight that controls the influence of the previous velocity of the particle, and c1 and c2 are constants denoting the self-confidence coefficient and the social confidence coefficient, respectively. r1 and r2 are random numbers generated uniformly in [0, 1]. Shi and Eberhart (1999) observe that if the value of w varies with time from high to low, the PSO algorithm can find a better solution. Also, Ratnaweera et al. (2004) observe that if the value of c1 varies with time from high to low and the value of c2 varies with time from low to high, the PSO algorithm can find better solutions. Therefore, in this research, instead of applying fixed values for w, c1, and c2 in equation (15), dynamic values are assigned to them based on the method proposed by Hajinejad et al. (2011). Dynamic coefficients can improve the solution quality for the FGSD problem through a nonlinear updating scheme. The position of the ith particle at the (t + 1)th iteration is updated by equation (16):

X_i^{t+1} = X_i^t + V_i^{t+1}	(16)

In order to control the excessive roaming of particles outside the search space, the velocity values are generally restricted to the range [vmin, vmax]. Based on the computational experiments in this research and in Arabameri and Salmasi (2013), the new velocity of each particle is restricted to the interval [vmin, vmax] = [–4, 4]. The particle moves toward a new position, and this process continues until reaching a predetermined stopping criterion.

3.1 Initial population

The PSO algorithm requires an initial population to trigger the particles' movement through the d-dimensional search space.
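The update rules of equations (15) and (16), together with the [–4, 4] velocity clamp, can be sketched as follows. Drawing r1 and r2 once per particle is an assumption of this sketch (a common PSO convention); the paper does not fix that detail:

```python
import random

def update_particle(x, v, p_best, g_best, w, c1, c2, v_min=-4.0, v_max=4.0):
    """One PSO step per equations (15)-(16): the new velocity blends inertia
    (w), attraction to the particle's own best position (p_best, weight c1)
    and to the swarm best (g_best, weight c2); each velocity component is
    clamped to [v_min, v_max] before the position is moved."""
    r1, r2 = random.random(), random.random()   # uniform in [0, 1)
    new_v = [max(v_min, min(v_max,
                 w * v[d] + c1 * r1 * (p_best[d] - x[d])
                          + c2 * r2 * (g_best[d] - x[d])))
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

With strong attraction toward a distant best position, the clamp is what keeps the step bounded at the [–4, 4] interval used in the paper.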
Several studies, such as Lu and Logendran (2013), have shown that using better quality initial solutions can help meta-heuristic algorithms identify a better final solution. Therefore, several efficient rules and algorithms are applied to generate nine different initial solutions. To generate the initial solutions, three different mechanisms are considered, based on the earliest due date (EDD) rule, the longest tardiness/earliness rate (LTER) rule, and an algorithm to minimise total earliness and tardiness with loose or tight due dates for single machine (METSM) scheduling problems. The EDD and LTER rules are applied by Arabameri and Salmasi (2013), and the METSM algorithms can be found in Pinedo (2012). The METSM algorithms for loose or tight due dates are polynomial time algorithms for minimising the total earliness and tardiness on a single machine. Each initial solution provides a sequence of groups and a sequence of jobs within groups. The solution representation for each initial particle is a string such as [G1, G2, …, GN | J11, J12, …, J1n1 | … | JN1, JN2, …, JNnN]. The first part of this string represents the sequence of processing the groups. The other parts represent the sequence of jobs within each group, sequentially. By using different methods of generating the initial sequences for the groups and the jobs, nine different initial solutions are obtained, as shown in Table 1.
Table 1	The methods used to generate initial solutions

Method used for the	Method used for the sequence of jobs
sequence of groups	EDD	LTER	METSM
EDD	Initial solution 1	Initial solution 4	Initial solution 7
LTER	Initial solution 2	Initial solution 5	Initial solution 8
METSM	Initial solution 3	Initial solution 6	Initial solution 9

For implementing these initial solution generating mechanisms, the average due date and the average earliness and tardiness penalties of the jobs belonging to a group are considered as the parameters of that group. The rest of the initial population is generated randomly.

3.2 Hybrid PSO algorithm

Since the PSO algorithm is basically developed for continuous optimisation problems and the FGSD problem is a discrete one, an encoding scheme based on the smallest position value (SPV) rule, applied by Hajinejad et al. (2011) and Tadayon and Salmasi (2013), is used. The SPV rule converts the continuous position values of the particles to job and group sequences.

Figure 1	Pseudo-code for the HPSO algorithm

Begin
  Generate initial population of N solutions (particles);
  Find permutation for each particle;
  For each particle i ∈ N calculate objective function (i);
  Set p-best as the best neighbourhood of each particle;
  Set g-best as the best particle;
  While termination condition is not true DO:
    For each particle DO:
      Calculate particle velocity according to equation (15);
      Update particle position according to equation (16);
      Evaluate the new objective function of each particle;
      Apply neighbourhood search;
      Update the best neighbourhood of each particle (p-best);
      Update the best particle (g-best);
    End
  End
End

Assume that the position vector of each particle for the jobs of a group is an n-dimensional vector and each element of the vector is related to a job. To determine this relation based on the SPV rule, the smallest position value in the position vector is first mapped to the first rank.
Then, the second smallest position value is nominated as the second rank. In the same way, all position values are converted to a job sequence of a group. For instance, assume that at the tth iteration the position vector of the ith particle is X_i^t = [0.3, 0.1, 0.25]. In this case, sorting the jobs based on their position values gives the processing sequence [J2, J3, J1] for this particle.

Then, the neighbourhood search strategy is combined with the PSO algorithm to enhance its searching ability and to balance exploration and exploitation. Therefore, a hybrid PSO (HPSO) algorithm, resulting from combining the neighbourhood search and the PSO algorithm, is developed for this research problem. A pseudo-code for the HPSO algorithm is shown in Figure 1.

3.3 Neighbourhood search approach

In this research, four neighbourhood search (NS) approaches are generated as follows:

NS1	Two groups are selected and their positions are exchanged with each other. Then, the positions of adjacent jobs in the two selected groups are swapped. For instance, consider a 3-group problem with three jobs in each group. Assume that the initial sequence of the groups and the jobs is [G1, G2, G3 | J11, J12, J13 | J23, J22, J21 | J32, J33, J31]. We select groups 2 and 3, exchange their positions, and swap adjacent jobs in both groups. The result of exchanging the positions of the groups is [G1, G3, G2]. Swapping adjacent jobs in group 2 generates the two job sequences [J22, J23, J21] and [J23, J21, J22]. The results of swapping jobs in group 3 are [J33, J32, J31] and [J32, J31, J33].

NS2	Two groups are selected and their positions are exchanged with each other. Then, the positions of adjacent jobs in all groups are swapped. Based on the previous example, the positions of adjacent jobs in all three groups are swapped. The results of swapping jobs in group 1 are [J12, J11, J13] and [J11, J13, J12]. The results of swapping jobs in groups 2 and 3 are the same as presented in NS1.
NS3	Two groups are selected and their positions are exchanged with each other. Then, if the objective function value after exchanging the two groups is better than the current objective function value, the positions of adjacent jobs in the two selected groups are swapped.

NS4	Two groups are selected and their positions are exchanged with each other. Then, if the objective function value after exchanging the two groups is better than the current objective function value, the positions of adjacent jobs in all groups are swapped.

Note that the neighbourhood search is not applied directly to the continuous position values of the particles but to the discrete group and job sequences. Thus, when we change the positions of groups or jobs, we should make the corresponding change in the continuous position values. For instance, if [J2, J3, J1, J4] is the job sequence of a group related to the continuous position values (0.11, 0.32, 0.06, 1.13), swapping jobs 2 and 4 corresponds to swapping 0.11 and 1.13.

In all the above approaches, when the p-best of a particle is updated, a neighbourhood search is performed around it to find a better solution. If the neighbourhood search finds a better solution, the better one is considered as the new p-best. After updating all the p-bests by the neighbourhood search approach and identifying the new g-best, a neighbourhood search is performed around the new g-best to find a better one. In this research, four different HPSO algorithms based on NSi, i = 1, …, 4 are developed, denoted HPSOI, HPSOII, HPSOIII, and HPSOIV.

3.4 Timing algorithm

In the scheduling context, a regular performance measure, such as minimisation of makespan (Cmax), is a function that is non-decreasing in terms of job completion times (Cj). In this case, the objective function cannot be improved by increasing the job completion times.
However, in the case of non-regular objective functions, the objective function may be improved by increasing job completion times. For example, when job j has a due date dj, the earliness penalty, i.e., Ej = max(dj − Cj, 0), is non-increasing in terms of the job completion time: by increasing the job completion time, the earliness penalty may decrease. When the objective function is regular, scheduling a fixed and predetermined sequence of jobs is trivial, i.e., all the jobs should be completed as quickly as possible. However, when the objective function is non-regular, it is more challenging. In other words, for a given sequence, there is still work to do to find the best schedule: inserting unforced idle time between the processing of jobs may improve the overall objective function. So, even for a predetermined sequence of jobs, we need a timing algorithm to determine the best starting time for processing each job. When the objective function is non-regular, such as minimisation of total earliness and tardiness, finding the optimal schedule for a fixed and predetermined sequence is not trivial. Unforced idleness may be required between the processing of two consecutive jobs or groups. This challenge makes the problem quite different from, and more complex than, the existing research in the literature.

In this research, a timing algorithm is proposed to find the optimal schedule, as well as the objective function value, for a given sequence of groups and jobs. Hendel and Sourd (2007) propose an efficient timing algorithm for problems with convex objective functions. In this research, we apply this algorithm to solve the single machine earliness-tardiness problem with job release dates (rj), i.e., 1 | rj | Σ(w′j Ej + w″j Tj), in order to find the optimal schedule for a given sequence of groups and jobs of the Fm | fmls, shgi, prmu | Σ(w′j Ej + w″j Tj) problem.
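This reduction — run the first m − 1 machines without unforced idleness, turn each (m − 1)th-machine completion time into a release date on the last machine, and fold each inter-group setup into the last job of its group — can be sketched as follows. The data layout (dictionary keys, 0-based machine indices) is an illustrative assumption, not the authors' implementation; the formal statement is given by the equations in the text:

```python
def release_dates_and_modified_data(groups, p, setup, due, m):
    """Transform a fixed sequence of groups and jobs into the data of a
    single machine timing problem on the last machine.

    groups: ordered list of (group id, ordered list of job ids)
    p[(g, j)]: processing times on machines 0..m-1 (0-based)
    setup[(h, g)]: setup times of group g after group h, per machine
                   (h = 0 is the dummy group)
    due[(g, j)]: due date of job j of group g
    """
    # Forward pass: machines 0..m-2 are run without unforced idleness.
    C = {}
    for i in range(m - 1):
        t, prev = 0.0, 0              # machine-ready time; dummy group first
        for g, jobs in groups:
            t += setup[(prev, g)][i]  # anticipatory group setup
            for j in jobs:
                start = t if i == 0 else max(t, C[(g, j, i - 1)])
                C[(g, j, i)] = start + p[(g, j)][i]
                t = C[(g, j, i)]
            prev = g

    # Release dates and modified data on the last machine.
    r, p_mod, d_mod = {}, {}, {}
    for idx, (g, jobs) in enumerate(groups):
        for j in jobs:
            r[(g, j)] = C[(g, j, m - 2)]      # completion on machine m-1
            p_mod[(g, j)] = p[(g, j)][m - 1]  # unchanged for inner jobs
            d_mod[(g, j)] = due[(g, j)]
        if idx < len(groups) - 1:
            # Last job of a non-final group absorbs the next group's setup
            # into both its processing time and its due date.
            nxt = groups[idx + 1][0]
            last = jobs[-1]
            p_mod[(g, last)] += setup[(g, nxt)][m - 1]
            d_mod[(g, last)] += setup[(g, nxt)][m - 1]
    # First job of the first group also waits for the initial setup.
    g1, jobs1 = groups[0]
    r[(g1, jobs1[0])] = max(r[(g1, jobs1[0])], setup[(0, g1)][m - 1])
    return r, p_mod, d_mod
```

The resulting single machine instance is then handed to the Hendel and Sourd (2007) timing algorithm, which is not reproduced here.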
The basic idea for developing a timing algorithm for a fixed sequence of groups and jobs of the Fm | fmls, shgi, prmu | Σ(w′j Ej + w″j Tj) problem is to transform the problem into a 1 | rj | Σ(w′j Ej + w″j Tj) problem. In the first step, to transform the problem into a single machine problem, the groups and jobs are processed on the first m − 1 machines as quickly as possible. Then, the completion time of each job on the (m − 1)th machine is considered as its release date for processing on the mth machine.

Suppose a fixed sequence of groups and jobs [G1, G2, …, GN | J11, J12, …, J1n1 | … | JN1, JN2, …, JNnN]. First, the groups and jobs are processed based on this predetermined sequence on the first m − 1 machines without any unforced idleness. After this step, let Cgj(m−1) be the completion time of job j of group g on the (m − 1)th machine. Consider Cgj(m−1) as the release date of this job on the last machine. By this approach, the problem can be transformed into a 1 | fmls, shg, rj | Σ(w′j Ej + w″j Tj) problem. Every sequence of groups and jobs for the 1 | fmls, shg, rj | Σ(w′j Ej + w″j Tj) problem can in turn be transformed into a job sequence for the 1 | rj | Σ(w′j Ej + w″j Tj) problem: only the processing time and due date of the last job of each group should be modified to account for the required setup times between the groups. The required setup time is added to the processing time and due date of the last job of each group [equations (19) and (20)], and the other jobs do not change [equations (17) and (18)]. Since there is no setup after the last job of the last group, the processing time and due date of this job do not change [equations (21) and (22)]. Let p̄gj and d̄gj be the modified processing time and due date of job j of group g.
Then, the modified processing times and due dates can be calculated as follows, where Gk denotes the kth group in the sequence and JGkj the jth job in the sequence of group Gk:

p̄(Gk, JGkj) = p(Gk, JGkj, m)	k = 1, …, N; j = 1, …, nGk − 1	(17)

d̄(Gk, JGkj) = d(Gk, JGkj)	k = 1, …, N; j = 1, …, nGk − 1	(18)

p̄(Gk, JGknGk) = p(Gk, JGknGk, m) + s(Gk, G(k+1), m)	k = 1, …, N − 1	(19)

d̄(Gk, JGknGk) = d(Gk, JGknGk) + s(Gk, G(k+1), m)	k = 1, …, N − 1	(20)

p̄(GN, JGNnGN) = p(GN, JGNnGN, m)	(21)

d̄(GN, JGNnGN) = d(GN, JGNnGN)	(22)

The release dates (rgj) in the transformed single machine problem are as follows:

r(G1, JG11) = max(C(G1, JG11, m−1), s(0, G1, m))	(23)

r(G1, JG1j) = C(G1, JG1j, m−1)	j = 2, …, nG1	(24)

r(Gk, JGkj) = C(Gk, JGkj, m−1)	k = 2, …, N; j = 1, …, nGk	(25)

The first job of the first group can be processed only after its completion on the (m − 1)th machine and after the required initial setup on the last machine, so its release date is the maximum of these two values, as expressed in equation (23). The release dates of the other jobs are simply set to their completion times on the (m − 1)th machine [equations (24) and (25)]. After transforming the timing problem into a single machine timing problem, the timing algorithm proposed by Hendel and Sourd (2007) for the 1 | rj | Σ(w′j Ej + w″j Tj) problem is applied to find the optimal timing and objective function value for the given sequence.

4 Lower bound

Salmasi et al. (2010) developed a lower bounding method based on the branch-and-price (B&P) algorithm for the Fm | fmls, shgi, prmu | ΣCj problem. In the B&P algorithm, the problem is reformulated with a huge number of variables and then decomposed into a master problem and one or more sub-problems. The LP relaxation of the master problem is solved over a restricted set of columns. To check the optimality of the master problem, the sub-problems are solved to try to identify columns to enter the basis. If such columns are found, the master problem is re-optimised.
Branching occurs when no columns price out to enter the basis and the solution does not satisfy the integrality condition. The B&P algorithm allows column generation to be applied throughout the branch-and-bound tree. The details of the B&P algorithm can be found in Savelsbergh (1997) and Barnhart et al. (1998). Keshavarz and Salmasi (2014) enhance the method developed by Salmasi et al. (2010) by proposing a more efficient algorithm to solve the sub-problems as well as a better branching rule.

Motivated by Keshavarz and Salmasi (2014), we propose a lower bounding method for the problem with minimisation of weighted earliness and tardiness. Since the objective function is non-regular, the sub-problems are more complex than the ones proposed by Keshavarz and Salmasi (2014), and our contribution is to develop efficient methods for tackling the sub-problems. The problem is first reformulated as a Dantzig-Wolfe decomposition model. The reformulated mathematical model and the proposed lower bounding method are described in the following sections. The following parameters and decision variables, in addition to the ones defined in Section 2, are required:

Parameters
Qi	Set of all feasible schedules on machine i, i = 1, …, m
Ki	Number of all feasible schedules on machine i, i = 1, …, m
qik	kth feasible schedule on machine i, i = 1, …, m; k = 1, …, Ki
Cgjik	Completion time of job j of group g on machine i in schedule qik, g = 1, …, N; j = 1, …, ng; i = 1, …, m; k = 1, …, Ki
Egjk	Earliness of job j of group g in schedule qmk, i.e., Egjk = max{dgj − Cgjmk, 0}, g = 1, …, N; j = 1, …, ng; k = 1, …, Km
Tgjk	Tardiness of job j of group g in schedule qmk, i.e., Tgjk = max{Cgjmk − dgj, 0}, g = 1, …, N; j = 1, …, ng; k = 1, …, Km

Decision variables
λik	1 if schedule qik ∈ Qi is selected on machine i, and 0 otherwise; i = 1, …, m; k = 1, …, Ki

4.1 Decomposition model

The research problem is reformulated as a set-partitioning master problem.
In this model, each column represents a feasible schedule on a machine (say machine $i$). Define $Q_i$ as the set of all feasible schedules on machine $i$. Each schedule $q_i^k$, $k = 1, \ldots, K_i$, where $K_i = |Q_i|$, is defined by a set of completion times of jobs, i.e., $q_i^k = \{ C_{gji}^k \}$. There is a very large number of feasible schedules on each machine, and hence it is not possible to include all of them in the model. Therefore, to solve the LP relaxation of the master problem, the column generation technique is applied: improving columns are added to the restricted master problem by solving the sub-problems. The LP relaxation of the reformulated model can be presented as follows:

$\min Z = \sum_{k=1}^{K_m} \sum_{g=1}^{N} \sum_{j=1}^{n_g} \lambda_m^k ( w'_{gj} E_{gj}^k + w''_{gj} T_{gj}^k )$   (26)

Subject to:

$\sum_{k=1}^{K_i} \lambda_i^k C_{gji}^k - \sum_{k=1}^{K_{i-1}} \lambda_{i-1}^k C_{gj(i-1)}^k \ge p_{gji}$   $i = 2, \ldots, m$; $g = 1, \ldots, N$; $j = 1, \ldots, n_g$   $(v_{gji})$   (27)

$\sum_{k=1}^{K_i} \lambda_i^k = 1$   $i = 1, \ldots, m$   $(\alpha_i)$   (28)

$\lambda_i^k \ge 0$   $i = 1, \ldots, m$; $k = 1, \ldots, K_i$   (29)

The objective is to minimise the total weighted earliness and tardiness penalties, as presented by equation (26). A job can be completed on machine $i$ only after being processed on the preceding machine and then for its processing time on machine $i$. This linking constraint is expressed as equation (27); the constraint set relates the completion times of a job on two consecutive machines. Exactly one feasible schedule should be selected on each machine; constraint set (28), usually referred to as the convexity constraint, is incorporated into the model for this reason. The 0-1 variables $\lambda_i^k$ are relaxed to continuous variables in the model. $v_{gji}$ and $\alpha_i$ are defined as the dual decision variables associated with constraint sets (27) and (28), respectively. The sub-problems are employed to check the variables that are left out of the model and to find possibly improving variables.
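To make the structure of (26)-(29) concrete, the following is a minimal sketch of a restricted master LP for a toy instance: one group, two jobs, two machines (m = 2), and two candidate schedules per machine. All numbers (completion times, processing times, due dates, unit penalties) and the solver choice (SciPy's `linprog`) are ours for illustration, not taken from the paper's test instances.

```python
# Toy restricted master LP in the spirit of (26)-(29).
from scipy.optimize import linprog

p2 = [3, 3]             # processing times of the two jobs on machine 2
d = [6, 9]              # due dates; unit penalties w' = w'' = 1 assumed
C1 = [[2, 5], [3, 6]]   # job completion times in the two candidate machine-1 schedules
C2 = [[5, 9], [6, 10]]  # job completion times in the two candidate machine-2 schedules

def et_cost(C):
    # weighted earliness/tardiness cost of a machine-2 column, as in (26)
    return sum(max(dj - c, 0) + max(c - dj, 0) for c, dj in zip(C, d))

# variable order: [lam_1^1, lam_1^2, lam_2^1, lam_2^2]; only machine-m columns carry cost
c = [0, 0, et_cost(C2[0]), et_cost(C2[1])]

# linking constraints (27), rewritten as <= for linprog:
#   -(sum_k lam_2^k C2_j^k - sum_k lam_1^k C1_j^k) <= -p2_j
A_ub = [[C1[0][j], C1[1][j], -C2[0][j], -C2[1][j]] for j in range(2)]
b_ub = [-p2[j] for j in range(2)]

# convexity constraints (28): exactly one schedule per machine
A_eq = [[1, 1, 0, 0], [0, 0, 1, 1]]
b_eq = [1, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # lam >= 0 by default
```

In a real column generation run, the candidate schedule lists would start small and grow as the sub-problems supply improving columns, with the LP re-solved at each round.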
When a solution is non-optimal, its corresponding dual solution is infeasible; in other words, primal non-optimality corresponds to dual infeasibility. So, the dual of the model can be used to check the optimality of the solution and to derive the sub-problems. The dual of the LP relaxation of the master problem can be presented as follows:

$\max Z_D = \sum_{i=2}^{m} \sum_{g=1}^{N} \sum_{j=1}^{n_g} p_{gji} v_{gji} + \sum_{i=1}^{m} \alpha_i$   (30)

Subject to:

$- \sum_{g=1}^{N} \sum_{j=1}^{n_g} C_{gj1}^k v_{gj2} + \alpha_1 \le 0$   $k = 1, \ldots, K_1$   (31)

$\sum_{g=1}^{N} \sum_{j=1}^{n_g} C_{gji}^k v_{gji} - \sum_{g=1}^{N} \sum_{j=1}^{n_g} C_{gji}^k v_{gj(i+1)} + \alpha_i \le 0$   $i = 2, \ldots, m - 1$; $k = 1, \ldots, K_i$   (32)

$\sum_{g=1}^{N} \sum_{j=1}^{n_g} C_{gjm}^k v_{gjm} - \sum_{g=1}^{N} \sum_{j=1}^{n_g} ( w'_{gj} E_{gj}^k + w''_{gj} T_{gj}^k ) + \alpha_m \le 0$   $k = 1, \ldots, K_m$   (33)

$v_{gji} \ge 0$   $i = 2, \ldots, m$; $g = 1, \ldots, N$; $j = 1, \ldots, n_g$   (34)

$\alpha_i$ unrestricted   $i = 1, \ldots, m$   (35)

Let $v_{gj1} = 0$ for $g = 1, \ldots, N$ and $j = 1, \ldots, n_g$. By considering the constraints of the dual model, and in order to find the schedule with the most negative reduced cost on the $i$th machine, $i = 1, \ldots, m - 1$, the following sub-problem (SP$_i$) should be solved:

SP$_i$ $(i = 1, \ldots, m - 1)$:

$\min Z_{SP_i} = \sum_{g=1}^{N} \sum_{j=1}^{n_g} ( v_{gj(i+1)} - v_{gji} ) C_{gji} - \alpha_i$   (36)

Subject to: constraint sets (3)–(11) for machine $i$.

$\alpha_i$ is a constant, so SP$_i$ is equivalent to minimising the total weighted completion time in a single machine sequence-dependent group scheduling problem, i.e., $1 | fmls, s_{hg} | \sum w_j C_j$, in which the weight of job $j$ of group $g$ is $v_{gj(i+1)} - v_{gji}$. In order to find the schedule with the most negative reduced cost on the last machine, the following sub-problem (SP$_m$) should be solved:

SP$_m$:

$\min Z_{SP_m} = \sum_{g=1}^{N} \sum_{j=1}^{n_g} ( w'_{gj} E_{gj} + w''_{gj} T_{gj} ) - \sum_{g=1}^{N} \sum_{j=1}^{n_g} v_{gjm} C_{gjm} - \alpha_m$   (37)

Subject to: constraint sets (3)–(12) for machine $m$.
Since $C_{gjm} = d_{gj} + T_{gj} - E_{gj}$, the objective function of SP$_m$ can be rearranged as follows:

$Z_{SP_m} = \sum_{g=1}^{N} \sum_{j=1}^{n_g} ( w'_{gj} E_{gj} + w''_{gj} T_{gj} ) - \sum_{g=1}^{N} \sum_{j=1}^{n_g} v_{gjm} ( d_{gj} + T_{gj} - E_{gj} ) - \alpha_m = \sum_{g=1}^{N} \sum_{j=1}^{n_g} ( ( w'_{gj} + v_{gjm} ) E_{gj} + ( w''_{gj} - v_{gjm} ) T_{gj} ) - \sum_{g=1}^{N} \sum_{j=1}^{n_g} v_{gjm} d_{gj} - \alpha_m$   (38)

The last two terms in (38) are constants, and hence the sub-problem for the last machine has the same structure as the single machine sequence-dependent group scheduling problem with total weighted earliness and tardiness penalties as the objective function, i.e., $1 | fmls, s_{hg} | \sum ( w'_j E_j + w''_j T_j )$, in which the earliness and tardiness penalties of each job are $( w'_{gj} + v_{gjm} )$ and $( w''_{gj} - v_{gjm} )$, respectively. This sub-problem has a non-regular objective, and finding its optimal solution is challenging. In Section 4.2, we discuss efficient algorithms to solve this sub-problem. Finally, observe that $1 | fmls, s_{hg} | \sum w_j C_j$ is a special case of $1 | fmls, s_{hg} | \sum ( w'_j E_j + w''_j T_j )$ obtained by setting all the due dates equal to zero ($d_j = 0, \forall j$) and all the tardiness penalties equal to the completion time weights ($w''_j = w_j, \forall j$). So all the sub-problems have a similar structure and are special cases of $1 | fmls, s_{hg} | \sum ( w'_j E_j + w''_j T_j )$. It can be shown that all of the sub-problems are strongly NP-hard (Kan, 1976).

In SP$_2$ through SP$_{m-1}$ the coefficient of $C_{gji}$, and in SP$_m$ the coefficient of $T_{gj}$, in the objective function may be negative. Hence the sub-problems may be unbounded, which slows down the convergence of the column generation approach. To overcome this drawback, several dual cuts are incorporated into the dual of the master problem. These cuts ensure that all of the coefficients are non-negative.
To enforce the boundedness of the sub-problems, the following cuts should be added to the dual of the master problem:

$v_{gj(i+1)} - v_{gji} \ge 0$   $i = 2, \ldots, m - 1$; $g = 1, \ldots, N$; $j = 1, \ldots, n_g$   (39)

$w''_{gj} - v_{gjm} \ge 0$   $g = 1, \ldots, N$; $j = 1, \ldots, n_g$   (40)

Since several cuts are added to the dual model, the primal model now provides a lower bound on the LP relaxation of the original master problem. Incorporating these constraints into the dual model is equivalent to adding artificial variables $R_{gji}$ to the master problem. Finally, the master problem with artificial variables can be written as follows:

$\min Z = \sum_{k=1}^{K_m} \sum_{g=1}^{N} \sum_{j=1}^{n_g} \lambda_m^k ( w'_{gj} E_{gj}^k + w''_{gj} T_{gj}^k ) + \sum_{g=1}^{N} \sum_{j=1}^{n_g} w''_{gj} R_{gjm}$   (41)

Subject to:

$\sum_{k=1}^{K_2} \lambda_2^k C_{gj2}^k - \sum_{k=1}^{K_1} \lambda_1^k C_{gj1}^k + R_{gj2} \ge p_{gj2}$   $g = 1, \ldots, N$; $j = 1, \ldots, n_g$   (42)

$\sum_{k=1}^{K_i} \lambda_i^k C_{gji}^k - \sum_{k=1}^{K_{i-1}} \lambda_{i-1}^k C_{gj(i-1)}^k + R_{gji} - R_{gj(i-1)} \ge p_{gji}$   $i = 3, \ldots, m$; $g = 1, \ldots, N$; $j = 1, \ldots, n_g$   (43)

$\sum_{k=1}^{K_i} \lambda_i^k = 1$   $i = 1, \ldots, m$   (44)

$\lambda_i^k \ge 0$   $i = 1, \ldots, m$; $k = 1, \ldots, K_i$   (45)

$R_{gji} \ge 0$   $i = 2, \ldots, m$; $g = 1, \ldots, N$; $j = 1, \ldots, n_g$   (46)

4.2 Solving the sub-problems

As mentioned before, all the sub-problems are special cases of the $1 | fmls, s_{hg} | \sum ( w'_j E_j + w''_j T_j )$ problem. Keshavarz et al. (2015) model this problem as a graph $G = (V, E)$ and show that finding the optimal solution is equivalent to finding a constrained shortest path in the graph representation of the problem. They also propose an arc-time-indexed formulation and an efficient Lagrangian-based branch-and-bound algorithm that can be used to solve the sub-problems optimally, as well as a multi-start local search algorithm for solving the sub-problems heuristically, whose effectiveness they demonstrate. We use the algorithms developed by Keshavarz et al. (2015) to solve the sub-problems heuristically or optimally, or to find a valid lower bound on the optimal values of the sub-problems.
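As a concrete illustration of what these sub-problem solvers minimise, the reduced-cost objectives (36) and (37) can be evaluated for a candidate schedule as below. The function names and toy values are ours; (37) is written in the rearranged form (38), with earliness weight $w' + v$ and tardiness weight $w'' - v$.

```python
def reduced_cost_inner(C, v_i, v_next, alpha_i):
    """Pricing objective (36) for a machine i < m: each job's completion
    time C is weighted by v_{gj(i+1)} - v_{gji} (with v_{gj1} = 0)."""
    return sum((vn - vi) * c for c, vi, vn in zip(C, v_i, v_next)) - alpha_i

def reduced_cost_last(C, d, w_e, w_t, v_m, alpha_m):
    """Pricing objective (37) for the last machine, in the rearranged
    form (38): modified weights (w' + v) and (w'' - v), minus the constant
    terms sum(v * d) and alpha_m."""
    total = 0.0
    for c, dd, we, wt, v in zip(C, d, w_e, w_t, v_m):
        E, T = max(dd - c, 0), max(c - dd, 0)
        total += (we + v) * E + (wt - v) * T - v * dd
    return total - alpha_m
```

A column may enter the basis only if its reduced cost is negative; once the minima of all sub-problems are non-negative, the LP relaxation is optimal.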
Solving the sub-problems optimally is not necessary in the early iterations of the column generation approach. The multi-start local search algorithm (or any other heuristic or meta-heuristic algorithm) can be employed to solve the sub-problems heuristically and to find improving columns. Only when the heuristic algorithm fails to find an entering variable for all the sub-problems do the sub-problems need to be solved optimally. When the optimal objective values of all the sub-problems become non-negative, the column generation process terminates and the optimal LP relaxation value is reached.

4.3 Branching

After solving the master problem using the column generation approach, there is no guarantee that the values of $\lambda_i^k$ are integer. To improve the obtained lower bound value, Desrosiers et al. (1995) and Barnhart et al. (1994) suggest branching on the variables of the original model. Due to the importance of the group sequence variables $U_{hg}$ in this research, these variables are selected for branching. If branching occurs on variable $U_{hg}$, in one of the new child nodes group $h$ is processed immediately before group $g$, and in the other node only schedules in which group $h$ is not processed immediately before group $g$ are accepted. An index is calculated for each pair of groups to determine the best variable $U_{hg}$ for branching. If, after solving the LP relaxation of the master problem, group $h$ is processed immediately before group $g$ in most of the columns, or is not processed immediately before group $g$ in most of the columns, then branching on $U_{hg}$ does not change the lower bound noticeably. The selected groups should be those for which determining the proper value of $U_{hg}$ is challenging. The index is defined as the summation of the $\lambda_i^k$s whose corresponding schedules process group $h$ immediately before group $g$. Since $\sum_{i=1}^{m} \sum_{k=1}^{K_i} \lambda_i^k = m$, the range of this index is from 0 to $m$.
When the index is close to $m$, almost all of the schedules process group $h$ immediately before group $g$; a value close to 0 shows that group $h$ is not processed immediately before group $g$ in most of the schedules. The most challenging pairs are those whose index is closest to $m/2$. So the branching index is calculated for each pair of groups, and the pair whose index is closest to $m/2$ is chosen for branching.

For large sized instances, the branching process needs a considerable amount of computational effort and time, so a time limit of 30 minutes is imposed on the execution of the lower bounding algorithm. If the problem is solved before reaching the time limit, the solution is a valid lower bound for the research problem. However, if after 30 minutes the B&P algorithm has not fathomed all of the nodes, the minimum lower bound over the nodes that have been solved but whose child nodes have not yet been fathomed is reported as the lower bound of the problem.

5 Computational results

In this paper, a mixed integer linear programming model, four meta-heuristic algorithms, nine heuristic algorithms for generating initial solutions, and a lower bounding method are proposed for the $Fm | fmls, s_{hgi}, prmu | \sum ( w'_j E_j + w''_j T_j )$ problem. To evaluate the performance of the proposed algorithms, the test instances for the FGSD problems generated by Salmasi et al. (2010) are employed. This problem set has been used in several related studies. The test instances are for two, three, and six machine FGSD problems. The number of groups varies from two to 16 in three different categories: small (2 to 5 groups), medium (6 to 10 groups), and large (11 to 16 groups). There are between two and ten jobs in each group. The test problems are generated by considering different ratios of setup times for groups on consecutive machines.
Interested readers can find the details of the test problems in Salmasi et al. (2010). Since our research is on minimising weighted earliness and tardiness, we also need to generate due dates and earliness and tardiness penalties for the jobs. The earliness and tardiness penalties are drawn from a discrete uniform distribution with lower and upper limits 1 and 5. The due date of a job is drawn from a uniform distribution on [0.3Cmax, 1.7Cmax] and [0.1Cmax, 0.9Cmax] for loose and tight due dates, respectively, where Cmax is the best known solution of the $Fm | fmls, s_{hgi}, prmu | C_{max}$ problem. These specifications are used to generate the test problems. Then, each test problem is solved by the four proposed meta-heuristic algorithms. The different versions of the HPSO algorithm are coded in Microsoft Visual C++ 2008. ILOG CPLEX (version 12.7) Concert Technology is used to code the proposed B&P algorithm. All the computational experiments are run on a personal computer with a 2.4 GHz CPU and 2 GB RAM. To compare the performance of the proposed meta-heuristic algorithms statistically, a single-factor (algorithm effect) experiment with four levels (number of algorithms) is performed. Statistical Analysis System (SAS) release 9.1 is used to code the experimental design for finding the best proposed HPSO algorithm. As mentioned before, the four different versions of the HPSO algorithm are based on four different neighbourhood search mechanisms. The significance level is set to 5% in this experiment. The time limit for solving problems with fewer than 19 jobs, between 19 and 50 jobs, and between 51 and 100 jobs by each algorithm is set to 30, 60, and 120 seconds, respectively. The results of the experiment are summarised in Appendix A for two, three, and six machine problems with loose and tight due dates, separately.
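The due date and penalty generation described above can be sketched as follows; the function and argument names are ours, and `cmax` stands for the best known makespan of the instance.

```python
import random

def draw_job_data(cmax, tight, rng):
    """Draw one job's data: earliness/tardiness penalties from the discrete
    uniform distribution {1, ..., 5}, and a due date uniform on
    [0.1*Cmax, 0.9*Cmax] (tight) or [0.3*Cmax, 1.7*Cmax] (loose)."""
    w_e = rng.randint(1, 5)
    w_t = rng.randint(1, 5)
    lo, hi = (0.1, 0.9) if tight else (0.3, 1.7)
    d = rng.uniform(lo * cmax, hi * cmax)
    return w_e, w_t, d

# example: five jobs of a tight-due-date instance with Cmax = 100
rng = random.Random(0)
jobs = [draw_job_data(100.0, tight=True, rng=rng) for _ in range(5)]
```

Passing an explicit `random.Random` seed keeps the generated instances reproducible across runs.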
The P-values are less than 0.05 for the two, three, and six machine problems; hence there is a significant difference among the performances of the algorithms. The Tukey test is performed to locate the differences; its results are presented in Appendix B. The results show that HPSOII has the worst performance and that there is no significant difference among the performances of HPSOI, HPSOIII, and HPSOIV. Thus, in order to compare these algorithms based on the time spent to find the best solution, the computational times are reported in Table 2.

Table 2 The average CPU time (seconds) for different algorithms

Loose due date
            2-Machine   3-Machine   6-Machine   Average
HPSOI       30.48       27.42       25.56       27.82
HPSOII      50.31       37.40       55.26       47.66
HPSOIII     41.94       36.11       36.94       38.33
HPSOIV      26.94       27.75       26.61       27.10

Tight due date
            2-Machine   3-Machine   6-Machine   Average
HPSOI       22.17       21.14       22.44       21.92
HPSOII      47.43       40.12       61.06       49.53
HPSOIII     39.07       20.67       35.13       31.62
HPSOIV      25.13       34.22       23.17       27.50

As can be seen from Table 2, the average time to find the best solution by HPSOI is less than that of the other algorithms, so it can be concluded that HPSOI has the best performance. In Section 3.1, several heuristic algorithms are proposed; these heuristic algorithms are used to generate the initial solutions of the HPSO. To compare the performance of these heuristic algorithms, a single-factor experiment with nine levels (number of heuristic algorithms) is performed. All the instances are solved using the nine initial solution generating mechanisms, and a significance level of 5% is used in this experiment. The results of the analysis of variance (ANOVA) and Tukey comparisons are presented in Appendix C. The P-value is less than 0.0001, so it can be concluded that there is a significant difference between the performances of the heuristic algorithms.
Based on the results of Tukey's Studentised Range (HSD) test, initial solution 4 has the best performance; in other words, the heuristic that uses the EDD rule to sequence the jobs and the LTER rule to sequence the groups is the best. Initial solutions 3 and 5 rank next.

The mathematical model presented in Section 2 is used to solve the small sized instances optimally; CPLEX is used to solve the mathematical model. The results are presented in Tables 3 and 4 for the problems with loose and tight due dates, respectively. In these tables, the second column reports the number of small sized instances that are solved optimally. Columns 3 and 4 report the size of the largest solved instance. Column 5 reports the average percentage gap between HPSOI, as the best meta-heuristic algorithm, and the optimal solution (CPLEX). The last column reports the average percentage gap between the lower bound (LB) obtained by the B&P algorithm and the optimal solution. From Tables 3 and 4 it can be seen that the average percentage gap of HPSOI is around 1% and the average gap between the lower bound and the optimal solution is around 5% for the small sized instances. This shows the validity and effectiveness of the proposed upper and lower bounding methods for the research problem.

Table 3 Comparing the optimal solution with HPSOI and the lower bound for small sized instances with loose due date

            No.      Largest solved          100(HPSOI − CPLEX)   100(CPLEX − LB)
            solved   No. group   No. job     / CPLEX              / CPLEX
2-Machine   17       5           40          0.46                 0.00
3-Machine   44       5           36          1.05                 0.01
6-Machine   16       5           25          1.19                 0.33
Average                                      0.95                 0.07

Table 4 Comparing the optimal solution with HPSOI and the lower bound for small sized instances with tight due date
            No.      Largest solved          100(HPSOI − CPLEX)   100(CPLEX − LB)
            solved   No. group   No. job     / CPLEX              / CPLEX
2-Machine   10       4           22          0.00                 3.19
3-Machine   37       5           26          0.11                 4.05
6-Machine   13       4           21          0.08                 5.40
Average                                      0.09                 4.20

To evaluate the performance of HPSOI, the average percentage gap between the solution of HPSOI and the lower bound obtained by the B&P method is presented in Table 5. The gap is calculated for each instance by the following formula:

Gap = 100 × (HPSOI − LB) / LB   (47)

The results show that the average percentage gaps for the problems with loose and tight due dates are 1.39% and 11.99%, respectively. This shows that both HPSOI, as the upper bound, and the lower bounding method are efficient, especially for the problems with loose due dates. It can also be concluded that the problems with tight due dates are more difficult and need more computational effort.

Table 5 Average percentage gap based on the group size and loose/tight due date

                           Loose due date                Tight due date
No. machines   Group size  No. test       Average gap    No. test       Average gap
                           problems                      problems
2-Machine      Small       18             1.18           18             5.07
               Medium      18             1.34           18             8.22
               Large       18             1.25           18             14.91
               Total       54             1.26           54             9.40
3-Machine      Small       54             1.05           54             6.73
               Medium      54             1.14           54             13.56
               Large       54             1.87           54             14.40
               Total       162            1.35           162            11.56
6-Machine      Small       18             1.29           18             8.43
               Medium      18             1.43           18             18.80
               Large       18             2.16           18             20.44
               Total       54             1.63           54             15.89
Overall                                   1.39                          11.99

6 Conclusions

Minimisation of weighted earliness and tardiness in the FGSD problem with sequence-dependent setup times has been investigated for the first time in this research. The problem is more challenging than the ones available in the FGSD literature, because unforced idleness may be required between the processing of consecutive jobs. Our main contributions in this research are as follows:

• A mixed integer linear programming model has been proposed to solve the research problem optimally for the first time.
• A timing algorithm has been developed to determine the optimal schedule for a given and fixed sequence of jobs and groups for the first time. This algorithm can be utilised in heuristic and meta-heuristic approaches for the research problem.

• A meta-heuristic algorithm based on the PSO algorithm has been developed that finds near optimal solutions for the research problem.

• A lower bounding method based on the branch-and-price algorithm has been proposed. Efficient algorithms have also been proposed to solve the sub-problems with a non-regular objective function.

Computational experiments have been performed using randomly generated test instances to evaluate the efficiency of the proposed algorithms. The results demonstrate that the proposed meta-heuristic algorithm and the lower bounding method are quite effective, especially for the problems with loose due dates. The average percentage gaps between the upper and lower bounds for the problems with loose and tight due dates are 1.39% and 11.99%, respectively. The results also show that the heuristic that uses the EDD rule to sequence the jobs and the LTER rule to sequence the groups has the best performance. Due to the industrial importance of group scheduling and earliness/tardiness minimisation, further research can be performed on developing more efficient upper and lower bounding algorithms. Extending the techniques proposed in this research to other machine environments, e.g., the flexible flowshop, is another possible future research topic.

References

Allahverdi, A. (2015) ‘The third comprehensive survey on scheduling problems with setup times/costs’, European Journal of Operational Research, Vol. 246, No. 2, pp.345–378.

Arabameri, S. and Salmasi, N. (2013) ‘Minimization of weighted earliness and tardiness for no-wait sequence-dependent setup times flowshop scheduling problem’, Computers and Industrial Engineering, Vol. 64, No. 4, pp.902–916.

Barnhart, C., Hane, C.A., Johnson, E.L. and Sigismondi, G.
(1994) ‘A column generation and partitioning approach for multi-commodity flow problems’, Telecommunication Systems, Vol. 3, No. 3, pp.239–258.

Barnhart, C., Johnson, E.L., Nemhauser, G.L., Savelsbergh, M.W.P. and Vance, P.H. (1998) ‘Branch-and-price: column generation for solving huge integer programs’, Operations Research, Vol. 46, No. 3, pp.316–329.

Costa, A., Cappadonna, F.A. and Fichera, S. (2014) ‘A hybrid metaheuristic approach for minimizing the total flow time in a flow shop sequence dependent group scheduling problem’, Algorithms, Vol. 7, No. 3, pp.376–396.

Desrosiers, J., Dumas, Y., Solomon, M.M. and Soumis, F. (1995) ‘Time constrained routing and scheduling’, in Ball, M.O., Magnanti, T.L., Monma, C.L. and Nemhauser, G.L. (Eds.): Handbooks in Operations Research and Management Science, Vol. 8, pp.35–139, Elsevier, Amsterdam.

Eberhart, R.C. and Kennedy, J. (1995) ‘A new optimizer using particle swarm theory’, in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, New York, NY, Vol. 1, pp.39–43.

Franca, P.M., Gupta, J.N.D., Mendes, A.S., Moscato, P. and Veltink, K.J. (2005) ‘Evolutionary algorithms for scheduling a flowshop manufacturing cell with sequence dependent family setups’, Computers and Industrial Engineering, Vol. 48, No. 3, pp.491–506.

Hajinejad, D., Salmasi, N. and Mokhtari, R. (2011) ‘A fast hybrid particle swarm optimization algorithm for flowshop sequence dependent group scheduling problem’, Scientia Iranica, Vol. 18, No. 3, pp.759–764.

Hendel, Y. and Sourd, F. (2007) ‘An improved earliness-tardiness timing algorithm’, Computers and Operations Research, Vol. 34, No. 10, pp.2931–2938.

Hendizadeh, H.S., Faramarzi, H., Mansouri, S.A., Gupta, J.N. and ElMekkawy, T.Y. (2008) ‘Meta-heuristics for scheduling a flowline manufacturing cell with sequence dependent family setup times’, International Journal of Production Economics, Vol. 111, No. 2, pp.593–605.

Kan, A.H.G.R.
(1976) Machine Scheduling Problems: Classification, Complexity and Computations, Stenfert Kroese, Boston. Keshavarz, T. and Salmasi, N. (2014) ‘Efficient upper and lower bounding methods for flowshop sequence-dependent group scheduling problems’, European Journal of Industrial Engineering, Vol. 8, No. 3, pp.366–387. Keshavarz, T., Savelsbergh, M. and Salmasi, N. (2015) ‘A branch-and-bound algorithm for the single machine sequence-dependent group scheduling problem with earliness and tardiness penalties’, Applied Mathematical Modelling, Vol. 39, No. 20, pp.6410–6424. Kuo, I-H., Horng, S-J., Kao, T-W., Lin, T-L., Lee, C-L., Terano, T. and Pan, Y. (2009) ‘An efficient flow-shop scheduling algorithm based on a hybrid particle swarm optimization model’, Expert Systems with Applications, Vol. 36, No. 3, pp.7027–7032. Lu, D. and Logendran, R. (2013) ‘Bi-criteria group scheduling with sequence-dependent setup time in a flow shop’, Journal of the Operational Research Society, Vol. 64, No. 4, pp.530–546. Naderi, B. and Salmasi, N. (2012) ‘Permutation flowshops in group scheduling with sequence-dependent setup times’, European J. Industrial Engineering, Vol. 6, No. 2, pp.177–198. Neufeld, J.S., Gupta, J.N. and Buscher, U. (2016) ‘A comprehensive review of flowshop group scheduling literature’, Computers and Operations Research, Vol. 70, No. 6, pp.56–74. Pinedo, M.L. (2012) Scheduling: Theory, Algorithms and Systems, Springer Verlag, New York. Ratnaweera, A., Halgamuge, S.K. and Watson, H.C. (2004) ‘Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients’, IEEE Transactions on Evolutionary Computation, Vol. 8, No. 3, pp.240–255. Salmasi, N., Logendran, R. and Skandari, M.R. (2010) ‘Total flow time minimization in a flowshop sequence-dependent group scheduling problem’, Computers and Operations Research, Vol. 37, No. 1, pp.199–212. Salmasi, N., Logendran, R. and Skandari, M.R. 
(2011) ‘Makespan minimization of a flowshop sequence-dependent group scheduling problem’, The International Journal of Advanced Manufacturing Technology, Vol. 56, Nos. 5–8, pp.699–710.

Savelsbergh, M. (1997) ‘A branch-and-price algorithm for the generalized assignment problem’, Operations Research, Vol. 45, No. 6, pp.831–841.

Schaller, J.E., Gupta, J.N. and Vakharia, A.J. (2000) ‘Scheduling a flowline manufacturing cell with sequence dependent family setup times’, European Journal of Operational Research, Vol. 125, No. 2, pp.324–339.

Shi, Y. and Eberhart, R.C. (1999) ‘Empirical study of particle swarm optimization’, in Proceedings of the 1999 Congress on Evolutionary Computation, CEC 99, IEEE, Vol. 3.

Tadayon, B. and Salmasi, N. (2013) ‘A two-criteria objective function flexible flowshop scheduling problem with machine eligibility constraint’, The International Journal of Advanced Manufacturing Technology, Vol. 64, Nos. 5–8, pp.1001–1015.

Appendix A

The ANOVA tables for two, three, and six-machine problems with loose and tight due date

Table A1 The ANOVA table for two-machine problems with loose due date

Source       Degree of freedom   Sum of square   Mean of square   F-value   P value
Algorithms   3                   8.999E+05       3.000E+05        4.82      0.003
Blocks       53                  2.713E+11       5.118E+09
Error        159                 9.889E+06       6.220E+04
Total        215                 2.713E+11

Table A2 The ANOVA table for two-machine problems with tight due date

Source       Degree of freedom   Sum of square   Mean of square   F-value   P value
Algorithms   3                   7.842E+05       2.614E+05        3.75      0.0123
Blocks       53                  1.334E+11       2.518E+09
Error        159                 1.109E+07       6.973E+04
Total        215                 1.335E+11

Table A3 The ANOVA table for three-machine problems with loose due date

Source       Degree of freedom   Sum of square   Mean of square   F-value   P value
Algorithms   3                   4.450E+06       1.483E+06        20.03     <.0001
Blocks       161                 1.040E+12       6.457E+09
Error        483                 3.577E+07       7.406E+04
Total        647                 1.040E+12

Table A4 The ANOVA table for three-machine problems with tight due date
Source       Degree of freedom   Sum of square   Mean of square   F-value   P value
Algorithms   3                   5.093E+05       1.698E+05        2.67      0.0469
Blocks       161                 4.837E+11       3.004E+09
Error        483                 3.068E+07       6.353E+04
Total        647                 4.837E+11

Table A5 The ANOVA table for six-machine problems with loose due date

Source       Degree of freedom   Sum of square   Mean of square   F-value   P value
Algorithms   3                   4.833E+06       1.611E+06        5.01      0.0024
Blocks       53                  4.754E+12       8.969E+10
Error        159                 5.114E+07       3.216E+05
Total        215                 4.754E+12

Table A6 The ANOVA table for six-machine problems with tight due date

Source       Degree of freedom   Sum of square   Mean of square   F-value   P value
Algorithms   3                   1.320E+07       4.401E+06        4.17      0.0072
Blocks       53                  1.903E+12       3.591E+10
Error        159                 1.680E+08       1.057E+06
Total        215                 1.904E+12

Appendix B

The results of the Tukey simultaneous 95% CIs for two, three, and six-machine problems with loose and tight due date

Figure B1 Tukey simultaneous 95% CIs for two-machine problems with loose due date: differences of means for HPSOI, HPSOII, HPSOIII and HPSOIV (see online version for colours)
Note: If an interval does not contain zero, the corresponding means are significantly different.

Figure B2 Tukey simultaneous 95% CIs for two-machine problems with tight due date: differences of means for HPSOI, HPSOII, HPSOIII and HPSOIV (see online version for colours)
Note: If an interval does not contain zero, the corresponding means are significantly different.
Figure B3 Tukey simultaneous 95% CIs for three-machine problems with loose due date: differences of means for HPSOI, HPSOII, HPSOIII and HPSOIV (see online version for colours)
Note: If an interval does not contain zero, the corresponding means are significantly different.

Figure B4 Tukey simultaneous 95% CIs for three-machine problems with tight due date: differences of means for HPSOI, HPSOII, HPSOIII and HPSOIV (see online version for colours)
Note: If an interval does not contain zero, the corresponding means are significantly different.

Figure B5 Tukey simultaneous 95% CIs for six-machine problems with loose due date: differences of means for HPSOI, HPSOII, HPSOIII and HPSOIV (see online version for colours)
Note: If an interval does not contain zero, the corresponding means are significantly different.

Figure B6 Tukey simultaneous 95% CIs for six-machine problems with tight due date: differences of means for HPSOI, HPSOII, HPSOIII and HPSOIV (see online version for colours)
Note: If an interval does not contain zero, the corresponding means are significantly different.
Appendix C

The ANOVA and Tukey test results for comparing the initial solutions

Table C1 The ANOVA table for comparing the initial solutions

Source              Degree of freedom   Sum of square   Mean of square   F-value   P value
Initial solutions   8                   2.763E+11       3.454E+10        20.17     <.0001
Blocks              540                 6.671E+14       1.235E+12
Error               4320                7.396E+12       1.712E+09
Total               4868                6.748E+14

Figure C1 Tukey simultaneous 95% CIs for the pairwise comparisons of the nine initial solutions (see online version for colours)
Note: If an interval does not contain zero, the corresponding means are significantly different.

Table C2 Tukey's studentised range (HSD) test for comparing the initial solutions

Alpha                                   0.05
Error degrees of freedom                4320
Error mean square                       1,712,100,000
Critical value of studentised range     4.38
Minimum significant difference          7,807.4

Means with the same letter are not significantly different.

Tukey grouping   Means    N     Algorithms
A                81,609   541   Initial solution 6
A                80,948   541   Initial solution 9
A                79,858   541   Initial solution 8
B A              79,315   541   Initial solution 2
B A              77,789   541   Initial solution 1
B A              77,288   541   Initial solution 7
B                71,932   541   Initial solution 5
C                63,245   541   Initial solution 3
C                59,837   541   Initial solution 4