© KU Leuven

Numerical linear programming under non-probabilistic uncertainty models—interval & fuzzy sets

Keivan Shariatmadar
Mechanical Engineering Technology TC, KU Leuven, Campus Bruges, Spoorwegstraat 12, Sint-Michiels, 8200, Belgium
[email protected]

Mark Versteyhe
Faculty of Engineering Technology TC, KU Leuven, Campus Bruges, Spoorwegstraat 12, Sint-Michiels, 8200, Belgium
[email protected]

Received (Nov 11, 2017); Revised (Jan 20, 2019); Accepted (May 11, 2019)

This paper considers a linear optimisation problem under uncertainty in which at least one element is modelled non-probabilistically. The uncertainty is expressed in the coefficient matrices of the constraints and/or in the coefficients of the goal function. Previous work eliminates the uncertainty by converting the linear programming under uncertainty problem to a decision problem, using imprecise probability and imprecise decision theory, and thereby reduces it to classical (linear) optimisation problems. Our aim here is to generalise this approach numerically and to present three methods to calculate the solution. We investigate what numerical results can be obtained for interval and fuzzy uncertainty models, compare them to the classical probabilistic case, and do so for two different optimality criteria: maximinity and maximality. We also provide an efficient method to calculate the maximal solutions in the fuzzy set model. A numerical example illustrates the results.

Keywords: Imprecise probability; linear programming; imprecise decision theory; maximinity & maximality.

1. Introduction

One of the important fields in optimisation is Linear Programming (LP), or linear optimisation (17, 10). Many problems in Operations Research (OR) (1) can be expressed as LP problems. LP is heavily used in company management and industry—e.g., planning, production, transportation and technology—wherever companies want to maximise profit or minimise cost over a constrained domain. Applications of linear optimisation have also been developed for many kinds of problems in urban and policy analysis, e.g., education, law enforcement, health, energy planning and environmental quality; see, e.g., (3, 11, 25, 31). From the very beginning of the application of optimisation to these problems, it was recognised that the analysis of natural and technological systems is almost
always confronted with uncertainty (22, 13, 16). A linear programming under uncertainty (LPUU) problem is a generalisation of an LP problem in which at least one of the parameters (coefficients) is not precisely known: at least one element of the coefficient matrices is uncertain or unknown—it is not deterministic, its exact value is not available, or it is not known with certainty. For instance, the only information about a coefficient may be its bounds, i.e., the lower and upper values of an interval. In this paper, interval and fuzzy set models of the uncertain coefficients are considered, which makes the problem more difficult but also more interesting. The challenge lies in optimising an objective function over an unknown domain: because the constraint values or the goal-function coefficients are uncertain, we usually do not even know whether the problem is feasible. In other words, maximising a linear function over an unknown set (an unknown feasibility space) is not a well-posed problem. In previous work (23, 19, 20), analytical methodologies and solutions were discussed for eliminating the uncertainties from LPUU problems; in this paper, we discuss the implementation of those theories and some numerical solutions. LPUU problems have many applications; one is given by Dantzig in (5, Example 2), finding the minimum expected cost diet in a nutrition problem. Further examples of optimisation under uncertainty include generation of electrical power, operation of reservoirs, inventory management, portfolio selection, facility planning, pollution control, stabilisation of mechanisms, and analysis of biological systems; see, e.g., (21).
Another classical way to solve LP problems under interval uncertainty is interval arithmetic; see, e.g., (18). We discuss the potential of our method in Section 4 and compare it with other related alternative works.

1.1. Mathematical overview of LP problems

A canonical LP problem is expressed in the following form (17):

maximise c^T x such that Ax ≤ b, x ≥ 0  (1)

where x ∈ R^n is the optimisation variable, A ∈ R^{m×n} is the coefficient matrix, c ∈ R^n is the objective function coefficient vector and b ∈ R^m is the constraint vector.

1.2. Mathematical overview of LPUU problems

In (23, 19, 20) we considered the case in which (elements of) the matrices A and/or b are unknown; there were uncertainties in the elements of those matrices, and the uncertainty models were assumed to be given as interval or fuzzy set models. In this paper, we consider the following generic uncertain linear programming problem:

maximise U^T x such that Y x ≤ Z, x ≥ 0  (2)

where x ∈ R^n is a vector of optimisation variables x_j, U is a random vector taking values u ∈ R^n, and Y and Z are random variables taking values y ∈ R^{m×n} and z ∈ R^m, respectively§. The generic form (2) will be discussed in Section 3 for the interval uncertainty model only. In Section 2, for simplicity, the objective function coefficient vector is assumed known, i.e., U := c, and the LPUU problem is given in the following canonical form:

maximise ∑_{j=1}^n c_j x_j such that ∑_{j=1}^n y_ij x_j ≤ z_i, x_j ≥ 0, ∀i = 1, 2, …, m  (3)

where x ∈ R^n is a vector of optimisation variables x_j, c ∈ R^n is the objective function coefficient vector with components c_j, and Y and Z are random variables taking values y ∈ R^{m×n} and z ∈ R^m, respectively.

The paper is organised as follows. In Section 2, the analytical results on the LPUU problem are given.
A short overview and definitions of coherent lower and upper previsions†, as well as the imprecise decision theory, are described in Appendix A. In Section 3, numerically implementable solutions to the theoretical results are discussed, together with solutions of LPUU problems‡. Section 4 compares the approach of this paper with related work and runs a numerical example to show the computational effort. Section 5 concludes.

2. Overview and definitions of the theoretical framework

This section gives the reader a quick view of the most important concepts of imprecise probability; for more details, we suggest the reference book (27) as well as Appendix A. To make the concept of coherent lower previsions in imprecise probability theory more understandable, a short description is given in Section A.1. In Section A.2, the concept of imprecise decision theory is explained. In the next three sections, we give maximin and maximal solutions to problem (3) for three separate uncertainty models—linear previsions (or probability distributions), vacuous previsions (or intervals), and possibility distributions (or fuzzy sets).

2.1. Linear prevision model

Suppose that the uncertainty about the random variables Y and Z is given by cumulative distribution functions F_Y and F_Z respectively, which are (assumed to) be independent, and

§ We assume that the elements y_ij, z_i and u_j of Y, Z and U are independent. In this paper we work with the maximisation operator; since min U^T x = −max(−U^T x), all results and proofs also hold for the minimisation operator.
† For the general uncertainty models and a tool—expectation operators or probability measures for (Y, Z) in problem (3)—we direct the reader to (27).
The uncertainty models—probability measures, intervals and fuzzy sets—are all special cases of one uncertainty modelling framework: the theory of coherent lower previsions (and imprecise probability models in general); see, e.g., (27, 14).
‡ The difficulty of implementing the theoretical solutions is reduced to well-posed classical numerical solutions by two novel methods illustrated in this paper.

F_V is a joint distribution of the random variable V = (Y, Z); then we can find maximin and maximal solutions as follows.

2.1.1. Optimal solutions in the linear prevision case

In this model, because of property A.1, the maximin and maximal solutions coincide, and they can be found by simply replacing the lower probability P̲ in Equation (A.10) with a probability measure P:

argmax_{x≥0} (c^T x − L) P(Y x ≤ Z).  (4)

If Y is invertible, then we have:

argmax_{x≥0} (c^T x − L) P(Y x ≤ Z) = argmax_{x≥0} (c^T x − L)(1 − P(Y x ≰ Z)) = argmax_{x≥0} (c^T x − L)(1 − F_{Y⁻¹Z}(x))  (5)

where F is the cumulative distribution function.

2.2. Vacuous model

In this case, the uncertainty about V = (Y, Z) is modelled by the coherent lower prevision P̲_V on A := A × B ⊆ R^{m×n} × R^m, defined for a given g = g(V) on A as

P̲_V(g) := min_{v∈A} g(v), and equivalently P̄_V(g) := max_{v∈A} g(v),  (6)

where A := ∏_{k=1}^m ∏_{l=1}^n A_kl with A_kl := [y̲_kl, ȳ_kl], and B := ∏_{k=1}^m B_k with B_k := [z̲_k, z̄_k].

2.2.1. Maximin solutions in the vacuous case

By combining the vacuous prevision definitions in (6) with Equation (A.10), the maximin solutions become a classical linear programming problem:

argmax_{x≥0} P̲(G_x) = argmax_{x≥0} (c^T x − L) min_{(y,z)∈A} I_{yx≤z}(x) = argmax_{{x≥0 : ∀(y,z)∈A, yx≤z}} c^T x =: argmax_{x∈A̲} c^T x  (7)

where A̲ := ∩_{(y,z)∈A} {x ≥ 0 : yx ≤ z} is the inner feasibility space, calculated as follows:

A̲ := ∩_{(y,z)∈A} {x ∈ R^n_{≥0} : ∑_{l=1}^n y_kl x_l ≤ z_k, k = 1, 2, …, m}
  = {x ∈ R^n_{≥0} : (∀(y,z) ∈ A) ∑_{l=1}^n y_kl x_l ≤ z_k, k = 1, 2, …, m}
  = {x ∈ R^n_{≥0} : max_{y_kl∈A_kl} ∑_{l=1}^n y_kl x_l ≤ min_{z_k∈B_k} z_k, k = 1, 2, …, m}
  = {x ∈ R^n_{≥0} : ∑_{l=1}^n ȳ_kl x_l ≤ z̲_k, k = 1, 2, …, m}.
Thus, the solution in this case can be written as: with A̲ =: {x ∈ R^n_{≥0} : Ȳ x ≤ Z̲},  (8)

max_{x∈R^n} c^T x such that Ȳ x ≤ Z̲, x ≥ 0.  (9)

Indeed, if A̲ = ∅, then the problem is infeasible.

2.2.2. Maximal solutions in the vacuous case

We arrange the decisions x ≥ 0 in a (partial) order, so that a maximal x is not dominated by any other decision w ≥ 0. To do so, we divide the space into the quadrants x⁺ ∩ w⁺, x⁺ ∩ w⁻, x⁻ ∩ w⁺ and x⁻ ∩ w⁻, where

x⁺ := {(y,z) ∈ A : yx ≤ z, x ≥ 0},  x⁻ := {(y,z) ∈ A : yx ≰ z, x ≥ 0},
w⁺ := {(y,z) ∈ A : yw ≤ z, w ≥ 0},  w⁻ := {(y,z) ∈ A : yw ≰ z, w ≥ 0}.  (10)

By considering the maximality definition (A.11) and all (2⁴ − 1 = 15) possible non-empty cases of how A lies relative to these quadrants, we obtain an expression for P̄(G_x − G_w):

P̄(G_x − G_w) =
  c^T x − L,              if A ∩ x⁺ ∩ w⁻ ≠ ∅,
  0,                      if A ∩ x⁺ = ∅ ∧ A ∩ w⁻ ≠ ∅,
  max{0, c^T x − c^T w},  if A ∩ x⁺ ∩ w⁻ = ∅ ∧ A ∩ x⁺ ≠ ∅ ∧ A ∩ w⁻ ≠ ∅,
  c^T x − c^T w,          if A ∩ x⁺ ≠ ∅ ∧ A ∩ w⁻ = ∅,
  L − c^T w,              if A ∩ x⁺ = ∅ ∧ A ∩ w⁻ = ∅.

The first three cases are always non-negative, the fourth can be positive or negative, and the last is always negative. Therefore, only the last two cases can prevent x ≥ 0 from being maximal; see (23, 20) for more details:

inf_{w∈R^n} P̄(G_x − G_w) ≥ 0 ⇔ A̅ = ∅ ∨ (x ∈ A̅ ∧ c^T x ≥ max_{w∈A̲} c^T w = c^T x_m)  (11)

where x_m is the maximin solution and A̅ := ∪_{(y,z)∈A} {x ≥ 0 : yx ≤ z} is the outer feasibility space, calculated as follows:

A̅ := ∪_{(y,z)∈A} {x ∈ R^n_{≥0} : ∑_{l=1}^n y_kl x_l ≤ z_k, k = 1, 2, …, m}
  = {x ∈ R^n_{≥0} : (∃(y,z) ∈ A) ∑_{l=1}^n y_kl x_l ≤ z_k, k = 1, 2, …, m}
  = {x ∈ R^n_{≥0} : min_{y_kl∈A_kl} ∑_{l=1}^n y_kl x_l ≤ max_{z_k∈B_k} z_k, k = 1, 2, …, m}
  = {x ∈ R^n_{≥0} : ∑_{l=1}^n y̲_kl x_l ≤ z̄_k, k = 1, 2, …, m}
  =: {x ∈ R^n_{≥0} : Y̲ x ≤ Z̄}.  (12)
Therefore, if A̅ ≠ ∅, the maximal solutions become a classical feasibility problem:

{x ∈ R^n_{≥0} : x ∈ A̅ and c^T x ≥ c^T x_m} ≡ {x ∈ R^n_{≥0} : Y̲ x ≤ Z̄ and c^T x ≥ c^T x_m}.  (13)

One interesting property of these results is that the solutions under both criteria—maximinity and maximality—do not depend on L.

2.3. Possibility distributions

This model is more complex than the two models discussed above; the importance of the interpretation attached to the uncertainty models—discussed in Section A.1—becomes correspondingly higher. Consider the case in which the uncertainty about Y and Z is described by possibility distributions π_Y and π_Z respectively, and the joint possibility distribution is given by the minimum operator, π(y, z) := min{π_Y(y), π_Z(z)} on A := supp(π), where π_Y(y) := min_{1≤k≤m} min_{1≤l≤n} π_{Y_kl}(y_kl) and π_Z(z) := min_{1≤k≤m} π_{Z_k}(z_k). For a given gamble (functional) h = h(Y, Z) on A, from definition (A.1) the upper prevision P̄ is defined as

P̄_π(h) := ∫_0^1 sup{h(y, z) : π(y, z) ≥ t} dt  (14)

and the lower prevision P̲_π(h) is defined analogously by changing the sup to an inf; see, e.g., (7). As discussed in Section A.1, the necessity measure is conjugate to the possibility measure, i.e., all results and theories in this section hold for the necessity measure as well.

2.3.1. Maximin solutions for the possibility distribution model

By replacing the lower probability P̲ in (A.10) with the lower prevision P̲_π for the possibility distribution, the maximin solutions become a classical optimisation problem:

argmax_{x≥0} (c^T x − L) P̲_π(I_{Yx≤Z})  (15)

where, again, L must be chosen such that c^T x − L > 0. From (15) the maximin solution is given as follows:

argmax_{x∈R^n} (c^T x − L) ∫_0^1 inf_{π(y,z)≥t} I_{Yx≤Z}(y, z) dt
  = argmax_{x∈R^n} (c^T x − L) ∫_0^1 I_{π*_t}(x) dt
  = argmax_{x∈π*_t} (c^T x − L) ∫_{S̲(x)}^1 dt
  = argmax_{x∈π*_t} [1 − S̲(x)] (c^T x − L),  (16)

where S(x) := {t : x ∈ π*_t, x ≥ 0} and S̲(x) := inf S(x).
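Before continuing with the possibility model, note that the vacuous-case results (9) and (13) above admit a direct numerical sketch: one LP over the inner space (Ȳ, Z̲) gives the maximin solution, and membership of the maximal set is a linear test over the outer space (Y̲, Z̄). The sketch below uses SciPy's `linprog` and the interval data of the numerical example in Section 4.3; the helper name `is_maximal` is ours, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

# Interval (vacuous) model: y_kl in [Y_lo, Y_hi], z_k in [Z_lo, Z_hi].
Y_lo, Y_hi = np.array([[9.0, 7.0]]), np.array([[10.0, 8.0]])
Z_lo, Z_hi = np.array([11.0]), np.array([12.0])
c = np.array([1.0, 1.0])

# Maximin solution (9): LP over the inner space {x >= 0 : Y_hi x <= Z_lo}.
res = linprog(-c, A_ub=Y_hi, b_ub=Z_lo, bounds=[(0, None)] * 2)
x_m = res.x

def is_maximal(x, tol=1e-9):
    """Membership test for the maximal set (13):
    x lies in the outer space {x >= 0 : Y_lo x <= Z_hi} and c^T x >= c^T x_m."""
    return bool(np.all(Y_lo @ x <= Z_hi + tol) and c @ x >= c @ x_m - tol)

print(x_m, is_maximal(x_m), is_maximal(np.array([2.0, 0.0])))
```

Note that the maximal set is strictly larger than the maximin singleton: e.g., x = (0, 12/7) is maximal here although it is not the maximin solution.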
π*_t is the inner feasibility space at each level t: π*_t = {x ≥ 0 : yx ≤ z, ∀(y, z) ∈ π_t} = {x ≥ 0 : ȳ_t x ≤ z̲_t}. Because x is non-negative, we can find the inner feasibility space at level t by taking the corresponding upper bounds of the elements of y at level t—which we name ȳ_t—and the lower bounds of the elements of z—which we name z̲_t. Thus the maximin solutions in this case are, for every given t:

argmax_{{x≥0, ȳ_t x ≤ z̲_t}} [1 − S̲(x)] (c^T x − L).  (17)

Equivalently, the maximin solutions for the necessity measure are:

argmax_{{x≥0, y̲_t x ≤ z̄_t}} [1 − S̄(x)] (c^T x − L),  (18)

where S̄(x) := sup S(x) = sup{t : x ∈ π̄*_t, x ≥ 0} and π̄*_t is the outer feasibility space at each level t: π̄*_t = {x ≥ 0 : yx ≤ z, ∃(y, z) ∈ π_t} = {x ≥ 0 : y̲_t x ≤ z̄_t}. We will extend the framework to give maximal solutions for possibility distribution models in general, and we describe the numerical maximal solutions in Section 3.2.

3. Generalisation of the numerical solution of LPUU problems

So far, we have considered the case in which there is no uncertainty in the goal function. In this section we show, using a simple method—the hypograph form—that the uncertainty in the goal can be removed in optimisation problems under vacuous interval previsions. We consider an LP problem with uncertainty in the goal function as well:

max_{x∈R^n_{≥0}} U^T x such that Y x ≤ Z  (19)

where U is also a random variable taking values u in R^n. Using the hypograph form (see, e.g., (4)), we can reformulate it as the following LP problem with no uncertainty in the objective function:

max_{(x,x₀)∈(R^n_{≥0},R)} x₀ such that Y x ≤ Z, x₀ ≤ U^T x,

which in canonical form (splitting x₀ = x₀⁺ − x₀⁻) becomes

max_{(x,x₀⁺,x₀⁻)≥0} x₀⁺ − x₀⁻ such that Y x ≤ Z, x₀⁺ − x₀⁻ ≤ U^T x.  (20)

We show that when the random variables (Y, Z, U) are represented by vacuous interval previsions, the optimal solutions of problem (19) and of the hypograph form (20) are equivalent.

Theorem 1.
The optimal solutions of problem (19) and of the hypograph form (20)—under the interval uncertainty model—are equivalent.

Proof. First, the maximinity case. From (9), the maximin solution of problem (20) is:

max_{(x,x₀⁺,x₀⁻)∈(R^n_{≥0},R_{≥0},R_{≥0})} x₀⁺ − x₀⁻ such that Ȳ x ≤ Z̲, x₀⁺ − x₀⁻ ≤ U̲^T x.  (21)

With the factorisation property of P̲ and the maximin definition in (A.8), the solution of problem (19) can be written as:

max_{x≥0} P̲(G_x) = max_{x≥0} P̲((U^T x − L) I_{Yx≤Z} + L)
  = max_{x≥0} P̲(U^T x − L) P̲(Y x ≤ Z)
  = max_{x≥0} (P̲(U)^T x − L) P̲(Y x ≤ Z)
  = max_{x≥0} U̲^T x such that Ȳ x ≤ Z̲,  (22)

which means that the hypograph form of (22), via the hypograph definition in (20), is equivalent to (21).

Second, the maximality case. From the solution (13), we can directly give the maximal solutions of the hypograph-form problem (20), which form the following outer feasibility space:

{x ≥ 0 : Y̲ x ≤ Z̄ and Ū^T x ≥ x₀⁺ − x₀⁻ and x₀⁺ − x₀⁻ ≥ w₀⁺ − w₀⁻ | (w₀⁺ − w₀⁻ ≤ U̲^T w, Ȳ w ≤ Z̲)}
  = {x ≥ 0 : Y̲ x ≤ Z̄ and Ū^T x ≥ w₀⁺ − w₀⁻ | (w₀⁺ − w₀⁻ ≤ U̲^T w, Ȳ w ≤ Z̲)}
  ≡ {x ≥ 0 : Y̲ x ≤ Z̄ and Ū^T x ≥ U̲^T x_m},

where (x, w, x₀⁺, x₀⁻, w₀⁺, w₀⁻) ≥ 0. Now we show that this set is also the set of maximal solutions of problem (19). Once again, from the maximality and upper vacuous interval prevision definitions, (A.7) and (6) respectively, we have almost the same calculations and the same explicit expression for P̄(G_x − G_w) in all fifteen possible cases; it suffices to replace c^T by the appropriate bound of U^T, which is valid because x, w ≥ 0. Therefore, the set of maximal solutions is: A̅ = ∅ ∨ (x ∈ A̅ ∧ Ū^T x ≥ max_{w∈A̲} U̲^T w = U̲^T x_m) ≡ {x ≥ 0 : Y̲ x ≤ Z̄ and Ū^T x ≥ U̲^T x_m}. □

By converting the LPUU problem to a decision problem, we find the solution (a new LP problem or feasibility problem) from which the uncertainty has been eliminated (23, 19, 20). This new LP problem can be calculated numerically by any LP solver. In some cases—e.g.,
possibility distributions—these solutions are not easy to implement or to calculate theoretically. In the next two sections, we give two methods to calculate equation (15).

3.1. Maximin solution in the possibility distribution model

Because of the linearity of the goal function c^T x, when the possibility distribution is unimodal the resulting function is unimodal too. Hence, for instance, a bisection method finds the maximin solution in a finite number of steps, each step requiring one linear programming problem to be solved. In equation (15), the lower prevision can be simplified using the definition in (14) and equation (16). In one dimension (c^T = c ∈ R), when the possibility distribution is given by a (unimodal) normal membership function (the solid line in Fig. 1), the function [1 − S̲(x)](cx − L) is unimodal: because the goal function cx is linear, the membership function is shifted and squeezed (the dashed line), and the maximum can be found easily.

Fig. 1. Unimodal normal membership function and its shifted and squeezed counterpart.

3.1.1. Search method—possibility distribution maximin solution

Lemma 1. If a possibility (or necessity) distribution π on A is unimodal, then the function given in (15) is unimodal too.

Proof. Since the function c^T x − L > 0 is linear, it suffices to prove that the function x ↦ P̲_π(I_{Yx≤Z}) is unimodal. Consider a unimodal possibility distribution π, the corresponding level set π_t = {(y, z) ∈ A : π(y, z) ≥ t} at level t, and the corresponding set of x ≥ 0, denoted π*_t = {x ≥ 0 : yx ≤ z, ∀(y, z) ∈ π_t}. The level sets π_t are nested because the possibility distribution π is unimodal. In other words, a higher level t gives a smaller level set π_t, which means fewer restrictions on the coefficients y, z and hence a bigger feasibility space π*_t, i.e.,

∀t₁, t₂ ∈ [0, 1] : t₁ ≤ t₂ ⇒ π_{t₂} ⊆ π_{t₁} and π*_{t₁} ⊆ π*_{t₂}.
(23)

By using the definition in (14) we have:

P̲_π(I_{Yx≤Z}) = ∫_0^1 inf_{(y,z)∈π_t} I_{Yx≤Z}(y, z) dt =: ∫_0^1 I_{S(x)}(t) dt = ∫_{S̲(x)}^1 dt = 1 − S̲(x)  (24)

where S(x) := {t : x ∈ π*_t, 0 ≤ t ≤ 1} and S̲(x) := inf S(x). Since multiplying the linear function (c^T x − L) by a unimodal function again gives a unimodal function, we can easily find the maximum (the top) with a suitable search algorithm, e.g., the bisection method via a discretisation of the t-axis over [0, 1]. □

One approach to solving the optimisation problem (17) is to find the level t at which the maximin solutions are attained. We define this particular level as t := t_max and the maximin solutions as π*_{t_max} (again, the corresponding set of x ≥ 0 at level t_max). We find this level as follows.

Lemma 2. Let f(t) := max_{x∈π*_t} (c^T x − L)[1 − t]. Then the solution of (17) equals max_{t∈[0,1]} f(t).

Proof. Let M := max_{{x≥0 : ȳ_t x ≤ z̲_t, t∈[0,1]}} [1 − S̲(x)](c^T x − L). It is enough to prove max_{t∈[0,1]} f(t) = M. By the property (23) we have:

max_{t∈[0,1]} f(t) = max_{t∈[0,1]} max_{x∈π*_t} (c^T x − L)[1 − t] = max_{x∈π*_t, t∈[0,1]} (c^T x − L)[1 − t] = max_{{x≥0 : ȳ_t x ≤ z̲_t, 0≤t≤1}} [1 − S̲(x)](c^T x − L) = M,

since by (23), for fixed x the best level is t = S̲(x). In the necessity measure case the proof is similar, with M = min_{t∈[0,1]} f(t). □

Therefore, the maximin solution of a unimodal possibility distribution is:

argmax_{{x≥0 : ȳ_t x ≤ z̲_t, 0≤t≤1}} [1 − S̲(x)](c^T x − L).

And if the possibility distribution is not unimodal, then, by finding the highest level t̄_max and the lowest level t̲_max among the levels corresponding to maximin solutions—argmax_{t∈[0,1]} f(t), and in the necessity measure case argmin_{t∈[0,1]} f(t), in the outer feasibility space—the maximin solutions are given by the convex set

Co{argmax_{x∈π*_{t̄_max}} c^T x, argmax_{x∈π*_{t̲_max}} c^T x}.  (25)
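Lemma 2 suggests a direct numerical procedure: solve one LP per level t and search the unimodal function f. Below is a minimal sketch under the assumption of triangular possibility distributions (so the level-t inner bounds ȳ_t, z̲_t are affine in t), using the data of the numerical example in Section 4.3 and a plain grid over t in place of bisection; all names are ours.

```python
import numpy as np
from scipy.optimize import linprog

# Triangular possibility distributions (lo, mode, hi) for the Y entries and Z,
# as in the numerical example: Y1=(9,9.5,10), Y2=(7,7.5,8), Z=(11,11.5,12).
Y_lo, Y_md, Y_hi = np.array([[9.0, 7.0]]), np.array([[9.5, 7.5]]), np.array([[10.0, 8.0]])
Z_lo, Z_md, Z_hi = np.array([11.0]), np.array([11.5]), np.array([12.0])
c, L = np.array([1.0, 1.0]), -1.35

def f(t):
    """f(t) = max_{x in pi*_t} (c^T x - L)(1 - t): one LP per level t, built
    from the level-t upper bounds of Y and lower bounds of Z (inner space)."""
    Yt = Y_hi - t * (Y_hi - Y_md)   # upper alpha-cut bound of Y, affine in t
    Zt = Z_lo + t * (Z_md - Z_lo)   # lower alpha-cut bound of Z, affine in t
    res = linprog(-c, A_ub=Yt, b_ub=Zt, bounds=[(0, None)] * len(c))
    return ((-res.fun) - L) * (1.0 - t), res.x

ts = np.linspace(0.0, 1.0, 101)          # grid search; f is unimodal (Lemma 1)
vals = [f(t)[0] for t in ts]
t_max = ts[int(np.argmax(vals))]
print(t_max, f(t_max)[1])
```

Because f is unimodal (Lemma 1), the coarse grid can be replaced by bisection or golden-section search for higher precision at the same per-step cost of one LP.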
3.1.2. Approximation method—possibility distribution maximin solution

In the maximin solution (15), the lower prevision can be found approximately as

P̲_π(I_{Yx≤Z}) = ∫_0^1 inf{I_{Yx≤Z}(y, z) : π(y, z) ≥ t} dt =: ∫_0^1 I_{π*_t}(x) dt ≈ (1/d) ∑_{k=0}^d I_{π*_k}(x)

where d ∈ N is the number of levels (the discretisation), and the indicators are defined by

I_{π*_t}(x) := 1 if x ∈ π*_t and 0 otherwise,  I_{π*_k}(x) := 1 if x ∈ π*_k and 0 otherwise,

with π*_t := {x : yx ≤ z, ∀(y, z) ∈ π_t} = {x : ȳ_t x ≤ z̲_t} the smallest (inner) feasibility space, π*_k := {x : yx ≤ z, ∀(y, z) ∈ π_k} = {x : ȳ_k x ≤ z̲_k} for k = 0, 1, …, d, and π_k and π_t the level sets on A:

π_k := {(y, z) : π(y, z) ≥ k/d}, k = 0, 1, 2, …, d, and π_t := {(y, z) : π(y, z) ≥ t ∈ [0, 1]}.

Thus the numerical maximin solutions are:

argmax_{x≥0} (c^T x − L) P̲_π(I_{Yx≤Z}) = argmax_{x≥0} (c^T x − L) ∫_0^1 inf_{π(y,z)≥t} I_{Yx≤Z}(y, z) dt ≈ argmax_{x≥0} (c^T x − L) (1/d) ∑_{k=0}^d I_{π*_k}(x).

Similarly, for the necessity measure,

argmax_{x≥0} (c^T x − L) ∫_0^1 sup_{π(y,z)≥t} I_{Yx≤Z}(y, z) dt ≈ argmax_{x≥0} (c^T x − L) (1/d) ∑_{k=0}^d I_{π̄*_k}(x),

where π̄*_k := {x : yx ≤ z, ∃(y, z) ∈ π_k} = {x : y̲_k x ≤ z̄_k}, k = 0, 1, …, d.

3.2. Maximal solutions in the possibility distribution model

By combining the general maximal solution (A.11) with the upper prevision definition (A.1), the optimal x ≥ 0 are those for which

P̄(G_x − G_w) = inf(G_x − G_w) + ∫_{inf(G_x−G_w)}^{sup(G_x−G_w)} P̄({(y, z) ∈ A : G_x(y, z) − G_w(y, z) ≥ t}) dt ≥ 0  (26)

for all w ≥ 0. To calculate a more explicit expression, we first compute G_x − G_w on the four quadrants A = (x⁺ ∩ w⁺) ∪ (x⁺ ∩ w⁻) ∪ (x⁻ ∩ w⁺) ∪ (x⁻ ∩ w⁻), where

x⁺ ∩ w⁻ := {(y, z) ∈ A : yx ≤ z ∧ yw ≰ z},  (27)

with analogous definitions for the other three. For any (y, z) ∈ A, the expression G_x − G_w in equation (26) is calculated as follows:

G_x(y, z) − G_w(y, z) =
  c^T x − c^T w,  (y, z) ∈ x⁺ ∩ w⁺,
  c^T x − L,      (y, z) ∈ x⁺ ∩ w⁻,
  L − c^T w,      (y, z) ∈ x⁻ ∩ w⁺,
  0,              (y, z) ∈ x⁻ ∩ w⁻.
Because of the definition of L we know that c^T x − L > 0 is the largest value, L − c^T w < 0 is the smallest, and the sign of c^T x − c^T w depends on the order of c^T x and c^T w. So we have two cases. If c^T x ≥ c^T w, then:

{(y, z) ∈ A : G_x(y, z) − G_w(y, z) ≥ t} =
  x⁺ ∪ w⁻,  L − c^T w < t ≤ 0,
  x⁺,        0 < t ≤ c^T x − c^T w,
  x⁺ ∩ w⁻,  c^T x − c^T w < t ≤ c^T x − L,  (28)

and if c^T x ≤ c^T w, then:

{(y, z) ∈ A : G_x(y, z) − G_w(y, z) ≥ t} =
  x⁺ ∪ w⁻,  L − c^T w < t ≤ c^T x − c^T w,
  w⁻,        c^T x − c^T w < t ≤ 0,
  x⁺ ∩ w⁻,  0 < t ≤ c^T x − L.  (29)

In both cases (28) and (29), the integrand in (26) is piecewise constant, and therefore the integral is a weighted sum of three terms:

P̄((c^T x − L)I_{Yx≤Z} − (c^T w − L)I_{Yw≤Z}) = L − c^T w +
  (c^T w − L)P̄(x⁺ ∪ w⁻) + (c^T x − c^T w)P̄(x⁺) + (c^T w − L)P̄(x⁺ ∩ w⁻),  if c^T x ≥ c^T w,
  (c^T x − L)P̄(x⁺ ∪ w⁻) + (c^T w − c^T x)P̄(w⁻) + (c^T x − L)P̄(x⁺ ∩ w⁻),  if c^T x ≤ c^T w.  (30)

By the maxitivity property of the possibility distribution defined in Section A.1,

P̄(x⁺ ∪ w⁻) := P̄(Y x ≤ Z ∨ Y w ≰ Z) = max{P̄(Y x ≤ Z), P̄(Y w ≰ Z)},  (31)

therefore

P̄((c^T x − L)I_{Yx≤Z} − (c^T w − L)I_{Yw≤Z}) = L − c^T w +
  (c^T w − L) max{∫_0^1 sup_{π(y,z)≥t} I_{x⁺}(y, z) dt, ∫_0^1 sup_{π(y,z)≥t} I_{w⁻}(y, z) dt} + (c^T x − c^T w) ∫_0^1 sup_{π(y,z)≥t} I_{x⁺}(y, z) dt + (c^T w − L) ∫_0^1 sup_{π(y,z)≥t} I_{x⁺∩w⁻}(y, z) dt,  if c^T x ≥ c^T w,
  (c^T x − L) max{∫_0^1 sup_{π(y,z)≥t} I_{x⁺}(y, z) dt, ∫_0^1 sup_{π(y,z)≥t} I_{w⁻}(y, z) dt} + (c^T w − c^T x) ∫_0^1 sup_{π(y,z)≥t} I_{w⁻}(y, z) dt + (c^T x − L) ∫_0^1 sup_{π(y,z)≥t} I_{x⁺∩w⁻}(y, z) dt,  if c^T x ≤ c^T w,  (32)

where I_{x⁺∩w⁺} := I_{{(y,z):Yx≤Z ∧ Yw≤Z}}, I_{x⁺} := I_{{(y,z):Yx≤Z}} and I_{w⁻} := I_{{(y,z):Yw≰Z}} are indicator functionals. The maximal solutions x ≥ 0 are those that make expression (32) non-negative for all w ≥ 0. We give two methods to find the maximal solutions in the next two sections: (i) an approximation method and (ii) a more efficient (exact) method.
In method (i), we discretise the interval [0, 1] and approximate the integrals by sums, while in method (ii) we convert the (exact) maximal solutions (32) to a classical feasibility problem (from which the uncertainty is eliminated).

3.2.1. Approximation method—possibility distribution maximal solution

In this case, the integrals in (32) are approximated as follows:

∫_0^1 sup_{π(y,z)≥t} I_{x⁺∩w⁺}(y, z) dt = ∫_0^1 sup{I_{{yx≤z ∧ yw≤z}}(y, z) : π(y, z) ≥ t} dt =: ∫_0^1 I_{π_t^{x⁺w⁺}}(x, w) dt ≈ (1/d) ∑_{k=0}^d I_{π_k^{x⁺w⁺}}(x, w),  (33)

where the indicator function I_{π_t^{x⁺w⁺}}(x, w) is defined as

I_{π_t^{x⁺w⁺}}(x, w) := 1 if (x, w) ∈ π_t^{x⁺w⁺} := {(x, w) : yx ≤ z ∧ yw ≤ z, ∃(y, z) ∈ π_t}, and 0 otherwise,  (34)

and π_t = {(y, z) : π(y, z) ≥ t ∈ [0, 1]} is the level set on A. From the definition of the outer (inner) feasibility space and (34), I_{π_t^{x⁺w⁺}}(x, w) can be calculated as

I_{π_t^{x⁺w⁺}}(x, w) = 1 iff {(y, z) ∈ π_t : yx ≤ z ∧ yw ≤ z} ≠ ∅
  iff min_{y_il} ∑_{l=1}^n y_il x_l ≤ max_{z_i} z_i ∧ min_{y_il} ∑_{l=1}^n y_il w_l ≤ max_{z_i} z_i over the level-t cuts, i = 1, 2, …, m,
  iff ∑_{l=1}^n y̲_{il,t} x_l ≤ z̄_{i,t} ∧ ∑_{l=1}^n y̲_{il,t} w_l ≤ z̄_{i,t}, i = 1, 2, …, m,
  iff Y̲_t x ≤ Z̄_t ∧ Y̲_t w ≤ Z̄_t,

and 0 otherwise, where y̲_{il,t} and z̄_{i,t} are the lower and upper bounds of the level-t cuts of the coefficients. The approximation replaces t by k/d:

I_{π_t^{x⁺w⁺}}(x, w) ≈ I_{π_k^{x⁺w⁺}}(x, w) := 1 iff Y̲_k x ≤ Z̄_k ∧ Y̲_k w ≤ Z̄_k, and 0 otherwise,

where π_k = {(y, z) : π(y, z) ≥ k/d}, k = 0, 1, 2, …, d, are the level sets on A.
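The calculation above can be sketched directly: at each level the indicator reduces to a linear feasibility check against the outer cut bounds. A minimal sketch under triangular possibility distributions (same data as the running example; we use a left Riemann sum over k = 0, …, d−1 rather than the paper's k = 0, …, d; the function name is ours):

```python
import numpy as np

# Triangular bounds (illustrative data, as in the numerical example).
Y_lo, Y_md = np.array([[9.0, 7.0]]), np.array([[9.5, 7.5]])
Z_md, Z_hi = np.array([11.5]), np.array([12.0])

def upper_prob_x_plus_w_plus(x, w, d=100):
    """Discretised integral (33): average of the indicators I_{pi_k^{x+w+}}(x, w),
    where the indicator is 1 iff Y_lo,k x <= Z_hi,k and Y_lo,k w <= Z_hi,k
    (the outer bounds of the level-(k/d) cut)."""
    total = 0
    for k in range(d):                  # left Riemann sum over [0, 1]
        t = k / d
        Yk = Y_lo + t * (Y_md - Y_lo)   # lower alpha-cut bound of Y
        Zk = Z_hi - t * (Z_hi - Z_md)   # upper alpha-cut bound of Z
        if np.all(Yk @ x <= Zk) and np.all(Yk @ w <= Zk):
            total += 1
    return total / d

print(upper_prob_x_plus_w_plus(np.array([0.0, 1.0]), np.array([1.3, 0.0])))
```

The same loop, with the feasibility test swapped for the `x⁺`, `w⁻` or `x⁺ ∩ w⁻` condition, approximates the other integrals appearing in (32).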
We can do the same calculation for the other two integrals in (32); therefore, the numerical† maximal solutions satisfy

P̄((c^T x − L)I_{Yx≤Z} − (c^T w − L)I_{Yw≤Z}) ≈ L − c^T w +
  ((c^T w − L)/d) max{∑_{k=0}^d I_{π_k^{x⁺}}(x), ∑_{k=0}^d I_{π_k^{w⁻}}(w)} + ((c^T x − c^T w)/d) ∑_{k=0}^d I_{π_k^{x⁺}}(x) + ((c^T w − L)/d) ∑_{k=0}^d I_{π_k^{x⁺∩w⁻}}(x, w),  if c^T x ≥ c^T w,
  ((c^T x − L)/d) max{∑_{k=0}^d I_{π_k^{x⁺}}(x), ∑_{k=0}^d I_{π_k^{w⁻}}(w)} + ((c^T w − c^T x)/d) ∑_{k=0}^d I_{π_k^{w⁻}}(w) + ((c^T x − L)/d) ∑_{k=0}^d I_{π_k^{x⁺∩w⁻}}(x, w),  if c^T x ≤ c^T w.  (35)

† The approximation error can be determined depending on the selected approximation method for the integrals and on the number of partitions, d.

3.2.2. An efficient exact method—possibility distribution maximal solution

In this section, an efficient method is given to calculate the upper prevision in (32). First, for simplicity, we define the notation:

π_t^{x⁺} := {x : yx ≤ z, ∃(y, z) ∈ π_t},   T^{x⁺}(x) := sup{t : x ∈ π_t^{x⁺}},
π_t^{w⁻} := {w : yw ≰ z, ∃(y, z) ∈ π_t},   T^{w⁻}(w) := sup{t : w ∈ π_t^{w⁻}},
π_t^{x⁺w⁻} := {(x, w) : yx ≤ z ∧ yw ≰ z, ∃(y, z) ∈ π_t},   T^{x⁺w⁻}(x, w) := sup{t : (x, w) ∈ π_t^{x⁺w⁻}}.

Second, a very useful property is given by the following lemma.

Lemma 3. T^{x⁺w⁺}(x, w) = min{T^{x⁺}(x), T^{w⁺}(w)}.

Proof. Because of (23),

min{T^{x⁺}(x), T^{w⁺}(w)} = min{sup{t ∈ [0, 1] : y̲_t x ≤ z̄_t}, sup{t ∈ [0, 1] : y̲_t w ≤ z̄_t}}
  = sup{t ∈ [0, 1] : ∀i = 1, 2, …, m, y̲_{i,t} x ≤ z̄_{i,t} ∧ y̲_{i,t} w ≤ z̄_{i,t}} = T^{x⁺w⁺}(x, w),

and similarly T^{x⁻w⁻}(x, w) = max{T^{x⁻}(x), T^{w⁻}(w)}. □

To be able to calculate the term T^{x⁺}(x) in the upper prevision (32), an exact method is given in the next section.

3.2.3. Calculating T^{x⁺}(x)

We want to calculate the four upper probabilities in expression (32) in the form of T^{x⁺}(x)‡; (31) leaves only three terms to calculate. Clearly, from (23) we have:

‡ The given method also holds for the terms T^{x⁻}(x), T^{w⁻}(w) and T^{w⁺}(w).

∫_0^1 sup_{π(y,z)≥t} I_{x⁺}(y, z) dt =: ∫_0^1 I_{[0,T^{x⁺}(x)]}(t) dt = ∫_0^{T^{x⁺}(x)} dt = T^{x⁺}(x).
(36)

By repeating the same calculation for the other integrals in (32), we obtain

P̄((c^T x − L)I_{Yx≤Z} − (c^T w − L)I_{Yw≤Z}) = L − c^T w +
  (c^T w − L) max{T^{x⁺}(x), T^{w⁻}(w)} + (c^T x − c^T w) T^{x⁺}(x) + (c^T w − L) T^{x⁺w⁻}(x, w),  if c^T x ≥ c^T w,
  (c^T x − L) max{T^{x⁺}(x), T^{w⁻}(w)} + (c^T w − c^T x) T^{w⁻}(w) + (c^T x − L) T^{x⁺w⁻}(x, w),  if c^T x ≤ c^T w.

To calculate the function T^{x⁺}(x), it is enough to solve a classical linear programming problem:

maximise t such that π_t^{x⁺} ≠ ∅, 0 ≤ t ≤ 1, x ≥ 0  ≡  maximise t such that y̲_t x ≤ z̄_t, 0 ≤ t ≤ 1, x ≥ 0.  (37)

The only remaining big problem is the calculation of P̄(Y x ≤ Z ∧ Y w ≰ Z) = T^{x⁺w⁻}(x, w). This problem can be solved as follows. From equation (32) we write

P̄((c^T x − L)I_{Yx≤Z} − (c^T w − L)I_{Yw≤Z}) =: C + D ∫_0^1 sup_{π(y,z)≥t} I_{x⁺∩w⁻}(y, z) dt,  if c^T x ≥ c^T w,
and =: E + F ∫_0^1 sup_{π(y,z)≥t} I_{x⁺∩w⁻}(y, z) dt,  if c^T x ≤ c^T w.

For instance, if c^T x ≥ c^T w (and similarly for c^T x ≤ c^T w), then

P̄((c^T x − L)I_{Yx≤Z} − (c^T w − L)I_{Yw≤Z}) = C + D P̄(Y x ≤ Z ∧ Y w ≰ Z) =: Q.

In Section 3.2.3 we explained how the expressions C and D can easily be calculated. By definition, those x ≥ 0 that make Q non-negative are maximal, and it is easy to determine when Q is non-negative without calculating the hard upper probability P̄(Y x ≤ Z ∧ Y w ≰ Z), because D > 0 and 0 ≤ P̄(Y x ≤ Z ∧ Y w ≰ Z) ≤ 1. Therefore, for instance: (i) if C ≥ 0, then x is maximal; (ii) if −C ≥ D, or (iii) if D = 0, then the sign of Q equals the sign of C; (iv) if −C < D (with C < 0), then define t* := −C/D and calculate the upper and lower bounds of the y_ij and z_i in the level set π_{t*} corresponding to t* (we call them y̲*_ij, ȳ*_ij, z̲*_i, z̄*_i).
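For triangular possibility distributions, the constraint y̲_t x ≤ z̄_t in (37) is affine in t for fixed x, so the LP reduces to a row-wise ratio test. A minimal sketch under that assumption (function name ours; data as in the running example):

```python
import numpy as np

# Triangular bounds of Y and Z (illustrative data from the running example).
Y_lo, Y_md = np.array([[9.0, 7.0]]), np.array([[9.5, 7.5]])
Z_md, Z_hi = np.array([11.5]), np.array([12.0])

def T_x_plus(x):
    """Solve (37) for fixed x: the largest t in [0, 1] such that
    (Y_lo + t (Y_md - Y_lo)) x <= Z_hi - t (Z_hi - Z_md).
    Row-wise this is an affine inequality a_i * t <= b_i in t alone."""
    a = (Y_md - Y_lo) @ x + (Z_hi - Z_md)   # coefficient of t per row (>= 0 here)
    b = Z_hi - Y_lo @ x                     # slack at level t = 0
    if np.any(b < 0):                       # infeasible even at level 0
        return 0.0
    ratios = [bi / ai if ai > 0 else np.inf for ai, bi in zip(a, b)]
    return float(min(1.0, min(ratios)))

print(T_x_plus(np.array([0.0, 1.0])), T_x_plus(np.array([1.3, 0.0])))
```

The analogous ratio tests give T^{w⁻}(w) and, via Lemma 3, T^{x⁺w⁺}(x, w); for a general (non-triangular) distribution one falls back on the LP formulation (37) itself.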
If the following feasibility problem is non-empty, then x is maximal; otherwise Q < 0 and x is not maximal:

H(x, w) := {∑_{j=1}^n y_ij x_j ≤ z_i ∧ ∑_{j=1}^n y_ij w_j > z_i ; y̲*_ij ≤ y_ij ≤ ȳ*_ij, z̲*_i ≤ z_i ≤ z̄*_i}
  = {∑_{j=1}^n y̲*_ij x_j ≤ z̄*_i ∧ ∑_{j=1}^n ȳ*_ij w_j > z̲*_i ; x_j ≥ 0, w_j ≥ 0}.  (38)

Here we must highlight the importance and strength of the presented result. Any feasibility space like the set (38) is convex§, so the solutions x can be found by searching the convex set (38). Furthermore, we have given methods to calculate the maximin solution (the worst-case scenario) and the maximal solutions in an efficient (exact) way. In other words, one can make a better decision and implementation by having the worst-case (maximin) solution as well as the set of (maximal) solutions (38)—which includes the maximin solution—regardless of the difficulty and complexity¶ of the LPUU problem under the possibility distribution model.

4. Comparison with related literature—alternative existing solutions and numerical examples

Let us now contrast the presented methods with two closely related approaches from the literature, for two types of uncertainty models.

4.1. Interval previsions

Soyster (24) investigates so-called inexact linear programming problems:

max c^T x such that Y x ≤ b, x ≥ 0  (39)

where c, x ∈ R^n, b ∈ R^m, and the uncertain matrix Y varies over a convex set K := ∏_{j=1}^n K_j, where each K_j is a convex subset of R^m. That paper uses the maximinity decision criterion, so it is a special case of Section 2.2.1: in Equation (7), the inner space becomes {x ≥ 0 : ȳx ≤ b}, where ȳ is the matrix with components ȳ_ij := max_{y∈K_j} y_ij. In Section 3 we gave the very general case in which the uncertainty can appear in all coefficients. For the many optimisation problems involving interval-based uncertainty—formalised by vacuous previsions in this paper—the work above fits into a large body of literature.
§ Since it is a feasibility space with linear constraints.
¶ The computer implementation is under development and will be published in future work.

Most of this work uses maximinity as the optimality criterion; the results and methods we derive in this paper for the maximality case in Section 2.2.2 are novel and enrich the state of the art. We also present several approaches and methods to calculate and implement the analytical solutions.

4.2. Possibility distributions

Jamison and Lodwick (12) worked on fuzzy linear programming problems. Their approach is based on that of Bellman and Zadeh (2); we translate it here into terms relevant to our context. As presented in Section A.3, their idea is first to turn the original problem into an unconstrained optimisation problem, maximise $c^T x - L_{yx \not\le z} I_{yx \not\le z}$, using however a variable penalty factor $L := L_{yx \not\le z}$, a function of $x$ on $\mathbb{R}^n$ that assigns a larger penalty the more severely the constraint is violated. Then a fuzzy number (a possibility distribution on $\mathbb{R}$) $\tilde{G}_x := c^T x - L_{\tilde{y}x \not\le \tilde{z}} I_{\tilde{y}x \not\le \tilde{z}}(x)$ is associated with every $x$ in $\mathbb{R}^n$ using the extension principle; these can be seen as fuzzy gains. They define the solutions to the problem to be those $x$ with a maximal mid-point average $\frac{1}{2}\int_0^1 \big(\max G_x(t) + \min G_x(t)\big)\,dt$, where $G_x(t) := \{\alpha \in \mathbb{R} : \tilde{G}_x(\alpha) \ge t\}$ is the level set at $t$. This optimality criterion lies, in spirit, between maximinity and maximaxity, but not in execution, as the way in which we use possibility distributions to express uncertainty differs markedly.

Jiu-ying and Shu-Ping (13) worked on a method for fuzzy linear programs in which all coefficients are trapezoidal fuzzy numbers (TrFNs). In our paper, by contrast, the uncertainty model is not limited and can be of any type (of possibility/necessity measure).
Furthermore, in their work, following the order relationship of TrFNs, the trapezoidal fuzzy linear program is first transformed into an interval objective program; second, this is combined with the ranking order relation between intervals and the acceptance degree of the violated fuzzy constraints. Finally, the interval objective program is further transformed into a bi-objective linear program, which is approximated by their proposed goal programming approach. Their method thus considers the acceptance degree of the violated fuzzy constraints and proposes an approximation, whereas we present an efficient exact method.

4.3. Numerical example

In this section we consider a numerical example to illustrate the results:

$$\max x_1 + x_2 \quad\text{s.t.}\quad Y_1 x_1 + Y_2 x_2 \le Z,\quad x_1, x_2 \ge 0. \tag{40}$$

In future work, an efficient computer code will be developed; here we use existing solvers for the numerical results.

4.3.1. Maximin numerical example

Interval case: Consider the example (40) where $Y_1 \in [9, 10]$, $Y_2 \in [7, 8]$ and $Z \in [11, 12]$. From (9) the maximin solution is calculated and shown in the figure below.

[Figure: The maximin solution is the star; the triangle is the inner feasibility space.]

Possibility measure case: Recall that the maximin solution is the $x$ obtained by solving

$$\max_{t\in[0,1]} \max_{x\in\pi^*_t} (c^T x - L)(1 - t).$$

Consider the example (40) where $Y_1 = (9, 9.5, 10)$, $Y_2 = (7, 7.5, 8)$ and $Z = (11, 11.5, 12)$.

[Figure: Three feasibility spaces (triangles) given by the three $\alpha$-cuts at levels 0, 0.5 and 1.]

For a particular $t$ we can find $f(t) = \max_{x\in\pi^*_t} (c^T x - L)(1 - t)$. Using the same example as in the vacuous case, but with symmetric triangular possibility distributions for all random variables, and using $L = -1.35$, we find the maximin solution $(0, 1.373)$.

4.3.2.
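The maximin search above can be sketched as a grid search over $t$. We assume here that $\pi^*_t$ is the robust feasibility space $\{x \ge 0 : \overline{y}(t)x \le \underline{z}(t)\}$ built from the $\alpha$-cut endpoints, and we exploit the single-constraint structure of example (40) to solve each inner LP in closed form; both are simplifications for illustration:

```python
# Sketch of the maximin search max_t max_{x in pi*_t} (c^T x - L)(1 - t) for
# the example Y1 = (9, 9.5, 10), Y2 = (7, 7.5, 8), Z = (11, 11.5, 12), L = -1.35.
# Assumption for illustration: pi*_t is the robust alpha-cut feasibility space
# {x >= 0 : y_hi(t) x <= z_lo(t)}; with goal x1 + x2 and a single constraint,
# the inner LP optimum lies on the axis of the smallest coefficient.

L = -1.35

def bounds(t):
    """Worst-case constraint data at level t for the triangular example."""
    y1_hi = 10.0 - 0.5 * t          # upper endpoint of Y1's alpha-cut
    y2_hi = 8.0 - 0.5 * t           # upper endpoint of Y2's alpha-cut
    z_lo = 11.0 + 0.5 * t           # lower endpoint of Z's alpha-cut
    return y1_hi, y2_hi, z_lo

def inner_lp(t):
    """max x1 + x2 s.t. y1_hi*x1 + y2_hi*x2 <= z_lo, x >= 0 (one constraint)."""
    y1, y2, z = bounds(t)
    return (0.0, z / y2) if y2 <= y1 else (z / y1, 0.0)

def maximin(steps=1000):
    """Grid search over t in [0, 1]; returns (value, t, x)."""
    best = (float("-inf"), None, None)
    for k in range(steps + 1):
        t = k / steps
        x = inner_lp(t)
        val = (sum(x) - L) * (1.0 - t)
        if val > best[0]:
            best = (val, t, x)
    return best
```

The grid search settles at $t = 0$ with $x = (0, 11/8) = (0, 1.375)$, agreeing with the solution $(0, 1.373)$ reported above up to solver tolerance.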
Maximal numerical example

Interval case: Consider the example (40) where $Y_1 \in [9, 10]$, $Y_2 \in [7, 8]$ and $Z \in [11, 12]$. From (13), the maximal solutions are

$$\Big\{x \in \mathbb{R}^2 : x \in O \ \wedge\ x_1 + x_2 \ge \tfrac{11}{8}\Big\},$$

where

$$O = \big\{x \in \mathbb{R}^2 : 9x_1 + 7x_2 \le 12,\ x_1, x_2 \ge 0\big\},$$

and so the maximal solutions are

$$\big\{x \in \mathbb{R}^2 : 9x_1 + 7x_2 \le 12,\ x_1 + x_2 \ge \tfrac{11}{8},\ x_1, x_2 \ge 0\big\}.$$

[Figure: The maximal solutions are highlighted; the star is the maximin solution.]

Possibility measure case: Consider the example (40) where $Y_1 = (9, 9.5, 10)$, $Y_2 = (7, 7.5, 8)$, $Z = (11, 11.5, 12)$, and $L = -1.35$. For two particular points $x$ and $w$, we can check whether the feasibility problem $H(x, w)$ in (38) is non-empty, that is, whether or not $x$ is dominated by $w$. Our method for this check requires solving at most one linear program per row of the constraint matrix $Y$, although many pairs of points require no linear programming at all. We create a grid of points and determine which of them are maximal; this is impractical for large problems.

[Figure: Maximal solutions under possibility distributions; the star is the maximin solution.]

5. Conclusion

In this paper, we considered a general linear programming problem where at least one element of the coefficient matrices in the constraints and/or of the coefficients of the goal function is uncertain. We proposed theoretical solutions for the generic LPUU problem with interval and fuzzy-set uncertainty models, as well as two approximate and efficient numerical methods to calculate the theoretical solutions under both criteria, maximinity and maximality. We have discussed how to reformulate the generic LPUU problem into a well-posed decision problem according to the interval and possibility distribution uncertainty models.
We have then modelled the uncertainty, represented it using coherent lower and upper previsions, and given the general form of the resulting decision problems for two typical optimality criteria, maximinity and maximality. By solving these two decision problems we have obtained (23): (i) for maximinity, classical linear programming problems (without uncertainty) in the linear and vacuous previsions cases, and a classical optimisation problem in the possibility distribution case; and (ii) for maximality, classical feasibility problems in all three uncertainty models. Maximinity leads to a less complicated mathematical problem than the maximality criterion:

    Model                  Maximin theoretical solutions     Maximal theoretical solutions
    Interval/Probability   Classical LP problem              Classical feasibility problem
    Fuzzy set              Classical optimisation problem    Classical feasibility problem

However, the maximality criterion can also lead to quite complicated expressions, as we encountered when dealing with a possibility distribution (27) model. By calculating those solutions we obtain classical LP problems, classical optimisation problems, or classical convex feasibility problems, so any classical LP solver, optimisation solver or feasibility solver can be used to calculate or implement the theoretical solutions:

    Model                  Maximin calculation       Maximal calculation
    Interval/Probability   Any LP solver             Any convex feasibility problem solver
    Fuzzy set              Any optimisation solver   Any convex feasibility problem solver

Some numerical solutions and (efficient) methods to implement the theoretical solutions have been presented. The paper proposes an efficient way to calculate the solutions for the two uncertainty models under the maximality criterion. Our next aim is to develop efficient algorithms (computer codes) to implement the numerical results and to compare their complexity with existing algorithms (interval and fuzzy arithmetic).
Acknowledgement

The authors would like to thank the reviewers for their constructive criticism and remarks, as well as Frederik Debrouwere and Kathleen Egan for their support.

Appendix A.

A.1. Coherent lower and upper previsions

Suppose an unknown variable $(Y, Z) =: V$, which can take values $(y, z) =: v$ in a set $\mathbb{R}^{m\times n} \times \mathbb{R}^m =: \mathcal{Y}$, and a decision maker who wants to make decisions about a problem that is a function of $V$ (and some other variables). The decision maker's uncertainty about $V$ is described by an uncertainty model, which allows them to derive logical conclusions about $V$, or about a function of $V$, and to make decisions in the problem involving $V$. The classical uncertainty models are probability distributions; in that case we call the unknown variable $V$ a random variable. It has been shown (30) that working with expectation operators is equivalent to working with probability measures or probability distributions. In this paper we follow the terminology of Walley (27), where the expectation operator is called a linear prevision. A linear prevision is a functional $P : \mathcal{G}(\mathcal{Y}) \longrightarrow \mathbb{R}$, where $\mathcal{G}(\mathcal{Y})$ is a linear space of bounded real-valued functions on $\mathcal{Y}$. The linear prevision $P$ satisfies the following three coherence conditions: (Positivity) $P(g) \ge \inf g$; (Homogeneity) $P(\lambda g) = \lambda P(g)$; (Additivity) $P(g + h) = P(g) + P(h)$, for all bounded real-valued functions $g, h$ in $\mathcal{G}(\mathcal{Y})$ and all $\lambda \in \mathbb{R}$. The functional $g = g(V)$ is interpreted (see, e.g., (9)) as a gamble about $V$, and its linear prevision $P_V(g)$ as a fair price for exchanging this gamble: a decision maker is willing to sell the gamble $g(V)$ for any price higher than $P_V(g)$ and to buy it for any lower price. When a linear prevision is defined for an event $A \subseteq \mathcal{Y}$, it is also denoted by $P$; in this case $P$ is called a probability measure on the set of all events $2^{\mathcal{Y}}$, i.e., $P : 2^{\mathcal{Y}} \longrightarrow [0, 1]$. The relationship between them is given by $P(A) := P(I_A)$, where $I_A$ is the indicator of $A$: it takes the value 1 on $A$ and is 0 elsewhere.
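The relation $P(A) = P(I_A)$ and the coherence conditions can be illustrated numerically on a finite space; the three-point space and the mass function below are assumptions made purely for illustration:

```python
# Illustration of a linear prevision as an expectation operator on a finite
# space, and of the relation P(A) = P(I_A) between a prevision and the
# probability measure it induces. The finite space and mass function are
# illustrative assumptions, not from the paper.

space = ["v1", "v2", "v3"]
pmf = {"v1": 0.5, "v2": 0.3, "v3": 0.2}   # a single probability mass function

def prevision(g):
    """Linear prevision P(g): the expectation of the gamble g."""
    return sum(pmf[v] * g(v) for v in space)

def indicator(A):
    """I_A: 1 on the event A, 0 elsewhere."""
    return lambda v: 1.0 if v in A else 0.0

# Two gambles, used to check the Additivity coherence condition:
g = lambda v: {"v1": 2.0, "v2": -1.0, "v3": 0.5}[v]
h = lambda v: 1.0                         # a constant gamble

# P(A) = P(I_A): the probability of an event is the prevision of its indicator;
# here prevision(indicator({"v1", "v3"})) equals pmf["v1"] + pmf["v3"].
```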
The three coherence conditions for a probability measure $P$ are: (Positivity) $P(A) \ge 0$; (Unit Norm) $P(\mathcal{Y}) = 1$; (Additivity) $P(A \cup B) = P(A) + P(B)$, where $A, B \subseteq \mathcal{Y}$ and $A \cap B = \emptyset$.

Another important uncertainty model is the normalised possibility distribution. Assume a possibility distribution $\pi : \mathcal{Y} \longrightarrow [0, 1]$ with normalisation condition $\sup \pi = 1$. A possibility measure $\Pi$ is defined for an event $A \subseteq \mathcal{Y}$ by $\Pi(A) = \sup_{y\in A} \pi(y)$, and can be characterised by the following axioms: (Positivity) $\Pi(A) \ge 0$; (Normality) $\Pi(\mathcal{Y}) = 1$; (Maxitivity) $\Pi(\bigcup \mathcal{A}) = \sup_{A\in\mathcal{A}} \Pi(A)$, where $\mathcal{A} \subseteq 2^{\mathcal{Y}}$. If $\mathcal{A} = \emptyset$ then, by definition, the infimum is 1 and the supremum is 0. The necessity measure $N$ conjugate to $\Pi$ is defined by $N(A) = 1 - \Pi(A^c)$ for events $A$, and more generally by $N(g) = -\Pi(-g)$ for gambles $g$. Necessity and possibility measures $N$ and $\Pi$ (and therefore also the operators derived from them) can be characterised by the following conditions: (Positivity) $N(A) \ge 0$, $\Pi(A) \ge 0$; (Normality) $N(\emptyset) = 0$, $\Pi(\mathcal{Y}) = 1$; (Maxitivity) $N(\bigcap \mathcal{A}) = \inf_{A\in\mathcal{A}} N(A)$, $\Pi(\bigcup \mathcal{A}) = \sup_{A\in\mathcal{A}} \Pi(A)$, where $\mathcal{A} \subseteq 2^{\mathcal{Y}}$, and if $\mathcal{A} = \emptyset$ the infimum is one and the supremum is zero by definition. Furthermore, using the Choquet integral, the possibility measure can be defined for any function $h$ on $\mathcal{Y}$ (see, e.g., (7)) by

$$\Pi(h) =: \overline{P}_\pi(h) = \int_0^1 \sup\{h(y) : \pi(y) \ge \alpha\}\,d\alpha, \tag{A.1}$$

and similarly the necessity measure by

$$N(h) =: \underline{P}_\pi(h) = \int_0^1 \inf\{h(y) : \pi(y) \ge \alpha\}\,d\alpha. \tag{A.2}$$

Both types of uncertainty model, in the incarnations described above, are special cases of the much more general coherent upper and lower previsions (see, e.g., (27, 14) for details and terminology).
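The Choquet integrals (A.1) and (A.2) can be approximated by discretising the $\alpha$-integral. The sketch below samples $h$ only at the endpoints of each $\alpha$-cut, which is valid for monotone $h$ on a triangular distribution; both the distribution and the gamble are illustrative choices:

```python
# Numerical sketch of the Choquet integrals (A.1)-(A.2): the possibility and
# necessity measures of a function h under a triangular possibility
# distribution, approximated by discretising the alpha-integral with the
# midpoint rule. The triangular shape and the gamble h are illustrative.

def tri_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of the triangular distribution (a, m, b)."""
    return a + (m - a) * alpha, b - (b - m) * alpha

def choquet_bounds(h, a, m, b, steps=10000):
    """Approximate (N(h), Pi(h)) = (integral of inf, integral of sup) over
    the alpha-cuts, sampling h at the cut endpoints (valid for monotone h)."""
    lo_int = hi_int = 0.0
    d = 1.0 / steps
    for k in range(steps):
        alpha = (k + 0.5) * d               # midpoint rule
        lo, hi = tri_cut(a, m, b, alpha)
        vals = (h(lo), h(hi))
        lo_int += min(vals) * d             # necessity: inf over the cut
        hi_int += max(vals) * d             # possibility: sup over the cut
    return lo_int, hi_int
```

For $h(y) = y$ and the triangular distribution $(0, 1, 2)$, the $\alpha$-cut is $[\alpha, 2 - \alpha]$, so $N(h) = \int_0^1 \alpha\,d\alpha = 0.5$ and $\Pi(h) = \int_0^1 (2 - \alpha)\,d\alpha = 1.5$, which the approximation reproduces.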
These conjugate operators $\underline{P}$ and $\overline{P}$, defined for all gambles $g$ on $\mathcal{Y}$ and related by $\overline{P}(g) = -\underline{P}(-g)$, are characterised by the following three conditions:† (Positivity) $\underline{P}(h) \ge \inf h$ or $\overline{P}(h) \le \sup h$; (Positive Homogeneity) $\underline{P}(\lambda g) = \lambda \underline{P}(g)$ or $\overline{P}(\lambda g) = \lambda \overline{P}(g)$; (Super/Sub-additivity) $\underline{P}(g + h) \ge \underline{P}(g) + \underline{P}(h)$ or $\overline{P}(g + h) \le \overline{P}(g) + \overline{P}(h)$, where $g, h \in \mathcal{G}(\mathcal{Y})$ and $\lambda > 0$. Their restrictions to (indicators of) events are called coherent lower and upper probabilities. In the behavioural interpretation of (27), which follows the lead of (9) in this regard, lower and upper previsions for gambles are again seen as prices: respectively, the agent's supremum acceptable buying price and infimum acceptable selling price. When the lower prevision coincides with the upper prevision, they are linear previsions (see, e.g., (27)). Possibility operators are a special type of coherent upper prevision, and necessity operators are similarly special coherent lower previsions; for more details about their relation, see (28, 7, 6, 29, 15). In this paper, we use the theory of coherent lower (and upper) previsions as a unifying framework for linear previsions and possibility/necessity operators. All the uncertainty models we deal with fall into one of these categories, but thanks to this framework, we can treat constrained (linear) optimisation problems with uncertain variables described by

† The assumption that $\underline{P}$ and $\overline{P}$ are defined on the whole of $\mathcal{G}(\mathcal{Y})$ is not a trivial one in general: so-called natural extension from a partial specification requires solving a linear programming problem (27, Chapter 3). However, for the cases considered in this paper, natural extension just requires calculating a (Choquet) integral in the most complex case, which is far less computationally demanding.
It also leaves the door open to similarly deal with problems involving other types of uncertainty models under the coherent lower prevision umbrella. The lower (and upper) previsions can also be defined for indicators of events which are exactly equivalent to the lower (and upper) probability of the same events, i.e., P(IA ) = P(A). In one special case, when the lower prevision coincides with the upper prevision, then the third condition (Super/Sub-additivity) becomes (Additivity) condition and we have linear previsions see, e.g. (27). In addition, it has shown (28) that working with a coherent lower prevision P is equivalent to working with a convex closed set of linear previsions (or probabilities) M (P), which is set of dominating linear previsions by P: M (P) := {P : ∀h ∈ G (Y ), P(h) ≥ P(h)} . (A.3) And vice versa, P is the lower envelope of the set M (P): P(h) := min {P(h) : P ∈ M (P)} , ∀h ∈ G (Y ). (A.4) There are some other properties for coherent lower (and upper) previsions which are used further on in this paper: (Constant additivity) P(h + µ) = P(h) + µ; (Mixed super-additivity) P(g + h) ≥ P(g) + P(h); (Factorisation) P( f1 f2 ) = P f1 P( f2 ) where g, h ∈ G (Y ), f1 > 0, f1 and f2 are independent see, e.g. (8) and µ ∈ R. A.2. Imprecise decision making Consider a case that one may choose a decision x between a number of choices, acts, or decisions in a set X (where in this paper X := Rn ), the outcome of each decision is unknown and is a function of the random variable V taking values v. For each possible decision x there is a gain function Gx on Y , i.e., if one makes a decision x, then the outcome of this decision has utility Gx (v) where v is the outcome of the random variable V. In this paper, we assume that for each decision x there is a corresponding bounded function (gamble) Gx where Gx : Y −→ R. 
If the uncertainty about $V$ is described by a coherent lower prevision $\underline{P}$, then a binary (irreflexive and transitive) relation on the set $\mathcal{X}$ of all decisions can be defined as follows: one says that decision $x_1$ is better than decision $x_2$, written $x_1 \succ x_2$, if and only if the decision maker is willing to pay some strictly positive price to exchange $G_{x_2}$ for $G_{x_1}$, i.e.,

$$x_1 \succ x_2 \Leftrightarrow \underline{P}(G_{x_1} - G_{x_2}) > 0. \tag{A.5}$$

By this definition, the relation $\succ$ is a strict partial order on $\mathcal{X}$, which gives it a very useful interpretation. For instance, when the uncertainty about $V$ is represented by a lower prevision $\underline{P}$, then from the properties in Section A.1 and equation (A.4) we have

$$x_1 \succ x_2 \Leftrightarrow \forall P \in \mathcal{M}(\underline{P}),\ P(G_{x_1}) > P(G_{x_2}), \tag{A.6}$$

which means that the action $x_1$ is better than $x_2$ if and only if $x_1$ has a strictly higher expected utility than $x_2$ for every $P$ that dominates the lower prevision $\underline{P}$ (a robustness property). In the next subsections, two decision criteria are described: maximinity and maximality.

A.2.1. Maximality

One seeks decisions $x$ (so-called maximal decisions/solutions) that are undominated in pairwise comparison with all other decisions, i.e., no decision $z$ is considered better than $x$:

$$x \text{ is maximal} \Leftrightarrow \nexists z \in \mathcal{X} : z \succ x \equiv \forall z \in \mathcal{X} : z \not\succ x \Leftrightarrow \forall z \in \mathcal{X} : \underline{P}(G_z - G_x) \le 0 \Leftrightarrow \forall z \in \mathcal{X} : \overline{P}(G_x - G_z) \ge 0 \equiv \inf_{z\in\mathcal{X}} \overline{P}(G_x - G_z) \ge 0. \tag{A.7}$$

A.2.2. $\Gamma$-maximin (Maximinity)

Maximin solutions derive from worst-case reasoning: they are the decisions with the highest lower expected utility,

$$x \text{ is ($\Gamma$-)maximin} \Leftrightarrow x \in \operatorname*{argmax}_{z\in\mathcal{X}} \underline{P}(G_z). \tag{A.8}$$

Similarly, maximax solutions (best-case reasoning) are found by simply replacing the lower prevision $\underline{P}$ by the upper prevision $\overline{P}$ in (A.8).

Proposition 1. Any maximin solution is also maximal.

Proof.
By the mixed super-additivity property of Section A.1 we have, for every $z \in \mathcal{X}$,

$$\overline{P}(G_x - G_z) \ge \underline{P}(G_x) + \overline{P}(-G_z) = \underline{P}(G_x) - \underline{P}(G_z),$$

which is non-negative whenever $x$ is maximin, so $x$ is maximal by (A.7).

In both the maximinity and maximality criteria, $\underline{P}(G_x)$ and $\inf_{z\in\mathcal{X}} \overline{P}(G_x - G_z)$ are functions of $x$ on $\mathcal{X}$; we therefore need to find, for maximinity, the $x \in \mathcal{X}$ at which $\underline{P}(G_x)$ attains its highest value, and, for maximality, the $x$ at which $\inf_{z\in\mathcal{X}} \overline{P}(G_x - G_z)$ is non-negative. For further information and details on decision making with imprecise probabilities, we refer to (26).

A.3. Reformulation of the problem as an imprecise decision problem

In Section 1 we discussed the linear programming problem (1), which maximises a linear function over a (convex) set. By adding uncertainty to the constraints we obtain the slightly more difficult problem (3), which is not well posed: maximising a linear function over an uncertain set. To obtain a well-posed problem, we reformulate problem (3) as a decision problem under uncertainty. First, we define a utility (gain) function (gamble) $G_x$ for each decision $x \ge 0$:

$$G_x := (c^T x - L)I_{Yx\le Z} + L. \tag{A.9}$$

Here $I_{Yx\le Z}(v)$ is an indicator function, equal to one if $x$ is in the feasibility space and zero otherwise, for any realisation $v = (y, z)$ of the random variable $V = (Y, Z)$. Thus, for each decision $x \ge 0$ and any realisation $(y, z)$: when $x$ is feasible (equivalently, $I_{Yx\le Z}(v) = 1$) we receive a reward $c^T x$; otherwise (equivalently, $I_{Yx\le Z}(v) = 0$) we are penalised with the value $L$. In other words,

$$G_x(y, z) = \begin{cases} c^T x & yx \le z,\\ L & yx \not\le z. \end{cases}$$

$L \in \mathbb{R}$ is small enough and is interpreted as a penalty value for violating the constraints: a real number strictly smaller than $\inf c^T x$ (in maximisation) should be chosen, to make sure that breaking the constraints is penalised. If there are penalty values $L_i \in \mathbb{R}$ for each $i$-th constraint, then we can define $L := \max_{1\le i\le m} |L_i|$.
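The gain function (A.9) can be written down directly; the sketch below evaluates $G_x(y, z)$ for a given realisation $(y, z)$, with illustrative two-variable data:

```python
# Sketch of the gain function (A.9) for the LPUU reformulation:
# G_x(y, z) = c^T x if y x <= z (componentwise), and the penalty L otherwise.
# The two-variable data below are illustrative, not from the paper.

def gain(c, x, y, z, L):
    """G_x(y, z) = (c^T x - L) * I_{yx<=z} + L."""
    feasible = all(
        sum(yij * xj for yij, xj in zip(row, x)) <= zi
        for row, zi in zip(y, z)
    )
    return sum(cj * xj for cj, xj in zip(c, x)) if feasible else L

c = [1.0, 1.0]
L = -1.35                 # penalty: smaller than the attainable rewards of interest
y = [[9.0, 7.0]]          # one realisation of the constraint matrix
z = [11.0]                # one realisation of the right-hand side

# A feasible point earns the reward c^T x; an infeasible one earns the penalty L.
```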
With this reformulation it is clear that the maximisers of the objective function $c^T x$ and of the gain function $G_x$ coincide. Second, in order to account for the uncertainty about $V$, we use an optimality criterion. In this paper we consider two: maximinity (worst-case reasoning) and maximality (partial ordering), both discussed in Section A.2.

A.3.1. Set of maximin and maximal solutions for the LPUU problem

Applying the maximinity criterion (A.8) to the gain function (A.9), together with the Constant additivity and Positive Homogeneity properties of the lower prevision, gives the set of maximin solutions:

$$\operatorname*{argmax}_{x\ge 0} \underline{P}(G_x) = \operatorname*{argmax}_{x\ge 0} \underline{P}\big((c^T x - L)I_{Yx\le Z} + L\big) = \operatorname*{argmax}_{x\ge 0} (c^T x - L)\underline{P}(Yx \le Z). \tag{A.10}$$

Applying the maximality criterion (A.7) to the gain function (A.9), $x \ge 0$ is maximal if and only if

$$\inf_{w\ge 0} \overline{P}\big((c^T x - L)I_{Yx\le Z} - (c^T w - L)I_{Yw\le Z}\big) \ge 0. \tag{A.11}$$

References

1. Bazaraa, M. S., and Jarvis, J. J. Introduction to operations research, 2nd ed. Wiley, 1977.
2. Bellman, R. E., and Zadeh, L. A. Decision-making in a fuzzy environment. Management Science 17, 4 (1970), B-141–B-164.
3. Bettinger, P., Boston, K., Siry, J. P., and Grebner, D. L. Chapter 7 - Linear programming. In Forest Management and Planning, second edition, P. Bettinger, K. Boston, J. P. Siry, and D. L. Grebner, Eds. Academic Press, 2017, pp. 153–176.
4. Boyd, S. P., and Vandenberghe, L. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
5. Dantzig, G. B. Linear programming under uncertainty. Management Science 1, 3/4 (1955), 197–206.
6. De Cooman, G. Integration and conditioning in numerical possibility theory. Annals of Mathematics and Artificial Intelligence 32, 1 (2001), 87–123.
7. De Cooman, G., and Aeyels, D. Supremum preserving upper probabilities. Information Sciences 118 (1999), 173–212.
8. De Cooman, G., Miranda, E., and Zaffalon, M.
Independent natural extension. Artificial Intelligence 175, 12-13 (2011), 1911–1950.
9. De Finetti, B. Theory of Probability. Wiley, 1974–1975. 2 volumes.
10. Gass, S. I. Linear Programming: Methods and Applications, 5th ed. Dover Publications, 2003.
11. Hillier, F. S., and Lieberman, G. J. Introduction to Operations Research, 7th ed. McGraw-Hill, 2001.
12. Jamison, K. D., and Lodwick, W. A. Fuzzy linear programming using a penalty method. Fuzzy Sets and Systems 119, 1 (2001), 97–110.
13. Jiu-ying, D., and Shu-Ping, W. A new trapezoidal fuzzy linear programming method considering the acceptance degree of fuzzy constraints violated. Knowledge-Based Systems 148 (2018), 100–114.
14. Miranda, E. A survey of the theory of coherent lower previsions. International Journal of Approximate Reasoning 48 (2008), 628–658.
15. Miranda, E., and De Cooman, G. Epistemic independence in numerical possibility theory. International Journal of Approximate Reasoning 32 (2003), 23–42.
16. Moret, S., Bierlaire, M., and Maréchal, F. Strategic energy planning under uncertainty: a mixed-integer linear programming modeling framework for large-scale energy systems. In 26th European Symposium on Computer Aided Process Engineering, Z. Kravanja and M. Bogataj, Eds., vol. 38 of Computer Aided Chemical Engineering. Elsevier, 2016, pp. 1899–1904.
17. Nemhauser, G., Rinnooy Kan, A., and Todd, M. Handbooks in Operations Research and Management Science, 1: Optimization, 2nd ed. North-Holland/Elsevier, 1989.
18. Oliveira, C., and Antunes, C. H. Multiple objective linear programming models with interval coefficients: an illustrated overview. European Journal of Operational Research 181, 3 (2007), 1434–1463.
19. Quaeghebeur, E., Huntley, N., Shariatmadar, K., and De Cooman, G. Maximin and maximal solutions for linear programming problems with possibilistic uncertainty. In Communications in Computer and Information Science (2012), vol.
4299, Springer, pp. 430–439.
20. Quaeghebeur, E., Shariatmadar, K., and De Cooman, G. Constrained optimization problems under uncertainty with coherent lower previsions. Fuzzy Sets and Systems 206 (2012), 74–88.
21. Rockafellar, R. T. Optimization under uncertainty. Lecture notes (2001), 118.
22. Sahinidis, N. V. Optimization under uncertainty: state-of-the-art and opportunities. Computers and Chemical Engineering (2004), 13.
23. Shariatmadar, K., Quaeghebeur, E., and De Cooman, G. Linear programming under vacuous and possibilistic uncertainty. In ISIPTA '11: Program & Abstracts (2011), p. 31.
24. Soyster, A. L. Convex programming with set-inclusive constraints and applications to inexact linear programming. Operations Research 21, 5 (1973), 1154–1157.
25. Taha, H. A. Operations Research: An Introduction, 8th ed. Pearson Prentice Hall, 2007.
26. Troffaes, M. C. M. Decision making under uncertainty using imprecise probabilities. International Journal of Approximate Reasoning 45, 1 (2007), 17–29.
27. Walley, P. Statistical Reasoning with Imprecise Probabilities, vol. 42 of Monographs on Statistics and Applied Probability. Chapman and Hall, London, 1991.
28. Walley, P., and De Cooman, G. Coherence of rules for defining conditional possibility. International Journal of Approximate Reasoning 21 (1999), 63–107.
29. Walley, P., and De Cooman, G. A behavioral model for linguistic uncertainty. Information Sciences 134, 1-4 (2001), 1–37.
30. Whittle, P. Probability via Expectation, 3rd ed. Springer, 1992.
31. Winston, W. L. Operations Research: Applications and Algorithms, 4th ed. Duxbury Press, 2003.