
5.5: Combining Linear Optimization with Markov Deterioration Models


    The previous section illustrated the use of expected component conditions in optimization and decision making. It is also possible to include a distribution of possible conditions over a period of time. The most common means of making this synthesis is to combine optimization with Markov deterioration models. A number of bridge and pavement management systems are based on this synthesis; for example, see AASHTO (2016) or Golabi (1997). An individual infrastructure manager is unlikely to formulate such a combined optimization problem from scratch, but managers regularly use software programs that embed the synthesized optimization.

    Table 5.5.1 presents an example of a Markov process transition matrix and possible actions for a concrete component. Five condition states are defined, with 1 representing good condition and 5 representing failure of the component. For components in state 1, the recommended management action is to do nothing. In each year in state 1, there is a 3% chance of deterioration to state 2. For components in states 2 and 3, do-nothing or patch are possible actions, with the probability consequences as shown. There is a substantial chance that patching may not be effective, as the component might remain in its initial state or deteriorate further (although with a low probability of such deterioration). For components in state 4, do-nothing, rehabilitation, or replacement are possible actions. With the do-nothing action and initial state 4, there is a 13% chance of failure over the course of a year.

    Table 5.5.1: Illustrating a Markov Transition Probability Matrix with Different Management Actions.
    Initial State Action \(Pr\text{(state 1)}\) \(Pr\text{(state 2)}\) \(Pr\text{(state 3)}\) \(Pr\text{(state 4)}\) \(Pr\text{(state 5)}\)
    1 Do Nothing 0.97 0.03 0.0 0.0 0.0
    2 Do Nothing 0.0 0.97 0.03 0.0 0.0
    2 Patch 0.62 0.34 0.04 0.0 0.0
    3 Do Nothing 0.0 0.0 0.92 0.08 0.0
    3 Patch 0.52 0.35 0.10 0.03 0.0
    4 Do Nothing 0.0 0.0 0.0 0.87 0.13
    4 Rehabilitate 0.68 0.27 0.05 0.0 0.0
    4 Replace 0.99 0.01 0.0 0.0 0.0
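    As a check on how Table 5.5.1 is read, the expected one-year-ahead condition for any (state, action) pair is simply the probability-weighted sum of the destination states. A minimal sketch in Python (the dictionary is transcribed from the table; the names are illustrative):

    ```python
    # Transition rows from Table 5.5.1:
    # (initial state, action) -> probabilities of ending in states 1..5.
    TRANSITIONS = {
        (1, "do_nothing"):   [0.97, 0.03, 0.0, 0.0, 0.0],
        (2, "do_nothing"):   [0.0, 0.97, 0.03, 0.0, 0.0],
        (2, "patch"):        [0.62, 0.34, 0.04, 0.0, 0.0],
        (3, "do_nothing"):   [0.0, 0.0, 0.92, 0.08, 0.0],
        (3, "patch"):        [0.52, 0.35, 0.10, 0.03, 0.0],
        (4, "do_nothing"):   [0.0, 0.0, 0.0, 0.87, 0.13],
        (4, "rehabilitate"): [0.68, 0.27, 0.05, 0.0, 0.0],
        (4, "replace"):      [0.99, 0.01, 0.0, 0.0, 0.0],
    }

    def expected_condition(state, action):
        """Probability-weighted condition state one year ahead."""
        probs = TRANSITIONS[(state, action)]
        return sum(p * s for s, p in enumerate(probs, start=1))

    # Do-nothing in state 4 risks failure (state 5) with probability 0.13,
    # while rehabilitation moves the expected condition close to state 1.
    print(expected_condition(4, "do_nothing"))    # 4*0.87 + 5*0.13 = 4.13
    print(expected_condition(4, "rehabilitate"))  # 1*0.68 + 2*0.27 + 3*0.05 = 1.37
    ```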

    An initial analysis step with a table of transition probabilities such as this could be to minimize the long-term cost of maintaining the concrete component. Of course, doing nothing at each stage would minimize cost, except that eventually the component would fail. Presumably, a manager would attempt to minimize cost subject to avoiding a transition to state 5. Decision variables would be a particular action given a state. The objective function would be to minimize expected condition (or perhaps the probability of failure). With a planning horizon and an initial state, the changes in probabilities over time can be traced as a linear function of the decision variables. A budget constraint might also be imposed (as in Eq. 5.2.4), summed over all the bridge components being managed.
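    Tracing the probabilities over a planning horizon amounts to repeatedly multiplying the state distribution by the transition matrix for the chosen actions. A minimal sketch under the do-nothing policy, using the rows of Table 5.5.1 (the state-5 row is an assumption, since the table omits it; failure is treated as absorbing):

    ```python
    # Do-nothing transition matrix assembled from Table 5.5.1.
    # The last row is an assumption: state 5 (failure) is treated as absorbing.
    P = [
        [0.97, 0.03, 0.0, 0.0, 0.0],
        [0.0, 0.97, 0.03, 0.0, 0.0],
        [0.0, 0.0, 0.92, 0.08, 0.0],
        [0.0, 0.0, 0.0, 0.87, 0.13],
        [0.0, 0.0, 0.0, 0.0, 1.0],
    ]

    def step(dist, P):
        """One year of deterioration: new_j = sum_i dist_i * P[i][j]."""
        n = len(dist)
        return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

    dist = [1.0, 0.0, 0.0, 0.0, 0.0]  # component starts in state 1
    for year in range(30):
        dist = step(dist, P)

    # Probability of failure within 30 years if nothing is ever done:
    print(dist[4])
    ```

    Because each year's distribution is a linear function of the previous year's, the whole trace stays linear in the decision variables, which is what makes the linear-programming formulation workable.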

    We will illustrate this optimization approach with a small problem. Suppose a set of identical components can have three possible condition states: 1 – good, 2 – average and 3 – poor. One maintenance activity can be undertaken, which will move the component from any state to state 1 at a cost of \(c_i\). Table 5.5.2 shows the transition probabilities for the component with do-nothing and maintenance. Finally, there is a budget available for the year and a known state \(s_i\) for each component.

    Table 5.5.2 - Illustrative Transition Probabilities and Actions for a Small Problem
    Initial State Action \(Pr\text{(state 1)}\) \(Pr\text{(state 2)}\) \(Pr\text{(state 3)}\)
    1 Do Nothing 0.8 0.2 0.0
    1 Maintenance 1.0 0.0 0.0
    2 Do Nothing 0.0 0.8 0.2
    2 Maintenance 1.0 0.0 0.0
    3 Do Nothing 0.0 0.0 1.0
    3 Maintenance 1.0 0.0 0.0

    Following the formulation approach discussed in the previous section:

    • Let us define our decision variable as \(x_i = 0\) if do-nothing and \(x_i = 1\) if maintenance is performed. (If there were more than two possible actions, then we could add a subscript as in Eq. 5.2.2 for each component and each possible action, and the decision variable \(x_{ij}\) would be 0 if action \(j\) was not undertaken on component \(i\) and 1 if action \(j\) was taken on component \(i\)).
    • Let us assume that the objective is to minimize the average component condition.
    • The only constraint is the budget constraint for all the actions. (If more than one action were possible, however, we would have to add a constraint similar to Eq. 5.2.3 for each component).

    The resulting problem is:

    \(\text{Minimize Average Condition State}\)

    \[=\sum_{i: s_{i}=1}(1 \times 0.8+2 \times 0.2)\left(1-x_{i}\right)+\sum_{i: s_{i}=2}(2 \times 0.8+3 \times 0.2)\left(1-x_{i}\right)+\sum_{i: s_{i}=3} 3\left(1-x_{i}\right)+\sum_{i} x_{i}\]

    \(\text {Subject to } \sum_{i} c_{i} * x_{i} \leq B, x_{i} \text { binary }\)

    Where the objective function has four terms: (1) resulting condition of components in state 1 with no action, (2) resulting condition of components starting in state 2 with no action, (3) resulting condition of components in state 3 with no action (they stay in state 3), and (4) components with maintenance moving to state 1. The constraints are the overall budget and the restriction of the \(x_i\) to zero or one. As noted above, variations would add additional potential actions (using notation \(x_{ij}\)) and different resulting conditions.
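    For a problem this small, the formulation can be checked by brute-force enumeration of the binary \(x_i\). A minimal sketch, assuming unit maintenance costs and an illustrative three-component example (in practice a mixed-integer solver would replace the enumeration):

    ```python
    from itertools import product

    # Expected next-year condition from Table 5.5.2 if nothing is done, by
    # current state: state 1 -> 1*0.8 + 2*0.2, state 2 -> 2*0.8 + 3*0.2, state 3 -> 3.
    DO_NOTHING_CONDITION = {1: 1.2, 2: 2.2, 3: 3.0}

    def best_plan(states, costs, budget):
        """Enumerate all 0/1 maintenance choices subject to the budget.

        Maintenance moves a component to state 1 (condition 1), per Table 5.5.2.
        Returns (minimum average condition, tuple of x_i).
        """
        n = len(states)
        best = (float("inf"), None)
        for x in product([0, 1], repeat=n):
            cost = sum(c for c, xi in zip(costs, x) if xi)
            if cost > budget:
                continue  # violates the budget constraint
            total = sum(1.0 if xi else DO_NOTHING_CONDITION[s]
                        for s, xi in zip(states, x))
            best = min(best, (total / n, x))
        return best

    # Three components in states 3, 2, 1, unit maintenance costs, budget of 1:
    avg, x = best_plan(states=[3, 2, 1], costs=[1, 1, 1], budget=1)
    print(x, avg)  # maintaining the state-3 component gives the largest gain
    ```

    With only one affordable action, the enumeration picks the state-3 component, since moving it from condition 3 to 1 improves the average more than maintaining a state-2 or state-1 component would.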


    This page titled 5.5: Combining Linear Optimization with Markov Deterioration Models is shared under a CC BY-SA license and was authored, remixed, and/or curated by Donald Coffelt and Chris Hendrickson.
