Optimal monitoring and attack detection of networks modeled by Bayesian attack graphs
Cybersecurity volume 6, Article number: 22 (2023)
Abstract
Early attack detection is essential to ensure the security of complex networks, especially those in critical infrastructures. This is particularly crucial in networks with multi-stage attacks, where multiple nodes are connected to external sources, through which attacks could enter and quickly spread to other network elements. Bayesian attack graphs (BAGs) are powerful models for security risk assessment and mitigation in complex networks, which provide the probabilistic model of attackers' behavior and attack progression in the network. Most attack detection techniques developed for BAGs rely on the assumption that network compromises will be detected through routine monitoring, which is unrealistic given the ever-growing complexity of threats. This paper derives the optimal minimum mean square error (MMSE) attack detection and monitoring policy for the most general form of BAGs. By exploiting the structure of BAGs and their partial and imperfect monitoring capacity, the proposed detection policy achieves the MMSE optimality otherwise possible only for linear-Gaussian state space models using Kalman filtering. An adaptive resource monitoring policy is also introduced for monitoring nodes whose expected predictive error exceeds a user-defined value. Exact and efficient matrix-form computations of the proposed policies are provided, and their high performance is demonstrated in terms of the accuracy of attack detection and the most efficient use of available resources using synthetic Bayesian attack graphs with different topologies.
Introduction
The increased connectivity of networks and smart devices allows for effective operation of complex networks while significantly weakening network security (Lallie et al. 2020; Ou et al. 2006; Wang et al. 2018; Al Ghazo et al. 2019; Al-Araji et al. 2022; Nguyen et al. 2017). In particular, the operation of critical infrastructures such as manufacturing, energy, communication, water, and transportation networks increasingly relies on networked devices, generating significant vulnerabilities in many areas of society.
Attack graphs are a useful model to characterize the interactions and dependencies between vulnerabilities across the network components (Noel and Jajodia 2014; Singhal and Ou 2017; Stan et al. 2020; Noel and Jajodia 2017; Capobianco et al. 2019; Agmon et al. 2019; Malzahn et al. 2020; Albanese et al. 2012; Homer et al. 2013; Yu et al. 2015; Muñoz-González and Lupu 2016). These graphs model how attackers can exploit combinations of vulnerabilities to penetrate networks. Bayesian attack graphs (BAGs) are extensions of attack graphs, where the Bayesian network probabilistically models attackers' behavior and progression of attacks across the network (Poolsappasit et al. 2011; Muñoz-González et al. 2017; Sembiring et al. 2015; Miehling et al. 2015; Hu et al. 2017; Matthews et al. 2020; Sahu and Davis 2021; Frigault et al. 2017; Chen et al. 2021; Chockalingam et al. 2017; Sun et al. 2018; Liu et al. 2019). BAGs are directed graphs consisting of nodes that represent the status of compromises at various network components and edges that represent exploit probabilities among the components.
Most existing attack detection techniques developed for BAGs rely on the simplified assumption that the network's compromises are certainly detectable through routine monitoring (Li et al. 2020; Chadza et al. 2020; Holgado et al. 2017; Thanthrige et al. 2016; Ramaki et al. 2015). However, given the ever-growing variety of attacks and the intelligence of attackers in hiding their exploits, this assumption is unrealistic and leads to unreliable detection. Meanwhile, the existing detection methods are often built upon heuristics (Poolsappasit et al. 2011; Alhomidi and Reed 2013; Husák et al. 2018) or approximations (Liu and Liu 2016; Wang et al. 2013; Ma et al. 2022). These methods do not yield the optimality expected for these structured graphs, such as minimum mean square error (MMSE) or component-wise optimality. This paper derives the exact optimal MMSE attack detection method for a general form of BAGs with partial and imperfect monitoring and arbitrary network vulnerabilities. The binary structure of the nodes on the graph (denoting the compromised status of network components) is taken into account to achieve the same MMSE optimality as the Kalman filter for the linear-Gaussian state space model (Liang et al. 2019; Bai et al. 2017). We demonstrate that the proposed detection method also achieves component-wise maximum a posteriori optimality, which differs from the commonly used maximum a posteriori solution obtained jointly over all nodes.
The second contribution of this paper is an exact optimal policy that selects a subset of monitoring nodes at any given time to enhance the performance of the detection process. In practice, only a few nodes in the network can be routinely monitored, due to resource limitations and the need to reduce potential disruptions to network operations. Intelligent selection of these nodes plays a crucial role in accurately detecting attacks over the network. For instance, monitoring a fixed set of nodes could significantly degrade detection performance at unobserved components. Therefore, it is critical to sequentially and strategically select nodes for monitoring, making the best use of available resources.
Several monitoring approaches have been developed for Bayesian attack graphs, including Monte Carlo and probabilistic methods. The Monte Carlo or tree-based approaches (Noel and Jajodia 2008; Krisper et al. 2019; Poolsappasit et al. 2011) simulate the most likely attack paths and sequentially select monitoring nodes located on these paths. The probabilistic vulnerability assessment approaches (Dantu et al. 2004; Nipkow et al. 2012; Frigault and Wang 2008) measure the expected increase in the probability of compromise at various nodes and select those with the highest overall vulnerabilities. These methods mostly rely on heuristics or simulated attack paths for their selection, which makes them inefficient in securing complex networks with uncertain monitoring and limited available resources. Moreover, existing techniques select monitoring nodes according to the nodes' vulnerability rather than to achieve accurate detection and identify hidden compromises in the network.
This paper presents an optimal monitoring policy that supports the optimal detection policy and ensures the selection of the monitoring nodes that are most likely to be incorrectly detected. The proposed monitoring method sequentially selects the optimal subset of nodes for monitoring based on the highest expected predictive mean squared error (MSE). Instead of selecting nodes that are already known to be compromised or uncompromised, we develop fixed-resource and adaptive-resource monitoring policies that select a subset of nodes sequentially to ensure the best detectability of attacks across the entire network. Depending on the network's vulnerabilities or the sensitivity of its components, the appropriate monitoring policy can prioritize detectability at specific parts of the network rather than all components. We introduce efficient and exact matrix-form solutions for the attack detection and network monitoring policies and demonstrate their performance using several synthetic Bayesian attack graphs.
The article is organized as follows. First, the Bayesian attack graph model is briefly described. Then, the optimal attack detection and monitoring policies are derived, and their matrix-form implementations are introduced. Finally, the numerical examples and concluding remarks are provided.
Bayesian attack graphs (BAGs)
Bayesian attack graphs are a powerful class of models for the probabilistic representation of attackers' behavior and the progression of attacks on networks. The attackers aim to take over the entire network by exploiting reachable vulnerabilities, while each exploit only succeeds with a certain probability. A BAG is a directed graph where the nodes represent the compromise status of each network component (i.e., 1 for compromised nodes and 0 for non-compromised nodes), and the edges represent the likelihood that a compromised node could successfully exploit a neighboring component.
A BAG is defined as a tuple (Hu et al. 2020)
where \({\mathcal N}=\{1,\cdots ,n\}\) represents n elements (nodes) of the network, \({\mathcal {T}}\) is the set of node types, \({\mathcal {E}}\) is the set of directed edges between the nodes, and \({\mathcal {P}}\) is the set of exploit probabilities. The nodes are random variables taking values in \(\{0,1\}\), where 0 and 1 indicate that a given component is not compromised and compromised, respectively. For simplicity and without loss of generality, each node is assumed to be one of the following two types: \({\mathcal {T}}_i \in \{\text {AND}, \text {OR}\}\), where \({\mathcal {T}}_i\) represents the type of the ith component. The edge \((i,j)\in {\mathcal {E}}\) represents whether node j could be compromised through node i. \({\mathcal {P}}\) consists of the set of exploit probabilities associated with edges, where \(\rho _{ij}\in {\mathcal {P}}\) represents the probability that node j can be compromised through node i, given that node i is already compromised. These exploit probabilities are often computed according to NIST's Common Vulnerability Scoring System (CVSS), which characterizes the severity of vulnerabilities through numerical scores (Radack et al. 2007).
Node i is an in-neighbor of node j if \((i,j)\in {\mathcal {E}}\). The in-neighbor set of node j can be formally defined as \(D_j=\{i\in {\mathcal N}: (i,j)\in {\mathcal {E}}\}\). The nodes connected to outside sources are susceptible to external attacks. The external attack on node j can be expressed in terms of the exploit probability \(\rho _j\). As mentioned before, there are two types of nodes: an AND node (e.g., admin servers) can be compromised only if all of its in-neighbor nodes are compromised, while an OR node (e.g., SQL servers) can be compromised through a single (or more) compromised in-neighbor(s).
An example of a Bayesian attack graph is shown in Fig. 1. The graph consists of 20 nodes; the nodes that are exposed to external attacks are \(\{2, 5, 6, 8\}\). \(\text {AND}\) nodes, illustrated as double-encircled nodes, are \(\{2, 3, 6, 9, 10, 13, 18, 19\}\), and OR nodes are \(\{1, 4, 5, 7, 8, 11, 12, 14, 15, 16, 17, 20\}\). Exploit probabilities are labeled only for node 1 for simplicity.
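To make the notation concrete, a small BAG can be held in plain data structures. The sketch below uses a hypothetical three-node graph; the node types, exploit probabilities, and helper names are our own illustrative choices, not taken from Fig. 1.

```python
# Hypothetical three-node BAG for illustration only.
nodes = [1, 2, 3]
node_type = {1: "OR", 2: "AND", 3: "OR"}   # T_i in {AND, OR}
edges = {(1, 3): 0.7, (2, 3): 0.4}          # (i, j) -> exploit probability rho_ij
external = {1: 0.6, 2: 0.5}                 # rho_j for nodes open to external attack

def in_neighbors(j):
    """In-neighbor set D_j = {i in N : (i, j) in E}."""
    return [i for (i, jj) in edges if jj == j]

print(in_neighbors(3))  # -> [1, 2]
```

Here node 3 can be exploited through either of its two in-neighbors, each with its own exploit probability.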
Optimal attack detection for BAGs
Hidden Markov model (HMM) representation of BAG
The BAG can be seen as a special case of a hidden Markov model with binary state variables. The state vector consists of the status of compromises at all n nodes in the graph. This vector is represented by \({\textbf {x}}_k=[{\textbf {x}}_k(1),...,{\textbf {x}}_k(n)]^T\), where \({\textbf {x}}_k(i)\) takes either 0 or 1; \({\textbf {x}}_k(i)=1\) indicates that the ith component is compromised at time step k, and vice versa for \({\textbf {x}}_k(i)=0\). \({\textbf {x}}_k=[0,0,\cdots ,0]^T\) represents a network without any compromise, whereas \({\textbf {x}}_k=[1,1,\cdots ,1]^T\) represents a network with all nodes compromised. Therefore, the state vector can take \(2^n\) different possible values, denoted by \(\{{\textbf {x}}^1,\cdots ,{\textbf {x}}^{2^n}\}\). The HMM representation of the BAG, consisting of the state and observation processes, is described below.
State process: The state process represents the probabilistic propagation of compromises at all nodes. This process can be expressed through the conditional probability distribution of states. The state process is governed by the probability of external attacks, the exploit probabilities among nodes, and the node types. For instance, AND nodes are more robust against a single in-neighbor threat, since exploits at all in-neighbor nodes are required before an AND node has a chance of being compromised. On the other hand, OR nodes can be compromised if a single in-neighbor node is compromised. Likewise, large exploit and external attack probabilities increase the network's vulnerability.
The conditional probability that the jth node is compromised at time step k, given the nodes' state at time step \(k-1\), i.e., \({\textbf {x}}_{k-1}\), can be expressed for AND and OR nodes as:

AND nodes:
$$\begin{aligned} P(&{\textbf {x}}_k(j)=1\mid {\textbf {x}}_{k-1})= \\&{\left\{ \begin{array}{ll} \rho _{j} + (1-\rho _{j}) \underset{{i\in D_j}}{\prod } 1_{{\textbf {x}}_{k-1}(i)=1} \,\rho _{ij} & \text { if }{\textbf {x}}_{k-1}(j)=0,\\ 1 & \text { if }{\textbf {x}}_{k-1}(j)=1, \end{array}\right. } \end{aligned}$$(1) 
OR Nodes:
$${ \begin{aligned}&P({\textbf {x}}_k(j)=1\mid {\textbf {x}}_{k-1})=\\&{\left\{ \begin{array}{ll} \rho _{j} + (1-\rho _{j}) \!\left[ 1-\underset{{i\in D_j}}{\prod } (1-1_{{\textbf {x}}_{k-1}(i)=1}\,\rho _{ij})\right] & \text { if }{\textbf {x}}_{k-1}(j)=0,\\ 1 & \text { if }{\textbf {x}}_{k-1}(j)=1, \end{array}\right. } \end{aligned} }$$(2)
where \(1_{b=1}\) returns 1 if \(b=1\), and 0 otherwise. Note that the conditional probabilities in (1) and (2) consider both the external (i.e., \(\rho _j\)) and internal (i.e., \(\rho _{ij}\)) attacks. Meanwhile, using the binary nature of each state variable, the probability that the jth state variable is 0 can be computed as \(P({\textbf {x}}_k(j)=0\mid {\textbf {x}}_{k-1})=1-P({\textbf {x}}_k(j)=1\mid {\textbf {x}}_{k-1})\).
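The AND/OR conditionals in (1) and (2) can be sketched directly in code. The function name and argument layout below are our own; the data-structure conventions follow the earlier hypothetical example (`edges` maps \((i,j)\) to \(\rho _{ij}\), `external` maps j to \(\rho _j\)).

```python
def p_compromise(j, x_prev, node_type, edges, external):
    """P(x_k(j) = 1 | x_{k-1}) following Eqs. (1) and (2).

    x_prev maps node -> 0/1 state at step k-1; edges maps (i, j) -> rho_ij;
    a node absent from `external` faces no external attack (rho_j = 0)."""
    if x_prev[j] == 1:
        return 1.0                       # compromised nodes stay compromised
    rho_j = external.get(j, 0.0)
    D_j = [i for (i, jj) in edges if jj == j]
    if node_type[j] == "AND":
        inner = 1.0                      # product of 1_{x(i)=1} * rho_ij
        for i in D_j:
            inner *= edges[(i, j)] if x_prev[i] == 1 else 0.0
    else:                                # OR node
        inner = 1.0
        for i in D_j:
            inner *= 1.0 - (edges[(i, j)] if x_prev[i] == 1 else 0.0)
        inner = 1.0 - inner
    return rho_j + (1.0 - rho_j) * inner
```

For an AND node with any uncompromised in-neighbor, the product collapses to zero, leaving only the external-attack term, as the model prescribes.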
Observation process: This process represents the way network components are monitored for potential threats. In practice, routine network monitoring is key to assuring network security and detecting compromises in the network. The monitoring process is often labor-intensive, time-consuming, and costly, and might also interrupt or delay network operations. Hence, only a small subset of nodes can be monitored at any given time. Most available detection techniques for BAGs assume that a compromise at any given node is certainly identified if the node is selected for routine monitoring. However, given the complexity of attacks and attackers, this assumption is likely to be violated, resulting in significant security risks. For instance, monitoring might flag a node as not compromised while the node is in fact compromised by an advanced, difficult-to-detect attack.
Let \({{\textbf {a}}}_{k-1}=\{i_1,...,i_m\}\) be the indexes of the m nodes to be monitored at time step k, where \(\{i_1,...,i_m\}\subset {\mathcal N}\) and \(m<n\). As indicated in the subscripts, the nodes are selected at time step \(k-1\) for monitoring at time step k. The observation resulting from \({{\textbf {a}}}_{k-1}\) is denoted by \({{\textbf {y}}}_k\), where \({{\textbf {y}}}_k(i)\) is the observation from node \({{\textbf {a}}}_{k-1}(i)\).
We consider the following model for the observation process: (1) if the selected node for monitoring is not compromised, the observation will flag it as not compromised with probability 1; (2) if the selected node is compromised, the compromise will be detected with probability \((1-q)\) and the node will be flagged as not compromised with probability q, where \(0\le q\le 1\). Therefore, if the observation from a node is 1 (i.e., flagged as "compromised"), the node is certainly compromised; however, observing 0 (i.e., flagged as "not compromised") does not provide certain information about the status of the monitored node. This stochastic observation model can significantly enhance the reliability and performance of attack detection. The observation process described above can be expressed at time step k as:
for \(i=1,...,m\). Small values of q model an advanced monitoring system where most threats can be identified, whereas larger values of q correspond to less advanced monitoring systems or domains susceptible to more complex threats. It should be noted that the rest of the paper holds for any arbitrary observation process of the form \({{\textbf {y}}}_k\sim P({{\textbf {y}}}\mid {\textbf {x}}_k, {{\textbf {a}}}_{k-1})\), not only that in (3).
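The miss-probability model described above can be simulated in a few lines; the function name below is our own, and the behavior follows the two cases stated in the text (no false alarms, misses with probability q).

```python
import random

def observe(x, monitored, q, rng=random):
    """Sample the observation y_k for the monitored nodes under model (3):
    an uncompromised node is always flagged 0 (no false alarms); a compromised
    node is flagged 1 with probability 1 - q and missed with probability q."""
    y = []
    for i in monitored:
        if x[i] == 0:
            y.append(0)
        else:
            y.append(0 if rng.random() < q else 1)
    return y
```

With \(q=0\) this reduces to the perfect-monitoring assumption criticized above; with \(q>0\), a 0 reading is no longer conclusive.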
Optimal MMSE attack detection for BAGs
Accurate attack detection is crucial for effectively identifying compromises in network components and taking the necessary steps to secure the network against potential threats. Attack detection is often challenging due to the probabilistic nature of attack progression and the partial and imperfect monitoring of network components. The existing attack detection methods do not fully account for imperfect monitoring and are built upon commonly used criteria for finite-state HMMs, such as maximum a posteriori or maximum likelihood (Liu and Liu 2016; Wang et al. 2013; Ma et al. 2022). Inspired by the Kalman filtering approach (Welch et al. 1995), which provides the exact optimal minimum mean square error (MMSE) state estimation solution for linear and additive-Gaussian state space models, this paper derives the exact optimal MMSE attack detection solution for the general form of BAGs with arbitrary distributions. It should be noted that the proposed detectors, described below, are the only exact MMSE detection techniques available for the entire class of nonlinear and non-Gaussian state space models (Särkkä 2013).
Let \({{\textbf {a}}}_{0:k-1}=({{\textbf {a}}}_0,...,{{\textbf {a}}}_{k-1})\) be the selected monitoring nodes with associated observations \({{\textbf {y}}}_{1:k}=({{\textbf {y}}}_1,...,{{\textbf {y}}}_k)\) between time steps 1 and k. The attack detection problem consists of estimating the state values of all nodes at time step r given \(\{{{\textbf {a}}}_{0:k-1},{{\textbf {y}}}_{1:k}\}\). Note that depending on the objective, the detection time r can be the current (i.e., \(r=k\)), a prior (i.e., \(r<k\)), or a future (i.e., \(r>k\)) time step. A detected attack \({\hat{{\textbf {x}}}}_{r|k}=[{\hat{{\textbf {x}}}}_{r|k}(1),...,{\hat{{\textbf {x}}}}_{r|k}(n)]^T\) represents the estimated value of the true attacks (i.e., compromises) at all nodes \({\textbf {x}}_r=[{\textbf {x}}_r(1),...,{\textbf {x}}_r(n)]^T\) at time step r. The optimal attack detector can be obtained by minimizing the following mean squared error (MSE):
where \(\Vert \cdot \Vert _2\) is the \(L_2\) vector norm and \(\Psi :=\{0,1\}^n\) is the set of all \(2^n\) possible compromise estimators.
Note that, for a Boolean vector \({{\textbf {z}}}\), the \(L_1\) norm and the squared \(L_2\) norm coincide, i.e., \(\Vert {{\textbf {z}}}\Vert _2^2 = \Vert {{\textbf {z}}}\Vert _1 =\sum _{i=1}^{n} {{\textbf {z}}}(i)\). Thus, the minimization in (4) can be written as:
where the last expression is obtained by exchanging the summation and expectation. Each term contains an independent estimator for a given node; thus, the optimal MMSE attack detector needs to minimize \({\mathbb {E}}[\,|{\textbf {x}}_r(i)-{\hat{{\textbf {x}}}}_{r|k}(i)|\mid {{\textbf {a}}}_{0:k-1},{{\textbf {y}}}_{1:k}]\), for all \(i=1,\ldots ,n\). Given the binary nature of each state variable, the minimizer can be computed as:
for \(i=1,\ldots ,n\), where \(\overline{{{\textbf {v}}}}(i) = 1\) if \({{\textbf {v}}}(i)>1/2\) and 0 otherwise, for any vector \({{\textbf {v}}}\in [0,1]^n\) and \(i=1,\ldots ,n\).
Substituting (6) into (5) leads to the following optimal MMSE attack detector at time step r:
The expected error of the attack detector in terms of the MSE can be computed as:
The ith element of the summation in the last line of (8) can be expressed as:
Now, substituting (9) into (8) leads to
where the last expression in (10) is obtained by using \(\min \{a,1-a\} = 1/2-|a-1/2|\), for \(0\le a \le 1\). Note that \(0\le C_{r|k}^{\text {MS}}\le n/2\), where values close to 0 correspond to a small expected error of the optimal attack detector, whereas large values correspond to a less confident detection process (i.e., larger expected error).
The following theorem summarizes the results of the optimal MMSE attack detector for the general form of BAGs.
Theorem 1
Let \({{\textbf {a}}}_{0:k-1}\) be the selected monitoring nodes with associated observations \({{\textbf {y}}}_{1:k}\) between time steps 1 and k from a Bayesian attack graph. The exact optimal MMSE attack detector at time step r can be achieved as:
with the normalized optimal expected MSE
As noted before, the theorem provides the optimal detection for past, current, and future time steps, depending on whether \(r<k\), \(r=k\), or \(r>k\). In the next section, we describe how optimal attack prediction can help monitor vulnerable components of the network.
Exact matrixbased computation of optimal MMSE attack detector
This section introduces an algorithm for the exact computation of the optimal MMSE attack detector for BAGs. We put all possible network compromises in a single \(n\times 2^n\) matrix as:
where \({\textbf {x}}^1\) to \({\textbf {x}}^{2^n}\) are arbitrary enumerations of possible network compromises, e.g., \({\textbf {x}}^1=[0,0,0,...,0]^T, {\textbf {x}}^{2^n}=[1,1,1,...,1]^T\). Consider the following state conditional distribution vectors:
for \(i=1,\ldots ,2^n\) and \(k=1,2,\ldots\). Let \({\varvec{\Pi }}_{0|0}\) be the initial attack distribution. This distribution depends on the last time the nodes in the network were re-imaged or monitored; for instance, \({\varvec{\Pi }}_{0|0}=[1,0,...,0]^T\) can be used for networks with recently re-imaged nodes, and \({\varvec{\Pi }}_{0|0}=[1/2^n,...,1/2^n]^T\) can be used if not enough information about compromises at various nodes exists (i.e., each node has a 0.5 probability of being compromised). Note that more complex initial distributions can be used, such as larger compromise probabilities for nodes exposed to direct external attacks.
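The state matrix and the two example priors can be constructed directly; the helper name is ours, and the column ordering (all-zeros first, all-ones last) is one arbitrary but fixed enumeration, matching the example in the text.

```python
import numpy as np
from itertools import product

def state_matrix(n):
    """The n x 2^n matrix A of Eq. (13): columns enumerate all Boolean states,
    with x^1 = [0,...,0]^T first and x^{2^n} = [1,...,1]^T last."""
    return np.array(list(product([0, 1], repeat=n))).T

A = state_matrix(3)
Pi_clean = np.eye(2 ** 3)[0]                  # Pi_{0|0}: freshly re-imaged network
Pi_uniform = np.full(2 ** 3, 1.0 / 2 ** 3)    # Pi_{0|0}: uninformative prior
```

Note that the uniform prior over all \(2^n\) states gives each node a marginal compromise probability of exactly 0.5, as stated above (each row of A averages to 1/2).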
Let \(M_k\), of size \(2^n \times 2^n\), be the transition matrix of the Markov chain at time step k:
for \(i,j = 1,\ldots ,2^n\); where
Note that \(1_{{\mathcal N}_l=\text {AND}}\) is 1 if node l is an AND node, and the transition probabilities in (15) and (16) are obtained according to the conditional probabilities for AND and OR nodes in (1) and (2), respectively. Meanwhile, the subscript k in \(M_k\) indicates that the transition matrix in (15) can be time-dependent in general, as in domains with changing exploit probabilities or network structure. Additionally, given that \({{\textbf {y}}}_k\) is the observation vector obtained from nodes \({{\textbf {a}}}_{k-1}\) at time k, we define the update vector, \(T_k({{\textbf {y}}}_k,{{\textbf {a}}}_{k-1})\), as:
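A transition matrix of this form can be assembled from the per-node conditionals. The sketch below assumes, as the product form of (15) does, that the nodes' transitions are conditionally independent given \({\textbf {x}}_{k-1}\); the function names and the row/column indexing convention are our own.

```python
import numpy as np
from itertools import product

def transition_matrix(n, p_node):
    """Build the 2^n x 2^n matrix M_k of Eq. (15), assuming the nodes'
    transitions are conditionally independent given x_{k-1}.

    p_node(l, x_prev) must return P(x_k(l) = 1 | x_{k-1} = x_prev),
    e.g. the AND/OR rules of Eqs. (1)-(2)."""
    states = list(product([0, 1], repeat=n))
    M = np.zeros((2 ** n, 2 ** n))
    for i, x_prev in enumerate(states):
        p1 = [p_node(l, x_prev) for l in range(n)]
        for j, x_next in enumerate(states):
            prob = 1.0
            for l in range(n):
                prob *= p1[l] if x_next[l] == 1 else 1.0 - p1[l]
            M[i, j] = prob   # P(x_k = x^j | x_{k-1} = x^i)
    return M
```

Each row is a probability distribution over the next state, so the rows sum to 1; the nested loops make the \(O(2^{2n})\) cost discussed later explicit.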
for \(i = 1,\ldots ,2^n\), where the last expression in (17) is derived according to the observation process in (3).
The predictive posterior probability, \({\varvec{\Pi }}_{k|k-1}\), can be computed from the previous posterior probability \({\varvec{\Pi }}_{k-1|k-1}\) and the transition matrix \(M_k\) through:
The posterior distribution of states, \({\varvec{\Pi }}_{k|k}\), upon observing \({{\textbf {y}}}_k\) at nodes \({{\textbf {a}}}_{k-1}\) can be achieved through the following Bayesian recursion (Kumar and Varaiya 2015; Särkkä 2013):
where \(\circ\) is the Hadamard product, and \(T_k({{\textbf {y}}}_k,{{\textbf {a}}}_{k-1})\) is defined in (17).
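One predict/update sweep of this recursion can be sketched as follows; the function name is ours, and the transpose reflects our row-indexed convention \(M[i,j]=P({\textbf {x}}^j\mid {\textbf {x}}^i)\) from the transition-matrix sketch above.

```python
import numpy as np

def bayes_step(Pi_prev, M, T_y):
    """One sweep of the recursion in (18)-(19).

    Pi_prev : posterior Pi_{k-1|k-1} over the 2^n states,
    M       : transition matrix with M[i, j] = P(x^j | x^i),
    T_y     : update vector T_k(y_k, a_{k-1}) of state likelihoods.
    Returns (Pi_{k|k-1}, Pi_{k|k})."""
    Pi_pred = M.T @ Pi_prev          # prediction step, Eq. (18)
    unnorm = T_y * Pi_pred           # Hadamard product, Eq. (19)
    return Pi_pred, unnorm / unnorm.sum()
```

A likelihood vector of all ones leaves the prediction unchanged, while a vector ruling out states consistent with an impossible observation renormalizes the posterior onto the remaining states.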
Using (13) and (14), one can write:
The optimal MMSE attack detector in (11) for \(r=k\) can be computed as:
with the expected error of the optimal detection according to (12) as:
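Putting the pieces together, the matrix-form detector thresholds the node marginals \(A{\varvec{\Pi }}_{k|k}\) at 1/2; the function name is ours, and the reported error is the normalized expected MSE of (12).

```python
import numpy as np

def mmse_detect(A, Pi):
    """Optimal MMSE detector, Eq. (11) with r = k: threshold the posterior
    node marginals A @ Pi at 1/2, and report the normalized expected MSE
    of Eq. (12) alongside."""
    marg = A @ Pi                                # E[x_k(i) | a_{0:k-1}, y_{1:k}]
    x_hat = (marg > 0.5).astype(int)
    mse = np.mean(0.5 - np.abs(marg - 0.5))      # mean of min{p_i, 1 - p_i}
    return x_hat, mse
```

A marginal near 0 or 1 contributes almost nothing to the expected error, whereas a marginal near 1/2 contributes the maximum per-node error of 1/2.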
Optimal monitoring policy for BAGs
The proposed attack detection policy in the previous section provides the optimal MMSE solution for detecting network compromises. However, detection accuracy is highly dependent on the available information, i.e., the monitored nodes and the observations. Given the complexity and partial observability of network compromises, accurate detection requires the best use of available monitoring resources. In fact, monitoring should provide the most valuable information about network compromises to enhance the accuracy of detection, especially in sensitive domains where inaccurate attack detection could put the network at risk. It is worth mentioning that the selected monitoring nodes not only provide information about the compromised status at those nodes but also valuable information about all neighboring nodes and the nodes with a feasible path to the currently selected nodes. Therefore, a holistic network-based approach for selecting monitoring nodes is essential. Toward this, the proposed monitoring policy, described below, is derived to optimally support the performance of the proposed attack detection policy.
This paper proposes a systematic and optimal solution to enhance the detection accuracy by sequentially monitoring the network components. Let \(\{{{\textbf {a}}}_{0:k-1},{{\textbf {y}}}_{1:k}\}\) be the selected monitoring nodes and observations up to time step k. The goal is to select the best m nodes, i.e., \({{\textbf {a}}}_{k}\subset {\mathcal N}\), that maximize the attack detection accuracy in the next step. This can be expressed using the prediction capability of the optimal MMSE attack detection discussed in Theorem 1. Let \({\hat{{\textbf {x}}}}_{k+1|k}^{\textrm{MS}}\) be the optimal MMSE attack predictor at time step \(k+1\) given the observations up to time step k. Then, the optimal subset of nodes yielding the highest attack prediction error can be formulated through the following optimization problem:
where the expectation is with respect to the unobserved state \({\textbf {x}}_{k+1}\). The solution to the optimization in (21) is guaranteed to achieve the minimum expected MSE (or the highest detection accuracy) in the next time step. Meanwhile, the policy in (21) can also be interpreted as monitoring the subset of nodes most likely to be mis-detected in the next step. This assures optimal use of available resources for monitoring the vulnerable parts of the network given the latest information. The optimal MMSE predictor \({\hat{{\textbf {x}}}}_{k+1|k}^{\textrm{MS}}\) can be obtained according to Theorem 1 as \({\hat{{\textbf {x}}}}_{k+1|k}^{\textrm{MS}}(i)=\overline{{\mathbb {E}}\left[ {\textbf {x}}_{k+1}(i) \mid {{\textbf {a}}}_{0:k-1},{{\textbf {y}}}_{1:k}\right] }\). Using Theorem 1, the expression in (21) can be further simplified as:
The last expression can be interpreted as selecting nodes with expected predictive values closest to 1/2. The minimum value of \(\left| \,{\mathbb {E}}\left[ {\textbf {x}}_{k+1}(i)\mid {{\textbf {a}}}_{0:k-1},{{\textbf {y}}}_{1:k}\right] -\frac{1}{2}\,\right|\) is 0, which represents scenarios where the attack detection error is predicted to be the largest at node i in the next time step.
Using the current posterior distribution \({\varvec{\Pi }}_{k|k}\), the exact vector-form computation of the last expression in (22) can be expressed as:
Regarding the computational complexity of the policy in (23), one should note that the search space in the argument of \(\mathop {argmin}\limits\) does not require searching over all combinations of m out of n nodes. In fact, one can compute the expected predictive error for all nodes as \(s_i=\left| (A{\varvec{\Pi }}_{k+1|k})_i-\frac{1}{2}\right|\), for \(i=1,..,n\); then, the m nodes with the minimum \(s_i\) can be selected for monitoring. Meanwhile, the predictive posterior probability \({\varvec{\Pi }}_{k+1|k}\) can simply be computed from the current posterior probability \({\varvec{\Pi }}_{k|k}\) in real time.
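The per-node shortcut just described can be sketched as follows; the function name is ours, and it takes the predictive marginals \(A{\varvec{\Pi }}_{k+1|k}\) as input rather than recomputing them.

```python
import numpy as np

def select_fixed(marg_pred, m):
    """Fixed-resource policy of Eq. (23): given the predictive node marginals
    marg_pred = A @ Pi_{k+1|k}, pick the m nodes whose marginals are closest
    to 1/2, i.e. with the largest expected predictive error."""
    s = np.abs(np.asarray(marg_pred) - 0.5)     # s_i = |(A Pi_{k+1|k})_i - 1/2|
    return list(np.argsort(s, kind="stable")[:m])
```

Sorting n scalars replaces the combinatorial search over node subsets, which is what keeps the policy tractable per step.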
For domains with flexible available resources, the number of monitoring nodes can be selected adaptively at any given time. In this scenario, the size of \({{\textbf {a}}}_{k-1}\) could be set according to the extent of network vulnerabilities and the targeted detection accuracy. Assume the objective is to keep the mis-detection rate for all nodes below \(100\alpha \%\), where \(0\le \alpha \le 0.5\). This can be achieved by monitoring all nodes whose expected predictive errors exceed \(\alpha\), as:
If the expected predictive errors of all nodes fall below \(\alpha\), monitoring can be skipped in the next step; however, if the expected predictive errors of several nodes are higher than \(\alpha\), up to m of those nodes should be monitored in the next time step. The expected predictive error of each node takes a value between 0 and 1/2; thus, a smaller value of \(\alpha\) leads to more extensive monitoring to assure accurate detectability of the entire network. Furthermore, if accurate detection is necessary at certain parts of the network, a smaller \(\alpha\) can be used for the corresponding nodes.
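The adaptive-resource rule above can be sketched in the same style; again the function name is ours, and the expected predictive error per node is \(1/2-|p_i-1/2|\), consistent with (12).

```python
def select_adaptive(marg_pred, alpha, m):
    """Adaptive-resource policy: monitor up to m nodes whose expected
    predictive error 1/2 - |p_i - 1/2| exceeds the target alpha; an empty
    result means monitoring can be skipped in the next step."""
    err = [0.5 - abs(p - 0.5) for p in marg_pred]
    ranked = sorted(range(len(err)), key=lambda i: -err[i])
    return [i for i in ranked if err[i] > alpha][:m]
```

When every marginal is already near 0 or 1, no node exceeds \(\alpha\) and the policy spends no monitoring resources, which is the source of the resource savings seen in the experiments.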
The detailed steps of the proposed optimal MMSE attack detection and monitoring policy for BAGs are provided in Algorithm 1. The algorithm progresses sequentially; a new monitoring set is selected, and the corresponding observations are used for detection in the next step. The algorithm's computational complexity is of order \(O(2^{2n})\) due to the transition matrix involved in updating the attack posterior distribution. The size of the transition matrix grows exponentially with the number of components in the network. As a result, exact computation of the attack posterior distribution becomes infeasible for large networks, preventing the applicability of the proposed monitoring and detection policies to large BAGs. Therefore, our future work will focus on developing scalable particle filtering approaches capable of approximating these optimal monitoring and detection policies. The binary structure of the state variables in BAGs will be exploited to achieve approximate MMSE optimality while remaining computationally efficient.
Numerical experiments
The numerical experiments in this section evaluate the performance of the proposed attack detection and monitoring policies. The five methods considered for comparison are: (1) all-nodes monitoring, (2) the proposed adaptive-resource monitoring, (3) the proposed fixed-resource monitoring, (4) random monitoring, and (5) fixed-nodes monitoring. The first method represents the baseline results, where all nodes are monitored at all time steps. The results obtained by this method specify the lower bound on error and the upper bound on accuracy achievable by the other methods with limited monitoring resources. For the third, fourth, and fifth methods, the number of monitored nodes is m at any given time, whereas for the second method, the maximum number of monitored nodes is set to m. In the fixed-nodes monitoring policy, a fixed set of random nodes is used for monitoring throughout the process. In the random policy, a random subset of m nodes is selected at each time step for monitoring. All results in the numerical experiments are averaged over 100 independent runs obtained for trajectories of length 10. Three important metrics used for performance assessment are the average accuracy, error, and total error of attack detection, which can be expressed as:
where \({\textbf {x}}_k^t\) and \({\hat{{\textbf {x}}}}_k^t\) are the true and detected compromises at time step k in the tth trajectory, respectively.
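The accuracy and error metrics can be computed in a few lines; the function name and the array layout (trajectories, time steps, nodes) are our own conventions for this sketch.

```python
import numpy as np

def detection_metrics(x_true, x_hat):
    """Average accuracy and average error of attack detection; both inputs
    are binary arrays shaped (T, K, n) for T trajectories of length K over
    n nodes. Accuracy is the fraction of node-level detections that match."""
    acc = float((np.asarray(x_true) == np.asarray(x_hat)).mean())
    return acc, 1.0 - acc
```

Averaging over all three axes at once matches the per-node, per-step, per-run averaging described in the text.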
Experiment 1: 10-node BAG
In this part of the experiments, we consider detecting attacks in the network used in Hu et al. (2020) and shown in Fig. 2. The BAG consists of 10 nodes, resulting in \(2^{10}=1,\!024\) different possible states (i.e., network compromises). A uniform prior is considered for the initial network compromise, i.e., \({\varvec{\Pi }}_{1|0}(i)=1/2^{10}, i=1,...,2^{10}\). The measurement noise is set as \(q=0.2\), and the maximum desired detection error is set as \(\alpha =0.15\). The network vulnerabilities indicated by \(\rho _{ij}\) can be represented by:
Three nodes are susceptible to external attacks, represented through the following parameters:
\(\rho _1\!=\!0.6900, \rho _2\!=\!0.6200, \rho _3\!=\!0.5300\). In the first experiment, the number of monitoring nodes m is set to 2. The average detection accuracy and error are shown in Fig. 3. As expected, the highest accuracy is obtained by the baseline method, where all nodes are monitored at all time steps. The accuracies of the proposed adaptive-resource and fixed-resource monitoring policies are close to the baseline and empirically converge to it as time progresses. The accuracies of the fixed-nodes and random monitoring policies are significantly lower than those of the proposed methods, which demonstrates the importance of intelligent node monitoring for enhancing attack detection accuracy. In particular, after 10 time steps, the average attack detection accuracy of the proposed policies is above 86%, which is much higher than the 72% obtained by the random policy and the 46% obtained by the fixed-nodes monitoring policy. Similar results can be seen in Fig. 3b in terms of the average error of attack detection obtained by the various methods.
The average number of times each node is monitored under the various policies is shown in Fig. 4. Under the random monitoring policy, all nodes are monitored almost equally, whereas under the proposed policies, nodes 8, 10, 9, and 5 are monitored more often. This imbalanced monitoring of nodes comes with the better detection accuracy shown in Fig. 3. Significantly fewer nodes are monitored under the proposed adaptive resource monitoring policy than under the fixed resource monitoring policy; in particular, nodes 1, 2, 3, 4, 6, and 7 are selected far less often under the adaptive resource policy. Despite this much lower monitoring effort, the adaptive resource monitoring policy achieves performance similar to that of the fixed resource monitoring policy (see Fig. 3). These results imply that the proposed monitoring policies select the nodes that most enhance the detection of the entire network's compromises.
In this part of the experiment, we analyze the impact of the number of monitoring nodes on the performance of the proposed policies. Figure 5a shows the average total error obtained by the various methods with respect to the available number of monitoring resources, i.e., m. The minimum average error is obtained by the proposed policies in all conditions. The total error decreases for all methods as more monitoring resources become available; in particular, for \(m=10\), the error of all methods becomes the same, as all nodes can be monitored with the available resources (except for the adaptive resource monitoring policy, which might use fewer monitoring nodes). Figure 5b shows the resources used (i.e., the total number of monitored nodes) by all policies. As more resources become available, the average number of monitored nodes increases for all methods. However, the proposed adaptive resource monitoring policy uses significantly fewer monitoring resources while yielding the same average error as the proposed fixed resource monitoring policy and a much lower average error than the other two policies. This comes from the capability of the adaptive resource monitoring policy to use resources only when the expected detection error exceeds the desired detection error \(\alpha =0.15\). Therefore, considering both average error and used resources, the best results are obtained by the proposed adaptive resource monitoring policy.
To better analyze the proposed method's efficiency in using available resources, we plot the average accuracy and the number of monitored nodes with respect to the time step. Figure 6 contains the results of the proposed adaptive resource and fixed resource monitoring policies for \(m=1, 4\), and 9. As shown in Fig. 6a, the average detection accuracy is similar for both policies for any given m and increases as more information becomes available. When more resources are available (i.e., larger m), the performance of both policies converges to the baseline approach (all nodes monitored). Figure 6b compares the average number of monitored nodes employed by both policies for various m values. The proposed fixed resource policy monitors a fixed number of m nodes at any given time, whereas under the adaptive resource policy the number of monitored nodes decreases significantly as more information becomes available. The larger number of nodes monitored in the first step comes from the uniform prior distribution of compromises; however, as time progresses and more information is acquired, the number of monitored nodes reduces significantly and converges to an average of 1.2 in all conditions. Therefore, comparing the accuracy and the employed resources on the left and right sides of Fig. 6, one can see that the adaptive resource policy reduces resource consumption without significantly impacting detection quality.
The impact of the monitoring or measurement noise on the performance of the proposed policies is analyzed in this section. The measurement noise represents the likelihood of misidentifying compromises in the network; a larger measurement noise models a less advanced monitoring process or the existence of new or difficult-to-detect attacks. Figure 7a shows the average detection error obtained over 100 trajectories of length 10 with respect to the measurement noise. As expected, the average error increases for all methods as the level of noise increases. This is due to the inaccuracy of identifying potential attacks during monitoring, which degrades detection accuracy. As expected, the minimum average error is obtained by the baseline policy. For the specific case of \(q=1\), which represents the extreme case of misdetecting all compromises in the network, the maximum total error is reached by all methods. However, for smaller values of measurement noise, the proposed fixed resource and adaptive resource policies yield significantly smaller average errors than the other policies. This again demonstrates the capability of the proposed policies to effectively monitor nodes under both easy-to-detect and difficult-to-detect attacks. Figure 7b shows the average total number of monitored nodes for the proposed methods. Similar to previous results, the average number of monitored nodes is much smaller under the proposed adaptive resource policy. The reduction becomes less visible for larger measurement noise, since more monitoring is needed to achieve the desired detection accuracy.
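One simple way to model this partial, imperfect monitoring (an assumption consistent with, but not taken from, the paper's exact observation model) is to flip each monitored node's reported state with probability q and return no observation for unmonitored nodes:

```python
import numpy as np

def monitor(x, monitored, q, rng):
    """Hypothetical noisy observation of a subset of nodes.

    Each monitored node's true state x[i] is reported incorrectly
    (flipped) with probability q, modeling imperfect detection;
    unmonitored nodes yield no observation (None).
    """
    y = [None] * len(x)
    for i in monitored:
        flip = rng.random() < q       # misidentification event
        y[i] = int(x[i]) ^ int(flip)  # flip the reported state if so
    return y
```

At \(q=0\) this reduces to perfect monitoring of the selected nodes, and at \(q=1\) every monitored reading is wrong, matching the extreme case discussed above.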
In this part of the experiment, we compare the performance of the proposed monitoring policy with that of the tree-based monitoring approach (Noel and Jajodia 2008) and the probabilistic vulnerability assessment approach (Dantu et al. 2004). The tree-based approach simulates the most probable attack path and selects for monitoring the nodes with the highest vulnerabilities on that path. The probabilistic vulnerability assessment approach selects the nodes with the highest expected increase in compromise probability, which represent the most vulnerable nodes in the network. Figure 8 shows the attack detection performance under the various monitoring policies for \(m=2\) and \(q=0.2\). The proposed monitoring policy outperforms the other methods, achieving the highest detection accuracy and the minimum detection error. Neither the tree-based nor the probabilistic monitoring policy can fully detect the network's compromises even as more data becomes available. This is because these methods aim to select the nodes with the highest vulnerability, whereas the proposed monitoring policy optimally allocates resources by monitoring the nodes that are most likely to be misdetected at the next time step.
Experiment 2: 13-node BAG
For the second part of our experiments, we analyze attack detection and monitoring for the network depicted in Fig. 9. This network consists of 13 nodes, leading to \(2^{13}=8,\!192\) possible compromise states. A uniform prior state distribution is considered for our experiments, with \(q=0.2\), \(\alpha =0.15\), and \(m=1\). The network is subject to five external attacks with the following parameters: \(\rho _1\!=\!0.60, \rho _2\!=\!0.50, \rho _3\!=\!0.40, \rho _4\!=\!0.70, \rho _6\!=\!0.30\). The network's internal vulnerabilities \(\rho _{ij}\) can be represented by:
The average detection accuracy with respect to the time step obtained by the various policies is shown in Fig. 10. The highest accuracy is obtained by the proposed policies, which ultimately converge to the baseline method as more data becomes available. It should be noted that the maximum number of monitoring nodes is set to \(m=1\) in this 13-node network. Therefore, the convergence of the proposed policies' average accuracy to that of the baseline policy (with all nodes monitored) demonstrates the capability of the proposed policies for intelligent node selection. Finally, by comparing the results of the fixed-nodes and random monitoring policies, one can see that non-systematic monitoring does not reveal network vulnerabilities and can lead to large attack detection errors.
The impact of the maximum desired detection error on the adaptive resource monitoring policy is analyzed here. The parameter \(\alpha\) indicates the maximum acceptable detection error for any given node: the proposed policy monitors up to m nodes whose expected predictive error exceeds \(\alpha\). The average results for \(m=1\) and \(\alpha\) ranging between 0 and 0.5 are presented in Fig. 11. As shown in Fig. 11a, the average detection error increases as the value of \(\alpha\) increases, since a larger \(\alpha\) tolerates a larger detection error. The results of all monitoring policies except adaptive resource monitoring are shown as horizontal lines in Fig. 11a, as these policies do not rely on \(\alpha\).
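For a binary node with predicted compromise probability p, the expected error of the optimal (MAP) detection is min(p, 1 - p). The sketch below uses this quantity as a stand-in for the paper's expected predictive error and monitors up to m nodes whose error exceeds \(\alpha\); the exact selection criterion in the paper may differ:

```python
def select_nodes(p_marginal, alpha, m):
    """Pick up to m nodes whose expected predictive error exceeds alpha.

    p_marginal[i] is the predicted compromise probability of node i.
    The per-node error min(p, 1 - p) is an assumed surrogate for the
    expected predictive error; nodes below the alpha threshold are
    never monitored, which is what lets the policy save resources.
    """
    errors = [(min(p, 1.0 - p), i) for i, p in enumerate(p_marginal)]
    candidates = [(e, i) for e, i in errors if e > alpha]
    candidates.sort(reverse=True)  # most uncertain nodes first
    return [i for _, i in candidates[:m]]
```

When every node's predictive error falls below \(\alpha\), the returned set is empty, so fewer than m nodes are monitored, consistent with the adaptive behavior described above.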
Figure 11b compares the average number of monitored nodes under both policies. The proposed adaptive resource monitoring policy uses fewer resources than the fixed resource monitoring policy. For very small values of \(\alpha\), the average numbers of monitored nodes under both policies are similar, but as \(\alpha\) increases, the average number of monitored nodes decreases significantly for the adaptive monitoring policy. As shown in the results obtained over the 10-node BAG, selecting a reasonable value of \(\alpha\) according to the sensitivity to misdetection (e.g., \(\alpha =0.15\)) often leads to a good balance between accuracy and the use of available resources. Finally, one could choose a node-specific \(\alpha\) in domains where detecting attacks at certain nodes has higher priority than at others.
Conclusion
In this paper, we developed optimal monitoring and attack detection methods for the general form of Bayesian attack graphs (BAGs). Unlike most existing attack detection techniques, our approach accounts for sparse and imperfect monitoring. The proposed policies achieve exact minimum mean square error (MMSE) optimality by exploiting the binary structure of the nodes in the graph. Optimal sequential monitoring is achieved by selecting the subset of nodes that leads to the highest detectability of network compromises or, equivalently, the least network vulnerability. Exact matrix-form algorithms for the proposed monitoring and detection policies were introduced, and the performance of the proposed methods was demonstrated through comprehensive numerical experiments. Our future work will focus on scaling the proposed attack detection and network monitoring policies to large networks and on deriving policies for intelligently defending the network against potential attacks.
Availability of data and materials
Not applicable.
Abbreviations
BAG: Bayesian attack graph
MSE: Mean squared error
MMSE: Minimum mean square error
HMM: Hidden Markov model
References
Agmon N, Shabtai A, Puzis R (2019) Deployment optimization of IoT devices through attack graph analysis. In: Proceedings of the 12th conference on security and privacy in wireless and mobile networks, pp 192–202
Al Ghazo AT, Ibrahim M, Ren H, Kumar R (2019) A2G2V: automatic attack graph generation and visualization and its applications to computer and SCADA networks. IEEE Trans Syst Man Cybern Syst 50(10):3488–3498
Al-Araji Z, Syed Ahmad SS, Abdullah RS et al (2022) Attack prediction to enhance attack path discovery using improved attack graph. Karbala Int J Mod Sci 8(3):313–329
Albanese M, Jajodia S, Noel S (2012) Time-efficient and cost-effective network hardening using attack graphs. In: IEEE/IFIP international conference on dependable systems and networks (DSN 2012). IEEE, pp 1–12
Alhomidi M, Reed M (2013) Risk assessment and analysis through population-based attack graph modelling. In: World congress on internet security (WorldCIS-2013). IEEE, pp 19–24
Bai CZ, Gupta V, Pasqualetti F (2017) On Kalman filtering with compromised sensors: attack stealthiness and performance bounds. IEEE Trans Autom Control 62(12):6641–6648
Capobianco F, George R, Huang K, Jaeger T, Krishnamurthy S, Qian Z, Payer M, Yu P (2019) Employing attack graphs for intrusion detection. In: Proceedings of the new security paradigms workshop, pp 16–30
Chadza T, Kyriakopoulos KG, Lambotharan S (2020) Analysis of hidden Markov model learning algorithms for the detection and prediction of multi-stage network attacks. Futur Gener Comput Syst 108:636–649
Chen YY, Xu B, Long J (2021) Information security assessment of wireless sensor networks based on Bayesian attack graphs. J Intell Fuzzy Syst 41(3):4511–4517
Chockalingam S, Pieters W, Teixeira A, Gelder Pv (2017) Bayesian network models in cyber security: a systematic review. In: Nordic conference on secure IT systems. Springer, pp 105–122
Dantu R, Loper K, Kolan P (2004) Risk management using behavior based attack graphs. In: International conference on information technology: coding and computing (ITCC 2004), vol 1. IEEE, pp 445–449
Frigault M, Wang L (2008) Measuring network security using Bayesian network-based attack graphs. In: 2008 32nd annual IEEE international computer software and applications conference. IEEE, pp 698–703
Frigault M, Wang L, Jajodia S, Singhal A (2017) Measuring the overall network security by combining CVSS scores based on attack graphs and Bayesian networks. Network Security Metrics, pp 1–23
Holgado P, Villagrá VA, Vazquez L (2017) Real-time multistep attack prediction based on hidden Markov models. IEEE Trans Dependable Secure Comput 17(1):134–147
Homer J, Zhang S, Ou X, Schmidt D, Du Y, Rajagopalan SR, Singhal A (2013) Aggregating vulnerability metrics in enterprise networks using attack graphs. J Comput Secur 21(4):561–597
Hu Z, Zhu M, Liu P (2020) Adaptive cyber defense against multi-stage attacks using learning-based POMDP. ACM Trans Priv Secur (TOPS) 24(1):1–25
Husák M, Komárková J, Bou-Harb E, Čeleda P (2018) Survey of attack projection, prediction, and forecasting in cyber security. IEEE Commun Surv Tutor 21(1):640–660
Hu Z, Zhu M, Liu P (2017) Online algorithms for adaptive cyber defense on Bayesian attack graphs. In: Proceedings of the 2017 workshop on moving target defense, pp 99–109
Krisper M, Dobaj J, Macher G, Schmittner C (2019) RISKEE: a risk-tree based method for assessing risk in cyber security. In: Systems, software and services process improvement: 26th European conference, EuroSPI 2019, Edinburgh, UK, September 18–20, 2019, proceedings 26. Springer, pp 45–56
Kumar PR, Varaiya P (2015) Stochastic systems: estimation, identification, and adaptive control. SIAM
Lallie HS, Debattista K, Bal J (2020) A review of attack graph and attack tree visual syntax in cyber security. Comput Sci Rev 35:100219
Li T, Liu Y, Liu Y, Xiao Y, Nguyen NA (2020) Attack plan recognition using hidden Markov and probabilistic inference. Comput Secur 97:101974
Liang C, Wen F, Wang Z (2019) Trust-based distributed Kalman filtering for target tracking under malicious cyber attacks. Inf Fusion 46:44–50
Liu Sc, Liu Y (2016) Network security risk assessment method based on HMM and attack graph model. In: 2016 17th IEEE/ACIS international conference on software engineering, artificial intelligence, networking and parallel/distributed computing (SNPD). IEEE, pp 517–522
Liu J, Liu B, Zhang R, Wang C (2019) Multi-step attack scenarios mining based on neural network and Bayesian network attack graph. In: International conference on artificial intelligence and security. Springer, pp 62–74
Ma Y, Wu Y, Yu D, Ding L, Chen Y (2022) Vulnerability association evaluation of internet of thing devices based on attack graph. Int J Distrib Sens Netw 18(5):15501329221097816
Malzahn D, Birnbaum Z, Wright-Hamor C (2020) Automated vulnerability testing via executable attack graphs. In: 2020 international conference on cyber security and protection of digital services (Cyber Security). IEEE, pp 1–10
Matthews I, Mace J, Soudjani S, van Moorsel A (2020) Cyclic Bayesian attack graphs: a systematic computational approach. In: 2020 IEEE 19th international conference on trust, security and privacy in computing and communications (TrustCom). IEEE, pp 129–136
Miehling E, Rasouli M, Teneketzis D (2015) Optimal defense policies for partially observable spreading processes on Bayesian attack graphs. In: Proceedings of the second ACM workshop on moving target defense, pp 67–76
Muñoz-González L, Lupu E (2016) Bayesian attack graphs for security risk assessment. In: IST-153 workshop on cyber resilience
Muñoz-González L, Sgandurra D, Barrère M, Lupu EC (2017) Exact inference techniques for the analysis of Bayesian attack graphs. IEEE Trans Dependable Secure Comput 16(2):231–244
Nguyen HH, Palani K, Nicol DM (2017) An approach to incorporating uncertainty in network security analysis. In: Proceedings of the hot topics in science of security: symposium and bootcamp, pp 74–84
Nipkow T et al (2012) Advances in probabilistic model checking. Software safety and security: tools for analysis and verification 33(126)
Noel S, Jajodia S (2008) Optimal IDS sensor placement and alert prioritization using attack graphs. J Netw Syst Manag 16:259–275
Noel S, Jajodia S (2014) Metrics suite for network attack graph analytics. In: Proceedings of the 9th annual cyber and information security research conference, pp 5–8
Noel S, Jajodia S (2017) A suite of metrics for network attack graph analytics. Network Security Metrics, pp 141–176
Ou X, Boyer WF, McQueen MA (2006) A scalable approach to attack graph generation. In: Proceedings of the 13th ACM conference on computer and communications security, pp 336–345
Poolsappasit N, Dewri R, Ray I (2011) Dynamic security risk management using Bayesian attack graphs. IEEE Trans Dependable Secure Comput 9(1):61–74
Radack SM et al (2007) The common vulnerability scoring system (CVSS)
Ramaki AA, Khosravi-Farmad M, Bafghi AG (2015) Real time alert correlation and prediction using Bayesian networks. In: 2015 12th international Iranian society of cryptology conference on information security and cryptology (ISCISC). IEEE, pp 98–103
Sahu A, Davis K (2021) Structural learning techniques for Bayesian attack graphs in cyber physical power systems. In: 2021 IEEE Texas power and energy conference (TPEC). IEEE, pp 1–6
Särkkä S (2013) Bayesian filtering and smoothing. Cambridge University Press
Sembiring J, Ramadhan M, Gondokaryono YS, Arman AA (2015) Network security risk analysis using improved MulVAL Bayesian attack graphs. Int J Electr Eng Inform 7(4):735
Singhal A, Ou X (2017) Security risk analysis of enterprise networks using probabilistic attack graphs. Network Security Metrics, pp 53–73
Stan O, Bitton R, Ezrets M, Dadon M, Inokuchi M, Yoshinobu O, Tomohiko Y, Elovici Y, Shabtai A (2020) Extending attack graphs to represent cyber-attacks in communication protocols and modern IT networks. IEEE Trans Dependable Secure Comput
Sun X, Dai J, Liu P, Singhal A, Yen J (2018) Using Bayesian networks for probabilistic identification of zero-day attack paths. IEEE Trans Inf Forensics Secur 13(10):2506–2521
Thanthrige USK, Samarabandu J, Wang X (2016) Intrusion alert prediction using a hidden Markov model. arXiv:1610.07276
Wang X, Cheng M, Eaton J, Hsieh CJ, Wu F (2018) Attack graph convolutional networks by adding fake nodes. arXiv:1810.10751
Wang S, Zhang Z, Kadobayashi Y (2013) Exploring attack graph for cost-benefit security hardening: a probabilistic approach. Comput Secur 32:158–169
Welch G, Bishop G et al (1995) An introduction to the Kalman filter
Yu T, Sekar V, Seshan S, Agarwal Y, Xu C (2015) Handling a trillion (unfixable) flaws on a billion devices: rethinking network security for the internet-of-things. In: Proceedings of the 14th ACM workshop on hot topics in networks, pp 1–7
Acknowledgements
Not applicable.
Funding
This work has been supported in part by the National Science Foundation award IIS2202395, ARMY Research Office award W911NF2110299, and Oracle Cloud credits and related resources provided by the Oracle for Research program.
Author information
Authors and Affiliations
Contributions
AK developed the proposed detection and monitoring policies, performed the experiments, and wrote the manuscript. MI proposed the initial idea of the proposed policies and oversaw the research. Both authors have read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kazeminajafabadi, A., Imani, M. Optimal monitoring and attack detection of networks modeled by Bayesian attack graphs. Cybersecurity 6, 22 (2023). https://doi.org/10.1186/s42400-023-00155-y