An empirical study of reflection attacks using NetFlow data
Cybersecurity volume 7, Article number: 13 (2024)
Abstract
Reflection attacks are one of the most intimidating threats organizations face. A reflection attack is a special type of distributed denial-of-service attack that amplifies the amount of malicious traffic by using reflectors and hides the identity of the attacker. Reflection attacks are known to be one of the most common causes of service disruption in large networks. Large networks perform extensive logging of NetFlow data, and parsing this data is an advocated basis for identifying network attacks. We conduct a comprehensive analysis of NetFlow data containing 1.7 billion NetFlow records and identify reflection attacks on the network time protocol (NTP) and NetBIOS servers. We set up three regression models: Ridge, Elastic Net and LASSO. To the best of our knowledge, no prior work has studied different regression models to understand patterns of reflection attacks in a large network. In this paper, we (a) propose an approach for identifying correlations of reflection attacks, and (b) evaluate the three regression models on real NetFlow data. Our results show that (a) reflection attacks on the NTP servers are not correlated, (b) reflection attacks on the NetBIOS servers are not correlated, (c) the traffic generated by those reflection attacks did not overwhelm the NTP and NetBIOS servers, and (d) the dwell times of reflection attacks on the NTP and NetBIOS servers are too small for predicting reflection attacks on these servers. Our work on reflection attack identification highlights recommendations that could facilitate better handling of reflection attacks in large networks.
Introduction
The capacity of large networks has grown significantly in order to sustain the level of performance required by machine learning, scientific and engineering applications. In this context, the security of the network has become critical to meeting the expectations of its users. Analyzing network attacks requires awareness of the sequence of events encountered by the network component. While recent works have focused on analyzing attacks on specific network components (Chordiya et al. 2018; Bian et al. 2019), answering how an attack on a network occurs requires an integrated approach towards correlation-based log mining (Friedberg et al. 2015; Noble and Adams 2016). Correlation analysis has been widely used to detect intrusions in large networks (Shin and Jeong 2006; Haas and Fischer 2019; Cheng et al. 2021; Negi et al. 2021; Zadnik et al. 2022); its strength lies in aggregating several alerts with low false positives and low false negatives, resulting in tremendous improvements in detection accuracy. A recent study empirically evaluated the Pearson and Spearman rank correlation algorithms using NetFlow data obtained from an enterprise network (Chuah et al. 2021). The authors observed that reflection attacks on the Secure Shell (SSH) and Domain Name Service (DNS) servers exist in the NetFlow data and that those attacks are not correlated.
Several recent large-scale Distributed Denial-of-Service (DDoS) attack studies have provided valuable insights into DDoS attacks (Gondim et al. 2020; Sarmento et al. 2021; Kopp et al. 2021; Anagnostopoulos et al. 2022). These studies have shown that DDoS attacks are regularly executed on many network protocols. In a DDoS attack, a large volume of network packets is generated to flood a target host without using an intermediary. In contrast to DDoS attacks, which do not mask the sender's source IP address, a reflection attack is a special type of DDoS attack that uses any TCP- or UDP-based service as a reflector and masks the sender's source IP address (Joshi 2008). A spoofed network packet, in which the source IP address is replaced by the IP address of another device, is typically used so that the response is sent to the victim. Thus, an attacker can magnify the amount of malicious traffic and obscure the sources of the attack traffic to cause significant disruption to the operation of a large network. As such, it is as important to identify correlations of reflection attacks as it is to identify the dwell time between these attacks. We define the dwell time as the time elapsed between the start time of one reflection attack and the start time of the next reflection attack. When network attack prediction schemes are supported by knowledge of the dwell times of an attack, network administrators can use network attack mitigation schemes to respond to an impending attack (Liu et al. 2018). When the dwell times of an attack are small, a network attack mitigation scheme which scatters the attack traffic can be used to absorb the attack.
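To make the dwell-time definition concrete, a minimal sketch (the start times below are illustrative, not taken from the data):

```python
# Dwell time: the time elapsed between the start time of one reflection
# attack and the start time of the next, per the definition above.
def dwell_times(attack_start_times):
    """Return dwell times (in seconds) between consecutive attacks."""
    starts = sorted(attack_start_times)
    return [later - earlier for earlier, later in zip(starts, starts[1:])]

# Example: attacks starting at t = 100 s, 100 s and 298 s
# yield dwell times of 0 s and 198 s.
print(dwell_times([100, 298, 100]))  # [0, 198]
```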
Several recent works have developed Pearson correlation-based methods that identified DDoS attacks (Chawla et al. 2016), identified reflection attacks (Chuah et al. 2021), detected activities of groups of bots (Hostiadi and Ahmad 2022), and detected network intrusions (Heryanto et al. 2022). Chawla et al. (2016) proposed a framework that used Pearson correlation to identify DDoS attacks and flash events. Hostiadi and Ahmad (2022) proposed a new model that detected correlations between the activities of groups of bots. Their model consists of four phases: (a) data preprocessing, (b) data segmentation, (c) feature extraction and (d) bot group detection. They implemented (a) the Mean Absolute Error metric, which measures the similarity of activities between two groups of bots, and (b) the Pearson correlation algorithm, which finds relationships between the activities of two bot groups. Heryanto et al. (2022) implemented a feature selection workflow that uses Pearson correlation to identify important network metrics for detecting intrusions. Chuah et al. (2021) applied Pearson correlation and Spearman rank correlation to identify the dates of reflection attacks. Although these works showed that the Pearson correlation algorithm can identify relationships between malicious activities, it has some limitations that we address in this paper. First, Pearson correlation only identifies relationships between two samples. Second, several correlated samples can be produced, and all the correlated samples must be manually analyzed before an attack can be identified. This is undesirable because it is a time-consuming process that incurs a significant delay in identifying correlations of a network attack. Therefore, we use the power of regression models, which belong to a type of supervised learning that learns a relationship between a dependent variable and multiple independent variables.
We train the Ridge, Elastic Net and LASSO regression models on NetFlow data to obtain the regression coefficients for all independent variables, and determine the applicability of these regression models in identifying correlations of reflection attacks.
In this paper, we conduct an empirical analysis of reflection attacks in a large enterprise network, carefully compare the Ridge, LASSO and Elastic Net regression models and present several new findings. The correlations of reflection attacks on the NTP server and on the NetBIOS server are new findings that have not been reported in an earlier paper (Chuah et al. 2021). We validate our approach on 1.7 billion NetFlow records obtained from a large enterprise network operated by Los Alamos National Laboratories, and apply statistical validation methods to ensure that the results are accurate.
The main contributions of this paper are given as follows:

We identify reflection attacks in a large enterprise network and provide estimates of NetFlow records which are not correlated with the reflection attack.

We analyze the NetFlow records which are associated with a reflection attack to drill down into their specific activity. Based upon the insights gained from our correlation analysis, we discuss how these findings can be used to improve the network’s security against reflection attacks.

We extract the NetFlow records associated with the reflection attack and obtain their dwell times.
Our initial assumption was that reflection attacks are correlated in the NetFlow data. We compared the Ridge, Elastic Net and LASSO regression models and were surprised to learn that the regression coefficients learned by all three regression models are close to 0 or equal to 0. Furthermore, the dwell times of reflection attacks ranged from 0 to 198 s, multiple source and destination devices were associated with reflection attacks on the NTP and NetBIOS servers, and only a small percentage of network traffic was generated by the reflection attacks.
The remainder of this paper is organized as follows: First, we review the related works in section "Related work". Then, we describe the network model and NetFlow data in section "Network model and data". We present the motivation and describe the details of our approach in section "Identifying correlations of reflection attacks". Our evaluation on the NetFlow data obtained from an enterprise network is presented in section "Evaluation on an enterprise network". We discuss the results and limitations of our approach in sections "Discussion" and "Threats to validity" respectively, and we conclude with a summary and future work in section "Conclusion and future work".
Related work
We divide the related works into two categories: (a) machine learning-based intrusion detection systems that detected DDoS attacks, and (b) correlation analysis-based intrusion detection systems that detected DDoS attacks.
Machine learning-based intrusion detection systems
We focused on very recent works that developed Intrusion Detection Systems (IDS) which integrated machine learning techniques to detect DDoS attacks. In Singh Samra and Barcellos (2023), the authors proposed a natural language processing (NLP)-based approach called DDoS2Vec that learns the characteristics of DDoS attacks. They evaluated their approach on one year's worth of flow samples obtained from an Internet Exchange Provider and compared the performance of DDoS2Vec with Word2Vec, Dos2Vec and Latent Semantic Analysis. In Benmohamed et al. (2023), the authors proposed a novel approach that stacks multiple deep neural networks (DNN) to detect DDoS attacks. They evaluated their approach on a benchmark Cybersecurity dataset and compared the performance of their method with existing machine learning models. In Dasari and Devarakonda (2023), the authors implemented an approach that compared multiple Support Vector Machine (SVM) kernels trained with uncorrelated features to detect reflection amplification DDoS attacks on the Simple Network Management Protocol (SNMP) and DNS servers. In Kshirsagar and Kumar (2022), the authors proposed a feature reduction method that integrated Information Gain (IG) and correlation-based feature selection techniques to detect reflection and standard DDoS attacks. They evaluated their method on two public Cybersecurity datasets and compared the performance of their approach with state-of-the-art feature selection methods. In Ahuja et al. (2022), the authors developed a decision tree-based IDS that uses the J48 classifier to detect reflection amplification DDoS attacks, and evaluated their method on the CICDDoS2019 Cybersecurity dataset. In Najafimehr et al. (2022), the authors proposed a novel technique that combined clustering and classification machine learning algorithms. Their technique consists of three phases. In the first phase, the DBSCAN clustering algorithm is used to separate DDoS traffic from normal traffic.
In the second phase, the Euclidean distance metric is used to calculate the features in each cluster. In the third phase, a classification model is built and a label that indicates whether a cluster contains DDoS traffic or normal traffic is assigned to each cluster. They evaluated their method on two public Cybersecurity datasets and compared the performance of their classifier with the Decision Tree, Random Forest, Naive Bayes and Support Vector Machine classifiers. In Singh and Taggu (2021), the authors compared the Random Forest (RF), Naive Bayes (NB), Logistic Regression (LR), K-Nearest Neighbour (KNN) and Multilayer Perceptron (MLP) algorithms to filter normal traffic from DDoS traffic, and evaluated these algorithms on two public DDoS datasets. In Cil et al. (2021), the authors proposed a deep learning model to detect DDoS attacks. They evaluated their model on the CICDDoS2019 dataset, which includes traces of DDoS attacks on several network protocols, and showed that a three-layer deep neural network model achieved the highest detection accuracy. Further, in Hachmi et al. (2019), the authors proposed an approach called MOPIDS. It is based on a Multi-Objective Optimization (MOO) process composed of: (a) clustering alerts generated by multiple IDS to decrease the set of alerts, (b) filtering alerts to create a set of potential false alarms, (c) grouping similar alerts produced by the different IDS and (d) classifying an alert as a false positive or false negative. The performance of MOPIDS was evaluated using accuracy, true positive rate and true negative rate metrics on three public Cybersecurity datasets that contain denial-of-service traces. In Lin et al. (2015), the authors proposed a novel approach called CANN. It is based on K-Means clustering and sums two distances: (a) the distance between a data sample and its cluster center and (b) the distance between a data sample and its nearest neighbour.
A new 1-dimensional distance-based feature is created and used by the K-Nearest Neighbour classifier to classify each data sample as normal or abnormal. The performance of CANN was evaluated using accuracy, detection rate and true positive rate metrics on a public Cybersecurity dataset that contains traces of DDoS attacks. In Kaja et al. (2019), the authors proposed a two-stage machine learning architecture that uses (a) the K-Means clustering algorithm to detect an attack, and (b) the Decision Tree (DT), Random Forest, Adaptive Boosting (AB) and Naive Bayes algorithms to classify several types of attacks. They evaluated their architecture on a public Cybersecurity dataset that includes traces of denial-of-service attacks, and showed that the decision tree and random forest algorithms achieved the highest classification accuracy. In Shahraki et al. (2020), the authors compared the performance of Modest AdaBoost (MA), Real AdaBoost (RA) and Gentle AdaBoost (GA) on five public Cybersecurity and DDoS datasets. They showed that (a) the error rate of Modest AdaBoost is higher than the error rates of Gentle and Real AdaBoost and (b) Gentle and Real AdaBoost have the same error rate performance. In Elsayed et al. (2021), the authors proposed a method that uses L2 regularization and dropout techniques to improve the performance of Convolutional Neural Networks (CNN) for IDS. They evaluated their method on a Cybersecurity dataset that contains traces of DDoS attacks, and showed that it achieved the highest precision, recall and F1-scores compared to several popular machine learning techniques. In Mora-Gimeno et al. (2021), the authors proposed a modified System Call Graph (SCG) approach that uses a Deep Neural Network (DNN) to integrate information from different detection techniques. They evaluated their approach on three Cybersecurity datasets that include traces of DDoS attacks, and showed that their model achieved high detection rates and low false positives.
Correlation analysis-based intrusion detection systems
We focused on very recent works that developed Intrusion Detection Systems which integrated correlation techniques to detect DDoS attacks. In Gottwalt et al. (2019), the authors introduced a new feature selection method called CorrCorr. It uses the Multivariate Correlation (MC) and Addition-Based Correlation (ABC) methods to generate feature correlations and normal network traffic profiles, from which anomalies that deviate from the normal profile are detected. They evaluated their method on two public Cybersecurity datasets that include traces of DDoS attacks. In Ghosh et al. (2022), the authors presented a tool that efficiently correlates cross-host attacks across multiple hosts. Their tool uses tagged provenance graphs that model the techniques and operational procedures used by an attacker. They defined a novel Graph Similarity-based Alert Correlation (GSAC) technique that determines the entities associated with alerts generated on different hosts, and evaluated their tool on two public Cybersecurity datasets that contain attack traces on multiple hosts. In Cheng et al. (2018), the authors proposed a distributed denial-of-service attack detection method that combines the Enhanced Random Forest (ERF) ensemble learning method and an Optimized Genetic Algorithm (OGA). In More and Gosavi (2016), the authors demonstrated an approach that utilizes Multivariate Correlation analysis to identify DDoS attacks in real time. In Haas et al. (2019), the authors presented a correlation-based approach that transformed clusters of alerts into graph structures and computed signatures of repeated network patterns to characterize clusters of alerts. They evaluated their approach on real-world attack scenarios that include DDoS attacks. In Ramaki et al. (2015), the authors proposed an efficient framework for correlating alerts in early warning systems.
Their framework combines statistical and stream mining techniques to extract sequences of alerts that are part of multi-step attack scenarios, and was evaluated on two DDoS attack scenarios. In Lei et al. (2021), the authors proposed a hybrid model that integrated Multi-Feature Correlation (MFC) and a deep neural network. They evaluated their model on the UNSW-NB15, AWID, CICIDS 2017 and CICIDS 2018 Cybersecurity datasets, which include traces of DDoS attacks.
Summary
To the best of our knowledge, no work has compared multiple regression models to identify correlations of reflection attacks on the NTP servers and on the NetBIOS servers. A summary of the main attributes of the reviewed works is given in Table 1. In contrast to the works in Table 1, we (a) develop an approach that evaluates the ability of the LASSO, Ridge and Elastic Net regression models to identify correlations of reflection attacks on the NTP servers and on the NetBIOS servers, (b) identify the devices and network traffic associated with the NTP and NetBIOS server reflection attacks, and (c) identify the dwell times between reflection attacks on the NTP and NetBIOS servers.
Network model and data
In this section, we present the network model to which our approach is applicable in section "Network model". Then, we describe the NetFlow data in section "NetFlow data".
Network model
Our approach is based on a generic client–server network model as depicted in Fig. 1 (Halsall 1996). The network consists of client devices, servers and routers. Client devices are separate computers that access a service made available by a server. The server is another computer whose service the client accesses by way of the network. Traffic between these networks is managed by the router, which forwards packets to their destination Internet Protocol (IP) addresses. The workflow for an Intrusion Detection System (IDS) consists of two phases, as depicted in Fig. 2 (Mancini and Pietro 2008). In phase 1, network packet data between a client and a server, two clients or two servers is collected by the Router, and then the data is sent to the Data store, which aggregates the data. Once the data is aggregated, in phase 2 the Analysis console retrieves the data and analyzes it to identify an attack.
NetFlow data
The NetFlow data is collected on most networks (Lin et al. 2018). An example of a NetFlow record is given as follows:

6652800, 4666, Comp107130, Comp584379, 6, Port04167, 443, 130, 82, 71556, 55117
In this NetFlow record, there are 11 fields. The first field contains the start time (6652800). The second field contains the duration of the communications between the source and destination devices (4666). The third and fourth fields contain the source (Comp107130) and destination (Comp584379) devices, respectively. The fifth field contains the network protocol number (6, i.e., TCP). The sixth and seventh fields contain the source (Port04167) and destination (443) ports, respectively. The eighth and ninth fields contain the number of packets (130) and bytes (82) sent by the source device, respectively. The tenth and eleventh fields contain the number of packets (71556) and bytes (55117) sent by the destination device, respectively.
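The field layout above can be parsed mechanically; a minimal sketch follows (the field names are our own labels, since the raw records are unlabeled):

```python
# The 11 comma-separated NetFlow fields, in the order described above.
# These names are hypothetical labels, not part of the dataset.
FIELDS = [
    "start_time", "duration", "src_device", "dst_device", "protocol",
    "src_port", "dst_port", "src_packets", "src_bytes",
    "dst_packets", "dst_bytes",
]

def parse_record(line):
    """Parse one NetFlow record line into a dictionary."""
    values = [v.strip() for v in line.split(",")]
    record = dict(zip(FIELDS, values))
    # Convert the purely numeric fields; device names and ports such as
    # "Port04167" remain strings.
    for key in ("start_time", "duration", "protocol",
                "src_packets", "src_bytes", "dst_packets", "dst_bytes"):
        record[key] = int(record[key])
    return record

rec = parse_record("6652800, 4666, Comp107130, Comp584379, 6, "
                   "Port04167, 443, 130, 82, 71556, 55117")
print(rec["src_device"], rec["protocol"])  # Comp107130 6
```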
Identifying correlations of reflection attacks
To execute a reflection attack, an attacker uses a tactic called IP (Internet Protocol) spoofing which replaces the real sender’s source IP address with the IP address of another device, as depicted in Fig. 3. This causes the target device to respond to the request and send the answer to the victim host IP address. For example, a firewall may be configured to allow port 137 (i.e., NetBIOS) traffic so that computers on a local area network can communicate with network hardware and transmit data across the network. An attacker can take advantage of such a rule in the firewall and use some NetBIOS servers as intermediaries to execute a reflection attack on other NetBIOS servers.
Thus, our objective is to determine whether reflection attacks are "correlated" or "not correlated". By "correlated", we mean the NetFlow records which are assigned the largest positive regression coefficients by the regression model. By "not correlated", we mean the NetFlow records which are assigned regression coefficients close to 0 by the regression model. In this paper, we aim to identify correlations of reflection attacks in the NetFlow data. The research problem that we address in this paper is given as follows: Given (a) the NetFlow data, (b) a network protocol number, and (c) a range of dates:

Identify the NetFlow records which are assigned the largest positive regression coefficients or the smallest regression coefficients by the regression model.

Identify the devices which are associated with the reflection attack and obtain the amount of traffic which is generated by the attack.

Identify the time elapsed between the start times of two adjacent NetFlow records which are associated with the reflection attack.
As such, the workflow we propose consists of three phases, as depicted in Fig. 4. The first phase in the workflow is Data preprocessing. It extracts the features in the NetFlow data and organizes the features into data structures. After the data structures are generated, the second phase of Regression models training applies different regression algorithms to learn the regression coefficients of multiple features given a target feature. This phase corresponds to “identifying” correlations of reflection attacks from the NetFlow data. Then, the third phase of Regression models validation applies statistical validation techniques to determine whether the regression model’s estimated values are close to the observed values in the data. Next, we present the details for each of the three modules in the workflow.
Data preprocessing
In the data preprocessing phase, the goal is to present the NetFlow data in a structured format so that the data can be easily processed by data analysis algorithms (Tan et al. 2006). To attain this, we need to address three issues: (a) the NetFlow data contains vectors of network traffic, (b) the NetFlow records are unlabelled, and (c) the magnitude, range and unit of the feature values are different. By unlabelled, we mean that there are no NetFlow records labelled as “malicious” or “benign” in the NetFlow data.
Data formatting
The NetFlow data is captured in a way such that the network traffic is represented by four vectors corresponding to all the NetFlow records for one day. The four vectors are: (a) the number of packets sent by the source device, (b) the number of bytes sent by the source device, (c) the number of packets sent by the destination device, and (d) the number of bytes sent by the destination device. To address this issue, we construct a feature-count data matrix. In this data matrix, the columns represent the NetFlow records and the rows represent the samples of a vector in the NetFlow records. To construct the feature-count data matrix, we implemented a function in the Data preprocessor module. The process is given as follows:

Obtain the number of NetFlow records and initialize a feature-count data matrix, where the columns and rows of the data matrix are equal to the number of NetFlow records.

Fill the diagonal of the data matrix with the vector value (the number of packets sent by the source device, the number of packets sent by the destination device, the number of bytes sent by the source device, or the number of bytes sent by the destination device) corresponding to the respective NetFlow record.

Fill all the remaining cells in the data matrix with zero.
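The three steps above amount to placing one vector value per record on the diagonal of a square matrix and zeroing everything else. A minimal sketch, assuming NumPy and hypothetical per-record values:

```python
import numpy as np

# Build the feature-count data matrix described above: an n-by-n matrix
# (n = number of NetFlow records) whose diagonal holds one chosen vector
# value per record (e.g. packets sent by the source device) and whose
# remaining cells are zero.
def feature_count_matrix(values):
    n = len(values)
    matrix = np.zeros((n, n))
    np.fill_diagonal(matrix, values)
    return matrix

# Hypothetical per-record packet counts for four NetFlow records.
m = feature_count_matrix([130, 12, 71556, 5])
print(m.shape)  # (4, 4)
```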
Data scaling
In data scaling, the values in the data are transformed so that they fit within a specific scale. The values in the NetFlow data vary in magnitude, range and unit. For example, the number of packets sent by a source device is typically lower than the number of bytes sent by that source device. Furthermore, the ranges of values for the number of packets sent and the number of bytes sent are different. Thus, in order for a regression model to interpret the features on the same scale, we need to perform data scaling.
There are two standard methods for scaling data values (Agresti and Franklin 2009): (a) normalization, and (b) standardization. Data normalization scales the data values into a range of [0, n]. In contrast, data standardization scales the data values to have a mean of 0 and a standard deviation of 1. Data normalization is useful when the data is needed in bounded intervals; however, it makes outliers difficult to identify. In contrast, data standardization produces useful information about outliers, which makes the regression model less sensitive to outliers (Agresti and Franklin 2009). Thus, we scale the values in the feature-count data matrix so that they are centered around the mean with unit standard deviation.
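A minimal standardization sketch (z-scoring each column of the matrix; the sample values are hypothetical):

```python
import numpy as np

# Standardize each column to zero mean and unit standard deviation, as
# described above. A zero-variance column is left unscaled to avoid
# division by zero.
def standardize(matrix):
    matrix = np.asarray(matrix, dtype=float)
    mean = matrix.mean(axis=0)
    std = matrix.std(axis=0)
    std[std == 0] = 1.0
    return (matrix - mean) / std

# Hypothetical (packets, bytes) rows for three NetFlow records.
scaled = standardize([[130.0, 82.0], [71556.0, 55117.0], [12.0, 7.0]])
print(scaled.mean(axis=0))  # close to [0, 0]
```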
Regression models training
After the feature-count data matrix is generated, we need to identify (a) which NetFlow records are correlated during the reflection attack, and (b) which NetFlow records are not correlated during the reflection attack. To attain this, we use the Ridge, LASSO and Elastic Net regression models to obtain the regression coefficients for multiple NetFlow records given a target NetFlow record.
Multiple linear regression (MLR) and polynomial regression (PLR) are standard regression algorithms that are widely used to model complex relationships with many variables (Schroeder et al. 2016). Multiple linear regression models the relationship between the dependent variable and two or more independent variables using a straight line. In contrast, polynomial regression models the relationship between the dependent variable and two or more independent variables as an \(n\)th-degree polynomial. However, MLR and PLR models are susceptible to overfitting on the training data, which causes the model to perform poorly on new data. In contrast to MLR and PLR models, which do not use regularization, the LASSO, Ridge and Elastic Net regression models use regularization to constrain the regression coefficients and improve the model's accuracy. Regularization is achieved by penalizing variables that have a large coefficient value. The LASSO, Ridge and Elastic Net regression model functions are given in Table 2. The LASSO regression model includes a penalty term called the L1-norm (Tibshirani 1996). It sets the regression coefficients of some of the independent variables to zero. The Ridge regression model includes a penalty term called the L2-norm. In contrast to the L1-norm, the L2-norm shrinks the regression coefficients of all the independent variables towards zero (Hoerl and Kennard 2000). The Elastic Net regression model includes both the L1-norm and L2-norm penalty terms (Zou et al. 2005).
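The contrasting effect of the L1 and L2 penalties can be observed on a small synthetic example. This sketch uses scikit-learn's Ridge, Lasso and ElasticNet estimators; the data and penalty strengths are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

# Synthetic stand-in for the standardized feature-count matrix (X) and
# the target NetFlow record (y): only feature 0 actually matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

for name, model in [("Ridge", ridge), ("LASSO", lasso), ("Elastic Net", enet)]:
    print(name, np.round(model.coef_, 2))
# LASSO's L1 penalty drives the irrelevant coefficients to exactly 0,
# whereas Ridge's L2 penalty only shrinks all coefficients towards 0.
```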
Handling bias in regularized regression models
To perform regression analysis, we need to address two issues: (a) handle bias in the regularized regression model, and (b) select the penalty parameter. In statistics, bias is anything that leads to a systematic difference between the observed values in the data and the estimates produced by a regression model (Schroeder et al. 2016). The LASSO, Ridge and Elastic Net regression models add a penalty term to the cost function. The penalty term penalizes a regression model with large regression coefficients, which reduces the model's variance. For example, if the number of packets sent by NTP server A ranges from 80,000 to 120,000 per minute and the number of packets sent by NTP server B ranges from 800 to 1200 per minute, a change of 1 packet for NTP server B corresponds to a much larger regression coefficient than a change of 1 packet for NTP server A. If a larger regression coefficient for NTP server B is obtained, the regularized regression model will penalize NTP server B's regression coefficient, and a biased model can be produced. To resolve this issue, we standardize all the values in the feature-count data matrix. Then, we input the standardized feature-count data matrix into the LASSO, Ridge and Elastic Net regression models, train each regression model and obtain the fitted regression models.
Selecting the penalty parameter
The penalty parameter (\(\lambda\)) is a value that controls the amount of shrinkage of the regression coefficients in the LASSO, Ridge and Elastic Net regression models (Schroeder et al. 2016). When \(\lambda = 0\), no regression coefficients are removed. As \(\lambda\) increases, more regression coefficients are removed. When \(\lambda = \infty\), all the regression coefficients are removed. To select the best value for \(\lambda\), we use a general approach called k-fold cross-validation (Tan et al. 2006). It extracts a portion of the data and sets it aside to be used as a test set. The remaining portion of the data is used as the training set. The regression model is trained on the training dataset. Then, the test dataset is used to test the regression model. 10-fold cross-validation is typically used to obtain the best \(\lambda\) value (Tan et al. 2006). We implemented a function in the Regression models trainer module to perform 10-fold cross-validation and select the penalty parameter. The process is given in Algorithm 1:
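The selection procedure can be sketched with scikit-learn's GridSearchCV, where \(\lambda\) corresponds to the alpha parameter; the candidate grid and synthetic data below are our own assumptions, not the paper's Algorithm 1:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data: two informative features plus small noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

# Select the penalty parameter by 10-fold cross-validation over a
# hypothetical candidate grid, as in the procedure described above.
search = GridSearchCV(
    Lasso(max_iter=10000),
    param_grid={"alpha": [0.001, 0.01, 0.1, 1.0, 10.0]},
    cv=10,  # 10-fold cross-validation
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
best_alpha = search.best_params_["alpha"]
print(best_alpha)
```

With low-noise data such as this, cross-validation favours a small penalty; on real NetFlow data the selected value depends on the day's traffic.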
Regression models validation
Once the regression model is trained, we need to assess the model's accuracy. There are two standard metrics for measuring how close the values estimated by the regression model are to the observed values in the data (Agresti and Franklin 2009): (a) the coefficient of determination (\(R^{2}\)), and (b) the Root Mean Squared Error (RMSE). The coefficient of determination is the proportion of variation in the dependent variable that is predictable from the independent variables. In contrast to \(R^{2}\), RMSE is the average difference between the regression model's estimated values and the observed values. An RMSE value ranges between 0 and infinity. If the RMSE value is close to 0, the regression model replicated the observed values accurately. However, a large RMSE value is difficult to interpret. In contrast to the RMSE value, the \(R^{2}\) value ranges between 0 and 1. If \(R^{2} = 0\), the regression model's estimated values are different from the observed values. If \(R^{2} = 1\), the regression model's estimated values match the observed values. Thus, we use the \(R^{2}\) statistic to obtain the accuracy of the Ridge, LASSO and Elastic Net regression models.
Accounting for inflation in \(R^{2}\)
The \(R^{2}\) statistic is at least weakly increasing when more independent variables are added to the regression model. If redundant independent variables were included in the regression model, the \(R^{2}\) value remains the same or increases. Consequently, the \(R^{2}\) statistic alone cannot determine if the independent variables are useful. To resolve this issue, we obtain the adjusted \(R^{2}\) value (Yin and Fan 2001). It determines whether adding more independent variables actually increases the regression model’s fit. We implemented a function in the Regression models validator module to calculate the adjusted \(R^{2}\). The formula for calculating the adjusted \(R^{2}\) is (Agresti and Franklin 2009): \(1 - \{(1 - R^{2})(n - 1) \div (n - p - 1)\}\), where n is the number of NetFlow records associated with the reflection attack and p is the total number of independent variables.
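The formula translates directly into code (a one-line sketch; `r2`, `n` and `p` are as defined above):

```python
def adjusted_r_squared(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1) / (n - p - 1), where n is the
    number of NetFlow records associated with the reflection attack and
    p is the number of independent variables.  It penalizes independent
    variables that do not improve the model's fit."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```

Because the correction multiplies \((1 - R^{2})\) by \((n - 1) \div (n - p - 1) \ge 1\), the adjusted \(R^{2}\) never exceeds \(R^{2}\).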
Evaluation on an enterprise network
We conduct our study of reflection attacks on an enterprise network operated by Los Alamos National Laboratories. The network hosts 60,000 devices and provides storage and user account services. NetFlow data is collected across the network (Turcotte et al. 2018). One day’s worth of NetFlow data contains 220,000,000 NetFlow records on average. All the NetFlow records are unlabeled. It was reported that the NetFlow data contains compromised devices (Zhenzheng et al. 2018), but the times and number of compromised devices are not known. Thus, we randomly selected eight days’ worth of NetFlow data for analysis.
Phase 1: Identify correlations of reflection attacks
To ascertain whether reflection attacks are correlated or not correlated, first we obtain the NetFlow records which are associated with a reflection attack. We implemented a function in our workflow to scan the NetFlow data and extract NetFlow records containing the same source and destination port numbers. We applied the function to the eight days of NetFlow data and identified reflection attacks on several network protocols, though we focused on two of them: reflection attacks on the NTP and NetBIOS servers. DDoS attacks on the NTP and NetBIOS servers have been widely reported (Kopp et al. 2021; Sarmento et al. 2021). For each day, we assigned the first NetFlow record as the dependent variable and assigned the remaining NetFlow records as independent variables. Then, we trained the Ridge, LASSO and Elastic Net regression models on the four attributes in the NetFlow data separately and obtained the fitted regression models. The four attributes are: (a) number of packets sent by the source device, (b) number of bytes sent by the source device, (c) number of packets sent by the destination device, and (d) number of bytes sent by the destination device.
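The extraction step can be sketched as follows, assuming the NetFlow data is loaded into a pandas DataFrame. The column names `SrcPort` and `DstPort` are hypothetical and may differ from the LANL schema:

```python
import pandas as pd

def reflection_candidates(netflow: pd.DataFrame, port: int) -> pd.DataFrame:
    """Extract NetFlow records whose source and destination port numbers
    are equal -- the signature used here to associate records with a
    reflection attack on a given service port."""
    same_port = netflow["SrcPort"] == netflow["DstPort"]
    return netflow[same_port & (netflow["SrcPort"] == port)]
```

For NTP, `port` would be 123; for NetBIOS, 137 or 138.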
Reflection attack on the NTP server
First, we obtain the \(R^{2}\) and adjusted \(R^{2}\) values for the Elastic Net, Ridge and LASSO regression models trained on the number of packets sent by the source device attribute. The adjusted \(R^{2}\) shows if adding more NetFlow records in the LASSO, Ridge and Elastic Net regression models increases the \(R^{2}\) value. To obtain the adjusted \(R^{2}\) value, we set \(p = 1\) and \(n =\) the number of NetFlow records associated with the NTP server reflection attack. The \(R^{2}\) and adjusted \(R^{2}\) values are given in Table 3. From Table 3, we observed that (a) the \(R^{2}\) and adjusted \(R^{2}\) values for the Ridge regression model ranged from 0.01 to 0.03 on days 1, 3, 4, 5, 7 and 8, 0.05 on day 2 and 0.07 on day 6, (b) the \(R^{2}\) and adjusted \(R^{2}\) values for the Elastic Net regression model ranged from 0.01 to 0.03 on days 1, 4, 5, 6, 7 and 8, 0.04 on day 3 and 0.07 on day 2, and (c) the \(R^{2}\) and adjusted \(R^{2}\) values for the LASSO regression model ranged from 0.01 to 0.02 on days 1, 2, 3, 4, 6 and 7, 0.03 on day 8 and 0.06 on day 5. We obtained the \(R^{2}\) and adjusted \(R^{2}\) values for the Ridge, Elastic Net and LASSO regression models trained on the number of bytes sent by the source device, number of packets sent by the destination device, and number of bytes sent by the destination device attributes. Their \(R^{2}\) and adjusted \(R^{2}\) values ranged from 0.01 to 0.07 over the eight days.
On all the eight days, the \(R^{2}\) and adjusted \(R^{2}\) values for the Ridge, Elastic Net and LASSO regression models are close to 0, indicating that the accuracy of all three regression models is the same. Furthermore, the range of \(R^{2}\) and adjusted \(R^{2}\) values in all three regression models trained on the four attributes separately is the same. Moreover, the \(R^{2}\) and adjusted \(R^{2}\) values for the Ridge, LASSO and Elastic Net regression models are the same, indicating that adding more independent variables to all three regression models did not increase the regression models’ fit to the observed data. Thus, the number of packets sent by the source device attribute can be used as the primary attribute.
Next, we obtain the residuals from the Elastic Net, Ridge and LASSO regression models trained on the number of packets sent by the source device attribute. A residual is the difference between the regression model’s estimated value and the observed value in the data. Residual analysis belongs to a class of techniques for evaluating the goodness-of-fit of a fitted regression model. If a regression model is a good fit to the observed data, all its residual values will be close to or equal to 0. If a regression model is not a good fit to the observed data, some of its residual values will not be close to 0. To obtain the proportion of residuals, we implemented a function in the Regression models validator module. The process for obtaining the proportion of residuals is given as follows: (a) obtain the residual value for each sample in the regression model, (b) obtain the percentage of all unique residual values, and (c) obtain the cumulative distribution of the percentage of unique residual values. The proportion of residuals in the Elastic Net regression model for day 1 is shown in Fig. 5. From Fig. 5, we observed that (a) the residuals range from 0 to 80, and (b) a proportion of the residuals are greater than 0. When the residuals are greater than 0, it shows that the values estimated by the Elastic Net regression model differ from the observed values in the data. We obtained the proportion of residuals in the Elastic Net regression model for days 2 to 8. On all the 7 days, their residuals range from 0 to 80 and a proportion of those residuals are greater than 0. Next, we obtained the residuals from the Ridge and LASSO regression models for days 1 to 8. On all the eight days, their residuals ranged from 0 to 80 and a proportion of those residuals are greater than 0.
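Steps (a)–(c) can be sketched as follows. This is a minimal sketch; it assumes residuals are taken as absolute differences, consistent with the non-negative ranges reported in Fig. 5:

```python
import numpy as np

def residual_distribution(observed, estimated):
    """Steps (a)-(c): residual per sample, percentage of each unique
    residual value, and the cumulative distribution of those percentages."""
    residuals = np.abs(observed - estimated)            # (a) residual per sample
    values, counts = np.unique(residuals, return_counts=True)
    pct = 100.0 * counts / residuals.size               # (b) percentage of unique values
    return values, np.cumsum(pct)                       # (c) cumulative distribution
```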
Next, we determine which one of the three regression models best fits the data. To achieve this, we apply a standard technique called the general F-statistic. The F-statistic is used to compare statistical models that have been fitted to a dataset, in order to identify the statistical model that best describes the population from which the data is sampled (Agresti and Franklin 2009). First, we define the null and alternate hypotheses. The null hypothesis is that the sum-of-squares error (SSE) of one regression model is close to the SSE of a different regression model. The alternate hypothesis is that the SSE of one regression model differs significantly from the SSE of a different regression model. The formula for computing the general linear F-statistic is (Agresti and Franklin 2009): \((\frac{SSE(M_{1}) - SSE(M_{2})}{df_{M1} - df_{M2}}) \div \frac{SSE(M_{2})}{df_{M2}}\), where \(M_{1}\) and \(M_{2}\) are two different regression models, and \(df_{M1}\) and \(df_{M2}\) are the degrees of freedom associated with regression models \(M_{1}\) and \(M_{2}\) respectively. When \(F^{*} \ge 3.95\), we reject the null hypothesis in favour of the alternate hypothesis. We implemented a function in the Regression models validator module to obtain the F-statistic. We applied the general linear F-statistic on the Elastic Net, Ridge and LASSO regression models and obtained the \(F^{*}\) value. A summary of F-tests on the Elastic Net, Ridge and LASSO regression models is given in Table 4. From Table 4, we observed that from days 1 to 8 the \(F^{*}\) value is 0. Since \(F^{*} \le 3.95\), we fail to reject the null hypothesis.
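The F-statistic formula itself is straightforward to implement (a sketch; `sse1`, `df1`, `sse2` and `df2` are the SSEs and degrees of freedom of models \(M_{1}\) and \(M_{2}\)):

```python
def general_f_statistic(sse1, df1, sse2, df2):
    """General linear F-statistic comparing regression models M1 and M2:
    F* = ((SSE(M1) - SSE(M2)) / (df_M1 - df_M2)) / (SSE(M2) / df_M2).
    F* close to 0 means the two models fit the data about equally well;
    here, F* >= 3.95 rejects the null hypothesis that the SSEs are close."""
    return ((sse1 - sse2) / (df1 - df2)) / (sse2 / df2)
```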
On all the eight days, a proportion of residuals in all three regression models are greater than 0, indicating that the values estimated by all three regression models differ from the observed values in the data. The residual values in all three regression models ranged from 0 to 80. Furthermore, the \(F^{*}\) value for all three regression models is 0. When (a) the \(F^{*}\) value is 0, (b) the range of residuals in all three regression models are the same, and (c) a proportion of residuals in all three regression models are greater than 0, the Elastic Net regression model can be used as the main model. 
Next, we obtain the regression coefficients for all NetFlow records in the Elastic Net regression model. A summary of regression coefficients is given in Table 5. From Table 5, we observed that all regression coefficients obtained for days 1 to 8 are close to 0 or equal to 0. We obtained the regression coefficients of all NetFlow records from the Ridge and LASSO regression models for days 1 to 8. On all the eight days, the regression coefficients of all NetFlow records in the Ridge and LASSO regression models are close to 0 or equal to 0.
On days 1 to 8, the regression coefficients of all NetFlow records associated with the NTP server reflection attack are close to 0 or equal to 0, indicating that reflection attacks on the NTP servers are not correlated. 
Reflection attack on the NetBIOS server
As was done with the NTP servers, we obtain the \(R^{2}\) and adjusted \(R^{2}\) values for the Elastic Net, Ridge and LASSO regression models trained on the number of packets sent by the source device attribute. We set \(p = 1\) and \(n =\) the number of NetFlow records associated with the NetBIOS server reflection attack. The \(R^{2}\) and adjusted \(R^{2}\) values are given in Table 6. From Table 6, we observed that (a) the \(R^{2}\) and adjusted \(R^{2}\) values for the Ridge regression model ranged from 0.01 to 0.02 on days 1 to 8, (b) the \(R^{2}\) and adjusted \(R^{2}\) values for the Elastic Net regression model ranged from 0.01 to 0.02 on days 1 to 8, and (c) the \(R^{2}\) and adjusted \(R^{2}\) values for the LASSO regression model ranged from 0.01 to 0.02 on days 1 to 5, 7 and 8 and 0.04 on day 6. We obtained the \(R^{2}\) and adjusted \(R^{2}\) values from the Ridge, Elastic Net and LASSO regression models trained on the number of bytes sent by the source device, number of packets sent by the destination device, and number of bytes sent by the destination device attributes. Their \(R^{2}\) and adjusted \(R^{2}\) values ranged from 0.01 to 0.05 over the eight days.
On all the eight days, the \(R^{2}\) and adjusted \(R^{2}\) values from the Ridge, Elastic Net and LASSO regression models are close to 0, indicating that the accuracy of all three regression models is the same. Furthermore, the range of \(R^{2}\) and adjusted \(R^{2}\) values from all three regression models trained on the four attributes separately is the same. Moreover, the \(R^{2}\) and adjusted \(R^{2}\) values for the Ridge, LASSO and Elastic Net regression models are the same, indicating that adding more independent variables to all three regression models did not increase the regression models’ fit to the observed data. Thus, the number of packets sent by the source device attribute can be used as the primary attribute.
Next, we obtain the residuals in the Elastic Net, Ridge and LASSO regression models trained on the number of packets sent by the source device attribute. The proportion of residuals in the Elastic Net regression model for day 1 is shown in Fig. 6. From Fig. 6, we observed that (a) the residuals range from 0 to 80, and (b) a proportion of the residuals are greater than 0. When the residuals are greater than 0, it shows that the values estimated by the Elastic Net regression model differ from the observed values in the data. We obtained the proportion of residuals in the Elastic Net regression model for days 2 to 8. On all the 7 days, their residuals range from 0 to 80 and a proportion of those residuals are greater than 0. Next, we obtained the residuals in the Ridge and LASSO regression models for days 1 to 8. On all the eight days, the residuals in the Ridge and LASSO regression models ranged from 0 to 80 and a proportion of those residuals are greater than 0.
Next, we determine which one of the three regression models best fits the data. We applied the general linear F-statistic on the Elastic Net, Ridge and LASSO regression models and obtained the \(F^{*}\) value. A summary of F-tests on the Elastic Net, Ridge and LASSO regression models is given in Table 7. From Table 7, we observed that from days 1 to 8 the \(F^{*}\) value is 0. Since \(F^{*} \le 3.95\), we fail to reject the null hypothesis.
On all the eight days, a proportion of the residuals from the Elastic Net, Ridge and LASSO regression models are greater than 0, indicating that the values estimated by all three regression models differ from the observed values in the data. The residuals in all three regression models ranged from 0 to 80. Furthermore, the \(F^{*}\) value for the Elastic Net, Ridge and LASSO regression models is 0. When (a) the \(F^{*}\) value is 0, (b) the range of residuals in all three regression models is the same, and (c) a proportion of residuals in all three regression models are greater than 0, the Elastic Net regression model can be used as the main model.
Next, we obtain the regression coefficients for all NetFlow records in the Elastic Net regression model. A summary of the regression coefficients is given in Table 8. From Table 8, we observed that all the regression coefficients obtained for days 1 to 8 are close to 0 or equal to 0. We obtained the regression coefficients of all NetFlow records in the Ridge and LASSO regression models for days 1 to 8. On all the eight days, the regression coefficients of all NetFlow records in the Ridge and LASSO regression models are close to 0 or equal to 0.
On days 1 to 8, the regression coefficients of all NetFlow records associated with the NetBIOS server reflection attack are close to 0 or equal to 0, indicating that reflection attacks on the NetBIOS servers are not correlated.
Phase 2: Identify the devices and amount of traffic generated by the reflection attacks on NTP and NetBIOS servers
The first phase of our analysis is characterized by the identification of correlations of reflection attacks on the NTP servers and correlations of reflection attacks on the NetBIOS servers. We observed that (a) reflection attacks on the NTP servers are not correlated, and (b) reflection attacks on the NetBIOS servers are not correlated. Our next objective is to identify the devices and the amount of traffic generated by the NTP servers and NetBIOS servers reflection attacks. To realize this, we obtain the source and destination devices which are associated with those reflection attacks.
First, we count the number of unique source and destination devices associated with the reflection attacks on the NTP servers. A summary of source and destination devices is given in Table 9. From Table 9, we observed that from day 1 to day 8, multiple source and destination devices are associated with reflection attacks on NTP servers.
Next, we obtain (a) the number of packets and bytes sent by the source and destination devices associated with these NTP server reflection attacks, and (b) the total number of packets and bytes transmitted in the network. The total number of packets, number of malicious packets and percentage of malicious packets are shown in Fig. 7. From Fig. 7a, we observed that the percentage of malicious packets sent by these source devices ranges from 0.22 to 20.44% over the eight days. From Fig. 7b, we observed that the percentage of malicious packets sent by these destination devices ranges from 2.56 to 52.01% over the eight days. The total number of bytes, number of bytes contained in the malicious packets and percentage of bytes in those malicious packets are shown in Fig. 8. From Fig. 8a, we observed that the percentage of bytes in those malicious packets sent by these source devices is 0% on all eight days. From Fig. 8b, we observed that the percentage of bytes in the malicious packets sent by these destination devices is 0% on all eight days. This result shows that the malicious packets associated with the reflection attack on these NTP servers contained 0-byte payloads.
While the percentage of malicious packets sent by these source devices ranged from 0.22 to 20.44% and the percentage of malicious packets sent by these destination devices ranged from 2.56 to 52.01% over the eight days, all the malicious packets contained 0-byte payloads, indicating that the reflection attacks did not overwhelm these NTP servers.
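The packet and byte percentages above can be sketched as follows, again assuming a pandas DataFrame with hypothetical `Packets` and `Bytes` columns; `attack` is the subset of records associated with the reflection attack:

```python
import pandas as pd

def malicious_traffic_share(netflow: pd.DataFrame, attack: pd.DataFrame):
    """Percentage of packets and of bytes carried by the malicious
    (attack-related) NetFlow records, relative to all network traffic.
    Column names are hypothetical; the LANL schema may differ."""
    pct_packets = 100.0 * attack["Packets"].sum() / netflow["Packets"].sum()
    pct_bytes = 100.0 * attack["Bytes"].sum() / netflow["Bytes"].sum()
    return pct_packets, pct_bytes
```

A 0% byte share with a non-zero packet share is exactly the signature of 0-byte payloads observed above.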
As was done with the NTP servers, we count the number of unique source and destination devices which are associated with the NetBIOS server reflection attack. A summary of source and destination devices is given in Table 10. From Table 10, we observed that from day 1 to day 8, multiple source and destination devices are associated with reflection attacks on NetBIOS servers.
Next, we obtain (a) the number of packets and bytes sent by the source and destination devices associated with the reflection attack on these NetBIOS servers, and (b) the total number of packets and bytes transmitted in the network. The total number of packets, number of malicious packets and percentage of malicious packets are shown in Fig. 9. From Fig. 9a, we observed that the percentage of malicious packets sent by these source devices ranges from 0.64 to 14.63% over the eight days. From Fig. 9b, we observed that the percentage of malicious packets sent by these destination devices ranges from 6.34 to 45.65% over the eight days. The total number of bytes, number of bytes contained in the malicious packets and the percentage of bytes in those malicious packets are shown in Fig. 10. From Fig. 10a, we observed that the percentage of bytes in those malicious packets sent by these source devices is 0% on all eight days. From Fig. 10b, we observed that the percentage of bytes in those malicious packets sent by these destination devices is 0% on all eight days. This result shows that the malicious packets associated with the reflection attack on these NetBIOS servers contained 0-byte payloads.
While the percentage of malicious packets sent by these source devices ranged from 0.64 to 14.63% and the percentage of malicious packets sent by these destination devices ranged from 6.34 to 45.65% over the eight days, those malicious packets contained 0-byte payloads, indicating that the reflection attacks did not overwhelm these NetBIOS servers.
Phase 3: Identify the dwell times of reflection attacks on NTP and NetBIOS servers
The second phase of our analysis is characterized by the identification of devices associated with the NTP and NetBIOS server reflection attacks and the network traffic generated by those attacks. We observed that (a) multiple source and destination devices are associated with those reflection attacks, and (b) a small percentage of network traffic is generated by the NTP and NetBIOS server reflection attacks. Our next objective is to identify the dwell time of NTP and NetBIOS server reflection attacks. To achieve this, we obtain the time elapsed between the start times of adjacent NetFlow records associated with the reflection attack.
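A sketch of the dwell-time computation, assuming the attack records carry a `StartTime` column in epoch seconds (a hypothetical column name):

```python
import pandas as pd

def dwell_times(attack: pd.DataFrame) -> pd.Series:
    """Dwell time: seconds elapsed between the start times of adjacent
    NetFlow records associated with the reflection attack."""
    starts = attack["StartTime"].sort_values()
    return starts.diff().dropna()  # difference between each record and its predecessor
```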
The dwell times for reflection attacks on the NTP server on days 1 to 8 are shown in Fig. 11. From Fig. 11a–h, we observed that the dwell times ranged from 0 to 68 s over the eight days.
The dwell times of NTP server reflection attacks ranged from 0 to 68 s over the eight days, indicating that the time elapsed between reflection attacks on these NTP servers is small.
As was done with the NTP servers, we obtain the dwell times for reflection attacks on the NetBIOS server. The dwell times on days 1 to 8 are shown in Fig. 12. From Fig. 12a–h, we observed that the dwell times ranged from 0 to 198 s over the eight days.
The dwell times of NetBIOS server reflection attacks ranged from 0 to 198 s over the eight days, indicating that the time elapsed between reflection attacks on these NetBIOS servers is small.
Discussion
From these results, we have shown that the LASSO, Ridge and Elastic Net regression models are unsuitable as a means for identifying correlations of reflection attacks. Our analysis of the NetFlow data from a large enterprise network helps to bring awareness to the extent to which reflection attacks are correlated. The fact that reflection attacks on these NTP servers are not correlated and reflection attacks on these NetBIOS servers are not correlated is not obvious. For example, the regression coefficients in the Elastic Net, LASSO and Ridge regression models from day 1 to day 8 are close to 0 or equal to 0. We summarize our findings and recommendations in Table 11.
We observed that the network traffic generated by these reflection attacks did not overwhelm the NTP and NetBIOS servers on all eight days. While network administrators are less concerned with a reflection attack that did not lead to a loss of network service, it is better to equip the network with reflection attack detectors to reduce network service downtime. These recommendations are suitable for various networks as well, since complex networks including but not limited to peer-to-peer networks and Internet-of-Things networks can also benefit from NetFlow data analysis.
Threats to validity
We have identified the following threats to validity: (a) internal validity and (b) external validity.
Internal validity is concerned with the extent to which a cause-and-effect relationship established in an empirical study cannot be explained by other factors (Slack et al. 2001). Those factors comprise: (a) the quality of the NetFlow data, which can lead to variations in the network traffic over time and thus could mislead our correlation analysis, (b) the choice of dates of NetFlow data, which can lead to selection bias, and (c) the types of data analyzed in our study. Regarding the quality of the NetFlow data, Turcotte et al. (2018) showed that the volume of network packets captured in the NetFlow data matched the expected traffic in the enterprise network operated by Los Alamos National Laboratories, which assured that the NetFlow data is of high quality. With respect to the dates of NetFlow data, we may have missed reflection attacks which are not correlated. To address this issue, we randomly selected eight days’ worth of NetFlow data for our analysis. On the types of data analyzed in our study, we did not consider the Windows server logs (Dwyer et al. 2013), hardware performance data (Yu et al. 2019) or behaviour logs (Shalaginov et al. 2016) as they are beyond the scope of this paper, nor did we perform deep packet inspection, because it would require substantial resources out of the reach of this paper. Having said that, we showed that reflection attacks on the NTP and NetBIOS servers exist in the NetFlow data and those reflection attacks are not correlated.
External validity is concerned with the extent to which the results presented in an empirical study can be generalized to other settings (Slack et al. 2001). Some data centers do not collect detailed security incident reports, and some may not release the security logs due to privacy concerns (Miranskyy et al. 2016). Our conclusions are based on the NetFlow data of a large enterprise network, and the results we presented in our study may not generalize to other network models. Consequently, this makes our statistical analysis difficult to confirm. Having said that, network monitoring tools are currently being deployed on these networks (Ghafir et al. 2016). Therefore, validating our analysis has become attainable.
Conclusion and future work
An approach based on correlating NetFlow records in the NetFlow data is proposed to identify correlations of reflection attacks. We showed that reflection attacks on the NTP and NetBIOS servers exist in the NetFlow data and evaluated the Ridge, Elastic Net and LASSO regression models. We applied k-fold cross-validation and the coefficient of determination to ensure accurate results. From our study, we learned that (a) reflection attacks on the NTP servers are not correlated, (b) reflection attacks on the NetBIOS servers are not correlated, (c) the dwell time between reflection attacks on the NTP and NetBIOS servers is short, and (d) a small percentage of network traffic is generated by reflection attacks on the NTP and NetBIOS servers.
In our future work, we plan to apply our approach on NetFlow data from more networks, and identify correlations of reflection attacks other than reflection attacks on the NTP and NetBIOS servers.
Availability of data and materials
The NetFlow data analyzed during this study is available for downloading at https://csr.lanl.gov/data/2017/. The code used in this study is available from the corresponding author upon request.
References
Agresti A, Franklin C (2009) Statistics: the art and science of learning from data. Prentice Hall, Upper Saddle River
Ahuja V, Kotkar M, Bhongade R, Kshirsagar D (2022) Reflection based distributed denial of service attack detection system. In: Proceedings of the 6th IEEE international conference on computing, communication, control and automation (ICCUBEA). IEEE, pp 1–5. https://doi.org/10.1109/ICCUBEA54992.2022.10011055
Anagnostopoulos M, Lagos S, Kambourakis G (2022) Large-scale empirical evaluation of DNS and SSDP amplification attacks. J Inf Secur Appl 66:103168. https://doi.org/10.1016/j.jisa.2022.103168
Benmohamed E, Thaljaoui A, El Khediri S, Aladhadh S, Alohali M (2023) DDoS attacks detection with half autoencoderstacked deep neural network. Int J Coop Inf Syst 235:25. https://doi.org/10.1142/S0218843023500259
Bian H, Bai T, Salahuddin MA, Limam N, Daya AA, Boutaba R (2019) Host in danger? Detecting network intrusions from authentication logs. In: Proceedings of 15th international conference on network and service management (CNSM), pp 1–9. https://doi.org/10.23919/CNSM46954.2019.9012700
Chawla S, Sachdeva M, Behal S (2016) Discrimination of DDoS attacks and flash events using Pearson’s product moment correlation method. Int J Comput Sci Inf Secur (IJCSIS) 14(10):382
Cheng J, Li M, Tang X, Sheng VS, Liu Y, Guo W (2018) Flow correlation degree optimization driven random forest for detecting DDoS attacks in cloud computing. Secur Commun Netw. https://doi.org/10.1155/2018/6459326
Cheng Q, Wu C, Zhou S (2021) Discovering attack scenarios via intrusion alert correlation using graph convolutional networks. IEEE Commun Lett 25(5):1564–1567. https://doi.org/10.1109/LCOMM.2020.3048995
Chordiya AR, Majumder S, Javaid AY (2018) Maninthemiddle (MITM) attack based hijacking of HTTP traffic using open source tools. In: Proceedings of IEEE international conference on electro/information technology (EIT), pp 0438–0443. https://doi.org/10.1109/EIT.2018.8500144
Chuah E, Suri N, Jhumka A, Alt S (2021) Challenges in identifying network attacks using netflow data. In: Proceedings of IEEE international symposium on network computing and applications (NCA). https://doi.org/10.1109/NCA53618.2021.9685305
Cil AE, Yildiz K, Buldu A (2021) Detection of DDoS attacks with feed forward based deep neural network model. Expert Syst Appl 169:114520. https://doi.org/10.1016/j.eswa.2020.114520
Dasari KB, Devarakonda N (2023) Evaluation of SVM kernels with multiple uncorrelated feature subsets selected by multiple correlation methods for reflection amplification DDoS attacks detection, pp 99–111. https://doi.org/10.1007/9789811967917_6
Dwyer J, Truta TM (2013) Finding anomalies in windows event logs using standard deviation. In: Proceedings of the IEEE international conference on collaborative computing: networking, applications and worksharing, pp 563–570. https://doi.org/10.4108/icst.collaboratecom.2013.254136
Elsayed MS, Jahromi HZ, Nazir MM, Jurcut AD (2021) The role of CNN for intrusion detection systems: an improved CNN learning approach for SDNs. In: Proceedings of the international conference on future access enablers of ubiquitous and intelligent infrastructures. Springer, pp 91–104. https://doi.org/10.1007/9783030784591_7
Friedberg I, Skopik F, Settanni G, Fiedler R (2015) Combating advanced persistent threats: from network event correlation to incident detection. Comput Secur 48:35–57. https://doi.org/10.1016/j.cose.2014.09.006
Ghafir I, Prenosil V, Svoboda J, Hammoudeh M (2016) A survey on network security monitoring systems. In: Proceedings of the IEEE international conference on future internet of things and cloud workshops (FiCloudW), pp 77–82. https://doi.org/10.1109/WFiCloud.2016.30
Ghosh SK, Satvat K, Gjomemo R, Venkatakrishnan VN (2022) Ostinato: cross-host attack correlation through attack activity similarity detection. In: Badarla VR, Nepal S, Shyamasundar RK (eds) Information systems security. Springer, Cham, pp 1–22. https://doi.org/10.1007/9783031236907_1
Gondim JJC, de Oliveira Albuquerque R, Sandoval Orozco AL (2020) Mirror saturation in amplified reflection distributed denial of service: a case of study using SNMP, SSDP, NTP and DNS protocols. Future Gener Comput Syst 108:68–81. https://doi.org/10.1016/j.future.2020.01.024
Gottwalt F, Chang E, Dillon T (2019) CorrCorr: a feature selection method for multivariate correlation network anomaly detection techniques. Comput Secur 83:234–245. https://doi.org/10.1016/j.cose.2019.02.008
Haas S, Fischer M (2019) On the alert correlation process for the detection of multistep attacks and a graphbased realization. ACM SIGAPP Appl Comput Rev. https://doi.org/10.1145/3325061.3325062
Haas S, Wilkens F, Fischer M (2019) Efficient attack correlation and identification of attack scenarios based on network-motifs. In: Proceedings of the IEEE international performance computing and communications conference (IPCCC), pp 1–11. https://doi.org/10.1109/IPCCC47392.2019.8958734
Hachmi F, Boujenfa K, Limam M (2019) Enhancing the accuracy of intrusion detection systems by reducing the rates of false positives and false negatives through multiobjective optimization. J Netw Syst Manag 27(1):93–120. https://doi.org/10.1007/s109220189459y
Halsall F (1996) Data communications, computer networks and open systems. AddisonWesley, New York
Heryanto A, Stiawan D, Bin Idris MY, Bahari MR, Hafizin AA, Budiarto R (2022) Cyberattack feature selection using correlationbased feature selection method in an intrusion detection system. In: Proceedings of the international conference on electrical engineering, computer science and informatics (EECSI), pp 79–85. https://doi.org/10.23919/EECSI56542.2022.9946449
Hoerl AE, Kennard RW (2000) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 42(1):80–86. https://doi.org/10.1080/00401706.1970.10488634
Hostiadi DP, Ahmad T (2022) Hybrid model for bot group activity detection using similarity and correlation approaches based on network traffic flows analysis. J King Saud Univ Comput Inf Sci 34(7):4219–4232. https://doi.org/10.1016/j.jksuci.2022.05.004
Joshi J (2008) Network security: know it all. Morgan Kaufmann, Burlington, p 368
Kaja N, Shaout A, Ma D (2019) An intelligent intrusion detection system. Appl Intell 49(9):3235–3247. https://doi.org/10.1007/s10489019014361
Kopp D, Dietzel C, Hohlfeld O (2021) DDoS never dies? An IXP perspective on DDoS amplification attacks. In: Passive and active measurement. Springer, Cham, pp 284–301. https://doi.org/10.1007/9783030725822_17
Kshirsagar D, Kumar S (2022) A feature reduction based reflected and exploited DDoS attacks detection system. J Ambient Intell Hum Comput 1:13. https://doi.org/10.1007/s12652-021-02907-5
Lei S, Xia C, Li Z, Li X, Wang T (2021) HNN: a novel model to study the intrusion detection based on multifeature correlation and temporal–spatial analysis. IEEE Trans Netw Sci Eng 8(4):3257–3274. https://doi.org/10.1109/TNSE.2021.3109644
Lin WC, Ke SW, Tsai CF (2015) CANN: an intrusion detection system based on combining cluster centers and nearest neighbors. Knowl-Based Syst 78:13–21. https://doi.org/10.1016/j.knosys.2015.01.009
Lin H, Yan Z, Chen Y, Zhang L (2018) A survey on network security-related data collection technologies. IEEE Access 6:18345–18365. https://doi.org/10.1109/ACCESS.2018.2817921
Liu Z, Jin H, Hu Y, Bailey M (2018) Practical proactive DDoS-attack mitigation via endpoint-driven in-network traffic control. IEEE/ACM Trans Netw 26(4):1948–1961. https://doi.org/10.1109/TNET.2018.2854795
Mancini LV, Pietro R (2008) Intrusion detection systems. Springer, Berlin, p 250
Miranskyy A, Hamou-Lhadj A, Cialini E, Larsson A (2016) Operational-log analysis for big data systems: challenges and solutions. IEEE Softw 33(2):52–59. https://doi.org/10.1109/MS.2016.33
Mora-Gimeno FJ, Mora-Mora H, Volckaert B, Atrey A (2021) Intrusion detection system based on integrated system calls graph and neural networks. IEEE Access 9:9822–9833. https://doi.org/10.1109/ACCESS.2021.3049249
More KK, Gosavi PB (2016) A real time system for denial of service attack detection based on multivariate correlation analysis approach. In: Proceedings of the 2016 international conference on electrical, electronics, and optimization techniques (ICEEOT), pp 1125–1131. https://doi.org/10.1109/ICEEOT.2016.7754860
Najafimehr M, Zarifzadeh S, Mostafavi S (2022) A hybrid machine learning approach for detecting unprecedented DDoS attacks. J Supercomput 78(6):8106–8136. https://doi.org/10.1007/s11227-021-04253-x
Negi CS, Kumari N, Kumar P, Sinha SK (2021) An approach for alert correlation using ArcSight SIEM and open source NIDS. In: Nath V, Mandal JK (eds) Proceeding of fifth international conference on microelectronics, computing and communication systems. Springer, Singapore, pp 29–40. https://doi.org/10.1007/978-981-16-0275-7_3
Noble J, Adams NM (2016) Correlation-based streaming anomaly detection in cybersecurity. In: Proceedings of the IEEE international conference on data mining workshops (ICDMW), pp 311–318. https://doi.org/10.1109/ICDMW.2016.0051
Ramaki AA, Amini M, Ebrahimi Atani R (2015) RTECA: real time episode correlation algorithm for multistep attack scenarios detection. Comput Secur 49:206–219. https://doi.org/10.1016/j.cose.2014.10.006
Sarmento AG, Yeo KC, Azam S, Karim A, Al Mamun A, Shanmugam B (2021) Applying big data analytics in DDoS forensics: challenges and opportunities. In: Jahankhani H, Jamal A, Lawson S (eds) Cybersecurity, privacy and freedom protection in the connected world. Springer, Cham, pp 235–252. https://doi.org/10.1007/978-3-030-68534-8_15
Schroeder LD, Sjoquist DL, Stephan PE (2016) Understanding regression analysis: an introductory guide, vol 57. Sage Publications, Thousand Oaks
Shahraki A, Abbasi M, Haugen Ø (2020) Boosting algorithms for network intrusion detection: a comparative evaluation of Real AdaBoost, Gentle AdaBoost and Modest AdaBoost. Eng Appl Artif Intell 94:103770. https://doi.org/10.1016/j.engappai.2020.103770
Shalaginov A, Franke K, Huang X (2016) Malware beaconing detection by mining large-scale DNS logs for targeted attack identification. Int J Comput Syst Eng 10(4):743–755. https://doi.org/10.5281/zenodo.1123927
Shin MS, Jeong KJ (2006) Alert correlation analysis in intrusion detection. In: International conference on advanced data mining and applications. Springer, pp 1049–1056. https://doi.org/10.1007/11811305_114
Singh Samom P, Taggu A (2021) Distributed denial of service (DDoS) attacks detection: a machine learning approach. In: Applied soft computing and communication networks: proceedings of ACN 2020, pp 75–87. Springer. https://doi.org/10.1007/978-981-33-6173-7_6
Singh Samra R, Barcellos M (2023) DDoS2Vec: flow-level characterisation of volumetric DDoS attacks at scale. Proc ACM Netw 1(CoNEXT3):1–25. https://doi.org/10.1145/3629135
Slack MK, Draugalis JR (2001) Establishing the internal and external validity of experimental studies. Am J Health Syst Pharm 58(22):2173–2181. https://doi.org/10.1093/ajhp/58.22.2173
Tan PN, Steinbach M, Kumar V (2006) Introduction to data mining. Addison-Wesley, New York
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol) 58(1):267–288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x
Turcotte MM, Kent AD, Hash C (2018) Unified host and network data set. Data Sci Cyber Secur 16:4. https://doi.org/10.1142/9781786345646_001
Yin P, Fan X (2001) Estimating R² shrinkage in multiple regression: a comparison of different analytical methods. J Exp Educ 69(2):203–224. https://doi.org/10.1080/00220970109600656
Yu M, Halak B, Zwolinski M (2019) Using hardware performance counters to detect control hijacking attacks. In: Proceedings of the IEEE international verification and security workshop (IVSW). https://doi.org/10.1109/IVSW.2019.8854399
Zadnik M, Wrona J, Hynek K, Cejka T, Husák M (2022) Discovering coordinated groups of IP addresses through temporal correlation of alerts. IEEE Access 10:82799–82813. https://doi.org/10.1109/ACCESS.2022.3196362
Zhenzheng H, He Q, Chuah E et al (2018) Developing data science tools for improving enterprise cybersecurity. In: The Alan turing institute data study group final report. https://doi.org/10.5281/zenodo.3558251
Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc Ser B (Stat Methodol) 67(2):301–320. https://doi.org/10.1111/j.1467-9868.2005.00503.x
Acknowledgements
We would like to thank the anonymous reviewers for their constructive feedback, which helped improve our paper significantly.
Funding
This work is conducted, in part, under the auspices of the EU Horizon 2020 Research and Innovation program under Grant Agreement No. 830927 (CONCORDIA) and by Security Lancaster under the EPSRC Grant EP/V026763/1.
Author information
Contributions
The author(s) read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Ethics approval and consent to participate
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Chuah, E., Suri, N. An empirical study of reflection attacks using NetFlow data. Cybersecurity 7, 13 (2024). https://doi.org/10.1186/s42400-023-00203-7
DOI: https://doi.org/10.1186/s42400-023-00203-7