Table 5 A summary of the performance of the proposed subword DL models (SW-CNN and SW-LSTM) compared with their character-based counterparts

From: Use of subword tokenization for domain generation algorithm classification

|   | Average precision | Average F1 | Average recall |
|---|---|---|---|
| Proposed SW-CNN |   |   |   |
| Word-looking DGA (average of 11 classes) | 0.9355 | 0.9364 | 0.9391 |
| Improvement over char-CNN (Ren et al. 2020) | 9.59% | 9.70% | 9.89% |
| Random-looking DGA (average of 39 classes) | 0.7059 | 0.6728 | 0.6931 |
| Improvement over char-CNN (Ren et al. 2020) | 1.70% | 1.89% | 0.64% |
| Proposed SW-LSTM |   |   |   |
| Word-looking DGA (average of 11 classes) | 0.9473 | 0.9436 | 0.9409 |
| Improvement over char-LSTM (Cucchiarelli et al. 2021) | 13.64% | 13.07% | 11.65% |
| Random-looking DGA (average of 39 classes) | 0.7031 | 0.6731 | 0.6921 |
| Improvement over char-LSTM (Cucchiarelli et al. 2021) | 0.59% | 1.40% | 0.52% |
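
The group averages above are macro-averages over the DGA classes in each group, and the improvement rows compare the subword models against the character-based baselines. The sketch below is not the authors' code; it only illustrates, under the assumption that the reported improvement is a relative (percentage) gain over the baseline score, how such figures can be computed with scikit-learn. The helper names `group_macro_metrics` and `relative_improvement` are hypothetical.

```python
# Minimal sketch (assumed, not the authors' implementation) of how the
# macro-averaged metrics and improvement percentages in Table 5 could be
# computed from per-domain predictions.
from sklearn.metrics import precision_recall_fscore_support


def group_macro_metrics(y_true, y_pred, group_labels):
    """Macro-average precision, recall and F1 over the classes in `group_labels`
    (e.g. the 11 word-looking or the 39 random-looking DGA families)."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=group_labels, average="macro", zero_division=0
    )
    return precision, recall, f1


def relative_improvement(subword_score, char_score):
    """Relative gain of the subword model over the character-based baseline,
    in percent. (Assumption: the table reports relative, not absolute, gains.)"""
    return (subword_score - char_score) / char_score * 100.0
```

Under that assumption, calling `relative_improvement(sw_precision, char_precision)` for each metric and class group would yield the percentage figures shown in the improvement rows.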