Table 3 Performance of clone detectors on not-yet-obfuscated clones as per type

From: Are our clone detectors good enough? An empirical study of code effects by obfuscation

 

| Detector | T1 | T2 | ST3 | MT3 | T4 |
| --- | --- | --- | --- | --- | --- |
| SDD | (100, 21, 34) | (100, 32, 49) | (100, 21, 34) | (100, 2, 3) | (100, 0, 0) |
| CCFinder | (100, 100, 100) | (98, 100, 99) | (100, 38, 55) | (97, 3, 7) | (75, 0, 1) |
| CCAligner | (92, 99, 95) | (86, 97, 91) | (87, 81, 84) | (73, 48, 58) | (54, 31, 39) |
| Deckard | (99, 100, 100) | (82, 100, 90) | (99, 80, 88) | (96, 26, 40) | (71, 4, 8) |
| SourcererCC | (100, 43, 60) | (100, 5, 9) | (100, 3, 6) | (100, 1, 2) | (100, 0, 0) |
| Oreo | (100, 66, 79) | (99, 22, 35) | (100, 35, 52) | (99, 6, 11) | (60, 0, 0) |
| ASTNN | (100, 100, 100) | (100, 92, 96) | (99, 95, 97) | (100, 92, 96) | (100, 89, 94) |
| DeepSim | (87, 100, 93) | (65, 100, 76) | (76, 66, 70) | (61, 50, 54) | (31, 25, 28) |
| CCLearner | (97, 100, 99) | (95, 100, 97) | (96, 95, 95) | (93, 71, 81) | (61, 13, 22) |

Column abbreviations follow the standard clone taxonomy: T1 = Type-1, T2 = Type-2, ST3 = Strongly Type-3, MT3 = Moderately Type-3, T4 = Type-4 clones.
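The three values in each cell are not labelled in this excerpt. Judging from the numbers, the third entry is consistent with being the F1-score (harmonic mean) of the first two, which would make the first two a precision/recall-style pair in percent; this reading is an inference from the data, not something stated here. A minimal check against the DeepSim/T1 cell, under that assumption:

```latex
% Assumption: the first two values in a cell are precision P and recall R (in %),
% and the third is their F1-score.
F_1 = \frac{2 \cdot P \cdot R}{P + R}
    = \frac{2 \cdot 87 \cdot 100}{87 + 100}
    \approx 93.0
% which matches the third value (93) reported for DeepSim on T1 clones.
```

The same relationship holds, up to rounding, for the other cells, e.g. CCAligner/T1: 2·92·99/(92+99) ≈ 95.4, matching the reported 95.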