ENHANCING ADVERSARIAL ROBUSTNESS IN MACHINE LEARNING-BASED MALWARE DETECTION VIA ACTIVATION FUNCTION DESIGN
Abstract
In recent years, machine learning (ML) has significantly enhanced the efficiency of
malware detection systems. Despite achieving high performance, these models now face a
growing threat from adversarial attacks. Adversarial malware samples can be intricately
crafted to deceive detection models, causing malicious programs to be misclassified as benign
and thereby bypass security systems. Various techniques have been developed
to generate adversarial malware specifically designed to evade different ML-based detection
systems. This threat underscores the urgent need for solutions that enhance the resilience of
malware detection models against adversarial attacks. This paper proposes and empirically
evaluates a cost-efficient adversarial defense strategy based on activation function design,
one that strengthens the inherent resilience of ML-based malware detection models against
black-box attacks without requiring computationally intensive methods such as adversarial
training. Results show that specific activation function combinations, in particular Rectified
Linear Unit (ReLU) and Tanh, can significantly improve robustness without any additional
training or inference-time setup. This work provides empirical design guidance for building
intrinsically robust ML-based malware detection systems.
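
To make the recommendation concrete, the sketch below illustrates what an activation function
design choice of this kind might look like in practice: a feed-forward malware detector whose
hidden layers mix ReLU and Tanh. This is a minimal, hypothetical example rather than the
paper's actual architecture; the PyTorch framework, the layer sizes, and the EMBER-style input
dimension of 2381 static features are all assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class MixedActivationDetector(nn.Module):
    """Hypothetical feed-forward malware detector mixing ReLU and Tanh
    hidden activations, in the spirit of the combination studied here.
    Input dimension and layer widths are illustrative placeholders."""

    def __init__(self, in_features: int = 2381):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 512),
            nn.ReLU(),           # first hidden layer: unbounded ReLU
            nn.Linear(512, 128),
            nn.Tanh(),           # second hidden layer: bounded Tanh
            nn.Linear(128, 1),   # single logit: malicious vs. benign
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = MixedActivationDetector()
# Score a batch of four random feature vectors (stand-ins for static
# malware features); sigmoid maps logits to maliciousness probabilities.
scores = torch.sigmoid(model(torch.randn(4, 2381)))
```

Because the defense lives entirely in the choice of activations, a detector like this trains
and runs exactly as a standard model would, with no adversarial training loop or extra
inference-time machinery.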