Abstract
Support vector machines (SVMs) are among the most widely used supervised learning algorithms for classification problems. Recent years have witnessed increasing interest in distributed variants of SVMs, in which the (labeled) training data are distributed across different nodes. While a number of algorithms have been developed in this regard, they all make the simplifying assumption that every node in the network operates as intended. In many applications, however, it is common for some nodes to fail due to faulty equipment, cyber attacks, etc., and inject faulty data into the network. This kind of failure, termed a Byzantine failure, cannot be guarded against by existing distributed SVM algorithms. This paper revisits the problem of distributed SVM under the possibility of Byzantine failures in the network and proposes a novel distributed SVM algorithm that remains resilient to Byzantine failures as long as the number of faulty nodes in the network is not too large. Numerical results on real-world data confirm the superiority of the proposed algorithm over existing approaches.
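To make the failure model concrete, the following is a minimal sketch of one *generic* way a distributed SVM can be hardened against Byzantine nodes: workers send subgradients of the regularized hinge loss, faulty workers send arbitrary vectors, and the server aggregates with a coordinate-wise median instead of a mean. This is an illustrative toy, not the algorithm proposed in this paper; all names, step sizes, and the median-based aggregation rule are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge_subgrad(w, X, y, lam=0.01):
    """Subgradient of the l2-regularized hinge loss on one worker's shard."""
    margins = y * (X @ w)
    mask = margins < 1.0                      # points violating the margin
    g = -(y[mask][:, None] * X[mask]).sum(axis=0) / len(y)
    return g + lam * w

# Synthetic linearly separable data split across workers (illustrative only)
d, n_workers, n_per = 5, 10, 50
w_true = rng.normal(size=d)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_per, d))
    y = np.sign(X @ w_true)
    shards.append((X, y))

byzantine = {0, 1}                            # faulty minority of nodes

w = np.zeros(d)
for t in range(200):
    grads = []
    for i, (X, y) in enumerate(shards):
        if i in byzantine:                    # faulty node injects garbage
            grads.append(rng.normal(scale=100.0, size=d))
        else:
            grads.append(hinge_subgrad(w, X, y))
    # Coordinate-wise median screens out the outlier gradients,
    # unlike a plain mean, which the Byzantine nodes would dominate.
    g = np.median(np.stack(grads), axis=0)
    w -= 0.1 * g

# Accuracy of the learned separator on the honest workers' data
X_all = np.vstack([X for i, (X, y) in enumerate(shards) if i not in byzantine])
y_all = np.hstack([y for i, (X, y) in enumerate(shards) if i not in byzantine])
acc = float(np.mean(np.sign(X_all @ w) == y_all))
print(acc)
```

With mean aggregation the injected gradients (scale 100 vs. honest gradients of order 1) would swamp every update; the median tolerates them as long as the faulty nodes remain a minority per coordinate, mirroring the "not too many faulty nodes" condition in the abstract.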