This book provides a systematic study of the security of deep learning. With its powerful learning ability, deep learning is widely used in computer vision (CV), federated learning (FL), graph neural networks (GNNs), reinforcement learning (RL), and other scenarios. However, researchers have shown that deployed deep learning models are vulnerable to malicious attacks, which can lead to unpredictable consequences. Take autonomous driving as an example: in 2018 there were more than 12 serious autonomous driving accidents worldwide, involving Uber, Tesla, and other high-tech companies. Drawing on the reviewed literature, we need to discover vulnerabilities in deep learning through attacks, reinforce its defenses, and test model performance to ensure its robustness.

Attacks can be divided into adversarial attacks and poisoning attacks. Adversarial attacks occur during the model testing phase, where the attacker crafts adversarial examples by adding small perturbations to the input. Poisoning attacks occur during the model training phase, where the attacker injects poisoned examples into the training dataset, embedding a backdoor trigger in the trained deep learning model.

An effective defense method is an important guarantee for the application of deep learning. Existing defense methods fall into three types: data modification methods, model modification methods, and network add-on methods. Data modification methods defend against adversarial examples by adjusting the input data. Model modification methods change the model architecture to resist attacks. Network add-on methods detect adversarial examples by training a dedicated detector.

Testing deep neural networks is an effective way to measure the security and robustness of deep learning models. Test evaluation can identify security vulnerabilities and weaknesses in deep neural networks, and fixing them improves the security and robustness of the model.

Our audience includes researchers in the field of deep learning security, as well as software development engineers specializing in deep learning.
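As a minimal illustration of the adversarial-attack setting described above (a sketch only, not code from the book), the following Python snippet applies an FGSM-style perturbation (the fast gradient sign method) to an input; it assumes a pretrained PyTorch classifier called model and a labeled input tensor, both of which are hypothetical here.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Craft an adversarial example by adding a small, sign-based perturbation
    # that increases the classification loss of the (assumed) pretrained model.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

A defense of the "network add-on" kind mentioned above would, for instance, train a separate detector to flag such perturbed inputs before they reach the classifier.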
Add this copy of Attacks, Defenses and Testing for Deep Learning to cart. $232.48, like new condition, Sold by GreatBookPrices rated 4.0 out of 5 stars, ships from Columbia, MD, UNITED STATES, published 2024 by Springer Nature.
Seller's Description:
Fine. Contains: Illustrations, black & white, Illustrations, color. XX, 399 p. 128 illus., 126 illus. in color. Intended for professional and scholarly audience. In Stock. 100% Money Back Guarantee. Brand New, Perfect Condition, allow 4-14 business days for standard shipping. To Alaska, Hawaii, U.S. protectorate, P.O. box, and APO/FPO addresses allow 4-28 business days for Standard shipping. No expedited shipping. All orders placed with expedited shipping will be cancelled. Over 3,000,000 happy customers.
Add this copy of Attacks, Defenses and Testing for Deep Learning to cart. $1,196.94, very good condition, Sold by BetterBookDeals rated 3.0 out of 5 stars, ships from NIAGARA FALLS, NY, UNITED STATES, published 2025 by Springer.