I'm running large-scale adversarial sample generation on many images/videos using PyTorch and AutoPGD. However, generation still takes a very long time (1-5+ days for 22,388 samples, depending on the model), despite running on top-of-the-line GPUs (A100, H100, etc.) with plenty of memory. Below are my configurations for both the PyTorchClassifier and the AutoPGD attack. Could it be that the NumPy arrays live in CPU memory while my model is on the GPU? Are these runtimes reasonable for a decently large model? What are some ways I can improve generation speed given access to lots of compute? Also, is there any way to integrate with a PyTorch DataLoader for just the adversarial generation, or do I have to keep collating the DataLoaders I'm given into one big NumPy array?
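For context, a minimal sketch of this kind of setup, iterating over a DataLoader and feeding ART one batch at a time instead of collating the whole dataset up front. The model, dummy data, input shape, and attack hyperparameters here are placeholders (not the original configuration); `attack.generate()` accepts host-side NumPy arrays and the estimator moves them to the GPU internally, so the per-attack `batch_size` is the main knob for keeping the GPU busy.

```python
import numpy as np
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import AutoProjectedGradientDescent

# Stand-ins for illustration only: a small torchvision model and random data.
# Replace with your own model and dataset.
model = torchvision.models.resnet18(num_classes=10)
dataset = TensorDataset(torch.rand(64, 3, 224, 224), torch.randint(0, 10, (64,)))

classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 224, 224),   # assumption: adjust to your inputs
    nb_classes=10,               # assumption: adjust to your task
    clip_values=(0.0, 1.0),
    device_type="gpu",           # forward/backward passes stay on the GPU
)

attack = AutoProjectedGradientDescent(
    estimator=classifier,
    norm=np.inf,
    eps=8 / 255,
    eps_step=2 / 255,
    max_iter=100,
    nb_random_init=1,            # each extra random restart multiplies the cost
    batch_size=64,               # raise until GPU memory becomes the limit
    verbose=False,
)

loader = DataLoader(dataset, batch_size=64, shuffle=False, num_workers=4)

adv_batches = []
for images, labels in loader:
    # generate() takes NumPy arrays on the host and batches them onto the GPU
    # itself, so converting one DataLoader batch at a time avoids building a
    # single huge array for the entire dataset.
    adv_batches.append(attack.generate(x=images.numpy(), y=labels.numpy()))

x_adv = np.concatenate(adv_batches, axis=0)
```

With this pattern the only large host-side allocation is the final concatenation, which can also be replaced by writing each adversarial batch straight to disk.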
Replies: 1 comment
An update: I upgraded to ART 1.19, and everything is much smoother now that I can take advantage of multiprocessing with AutoAttack. I should have considered this sooner. Thanks for maintaining the library!
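For anyone landing here later, a rough sketch of the parallel AutoAttack setup mentioned above, again with a placeholder model and random data. The name of the multiprocessing argument is an assumption: ART 1.19 appears to expose it as `parallel_pool_size`, while some earlier releases used a boolean `parallel` flag, so check the AutoAttack signature of your installed version.

```python
import numpy as np
import torch
import torchvision

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import AutoAttack


def main():
    # Placeholder model and data for illustration; substitute your own.
    model = torchvision.models.resnet18(num_classes=10)
    classifier = PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        input_shape=(3, 224, 224),
        nb_classes=10,
        clip_values=(0.0, 1.0),
        device_type="gpu",
    )

    x = np.random.rand(64, 3, 224, 224).astype(np.float32)
    y = np.random.randint(0, 10, size=64)

    # Assumption: ART 1.19 exposes multiprocessing via `parallel_pool_size`
    # (number of worker processes); older releases used `parallel=True` instead.
    attack = AutoAttack(
        estimator=classifier,
        norm=np.inf,
        eps=8 / 255,
        eps_step=2 / 255,
        batch_size=64,
        parallel_pool_size=4,
    )
    x_adv = attack.generate(x=x, y=y)
    return x_adv


if __name__ == "__main__":
    # The multiprocessing pool needs the usual __main__ guard.
    main()
```

Running the sub-attacks in separate processes trades extra GPU memory (one model copy per worker) for wall-clock time, so the pool size is best tuned to how many copies of the model fit on the card.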