Results 231 to 240 of about 172,371

Robustness tests for biomedical foundation models should tailor to specifications. [PDF]

open access: yes · NPJ Digit Med
Xian RP   +7 more
europepmc   +1 more source

Assessing Capability Complexity Using Enterprise Architecture Framework

open access: yes · Systems Engineering, EarlyView.
ABSTRACT This study proposes a structured and quantitative methodology to evaluate the holistic complexity of system‐of‐systems (SoSs), employing the Zachman Architecture Framework (ZAF) as its foundational analytical tool. A five‐phase analytical procedure is developed and empirically validated, encompassing: (1) refinement of complexity measures, (2)
Javad Bakhshi, Mahmoud Efatmaneshnik
wiley   +1 more source

ConvNet-Generated Adversarial Perturbations for Evaluating 3D Object Detection Robustness. [PDF]

open access: yes · Sensors (Basel)
Abraha TM   +4 more
europepmc   +1 more source

Principal Component Adversarial Example

IEEE Transactions on Image Processing, 2020
Despite having achieved excellent performance on various tasks, deep neural networks have been shown to be susceptible to adversarial examples, i.e., visual inputs crafted with structurally imperceptible noise. To explain this phenomenon, previous works implicate the weak capability of the classification models and the difficulty of the classification ...
Yonggang Zhang   +4 more
openaire   +2 more sources

Learning Universal Adversarial Perturbation by Adversarial Example

Proceedings of the AAAI Conference on Artificial Intelligence, 2022
Deep learning models have been shown to be susceptible to universal adversarial perturbation (UAP), which has aroused wide concern in the community. Compared with conventional adversarial attacks that generate adversarial samples at the instance level, UAP can fool the target model for different instances with only a single perturbation, enabling us to
Maosen Li   +4 more
openaire   +1 more source
