Robustness tests for biomedical foundation models should tailor to specifications.
Xian RP +7 more
Assessing Capability Complexity Using Enterprise Architecture Framework
Abstract: This study proposes a structured and quantitative methodology to evaluate the holistic complexity of system‐of‐systems (SoSs), employing the Zachman Architecture Framework (ZAF) as its foundational analytical tool. A five‐phase analytical procedure is developed and empirically validated, encompassing: (1) refinement of complexity measures, (2) ...
Javad Bakhshi, Mahmoud Efatmaneshnik
Privacy-preserving cyberthreat detection in decentralized social media with federated cross-modal graph transformers.
Premkumar D, Nachimuthu SK.
ConvNet-Generated Adversarial Perturbations for Evaluating 3D Object Detection Robustness.
Abraha TM +4 more
DynaLiRD: A dataset for dynamic line rating of overhead transmission lines, utilizing meteorological data and grid parameters based on the IEEE 738-2012 standard.
Alam N, Rahman MA, Islam MR.
SecuFL-IoT: an adaptive privacy-preserving federated learning framework for anomaly detection in smart industrial networks.
Alqazzaz A.
Adversarial selective domain adaptation with feature cluster for skin cancer diagnosis.
Gou Q, Cui G.
Principal Component Adversarial Example
IEEE Transactions on Image Processing, 2020
Despite having achieved excellent performance on various tasks, deep neural networks have been shown to be susceptible to adversarial examples, i.e., visual inputs crafted with structural imperceptible noise. To explain this phenomenon, previous works implicate the weak capability of the classification models and the difficulty of the classification ...
Yonggang Zhang +4 more
Learning Universal Adversarial Perturbation by Adversarial Example
Proceedings of the AAAI Conference on Artificial Intelligence, 2022
Deep learning models have been shown to be susceptible to universal adversarial perturbation (UAP), which has aroused wide concern in the community. Compared with conventional adversarial attacks that generate adversarial samples at the instance level, UAP can fool the target model for different instances with only a single perturbation, enabling us to ...
Maosen Li +4 more
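The UAP idea described in this last abstract — one fixed perturbation reused unchanged across many different inputs, in contrast to instance-level attacks that craft fresh noise per input — can be sketched with a toy linear classifier. This is a minimal illustration under assumed conditions (a hand-rolled linear score function standing in for a real network), not the paper's actual attack:

```python
import random

random.seed(0)

# Toy linear "model" used only for illustration (an assumed stand-in,
# not the networks studied in the paper): predict 1 when the score is positive.
DIM = 16
w = [random.uniform(-1, 1) for _ in range(DIM)]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Several distinct inputs; a conventional instance-level attack would
# craft a separate perturbation for each one.
inputs = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(5)]

# One universal perturbation, bounded in L-infinity norm by eps and reused
# for every instance: push each coordinate against the sign of its weight.
eps = 0.5
uap = [-eps if wi > 0 else eps for wi in w]

for x in inputs:
    x_adv = [xi + di for xi, di in zip(x, uap)]
    # The same single vector lowers every input's score by eps * sum(|w|).
    print(round(score(x), 3), "->", round(score(x_adv), 3))
```

Because the perturbation is fixed, it shifts every input's score by the same constant amount, which is what lets a single vector attack many instances at once.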

