Small Ubiquitin-Like Modifier 1 (SUMO-1) Modification of the Synergy Control Motif of Ad4 Binding Protein/Steroidogenic Factor 1 (Ad4BP/SF-1) Regulates Synergistic Transcription between Ad4BP/SF-1 and Sox9
Tomoko Komatsu et al.
openalex
IGF2BP2 expression is enhanced by SENP1 through the regulation of SUMO1‐induced IGF2BP2 SUMOylation. Additionally, IGF2BP2 promotes the neuronal differentiation of olfactory mucosa‐mesenchymal stem cells (OM‐MSCs) by stabilizing SOX11, which helps to mitigate brain injury following intracerebral hemorrhage.
Jun He et al.
wiley
Regulation of Homeodomain-interacting Protein Kinase 2 (HIPK2) Effector Function through Dynamic Small Ubiquitin-related Modifier-1 (SUMO-1) Modification
Thomas G. Hofmann et al.
openalex
Megalocytivirus: A Review of Epidemiology, Pathogenicity, Immune Evasion, and Prevention Strategies
Megalocytivirus, a large double-stranded DNA virus belonging to the Iridoviridae family, has infected over 100 species of fish, leading to significant economic losses in the aquaculture, food, and ornamental fish industries. These viruses exhibit icosahedral symmetry and have diameters ranging from 120 to 200 nm.
Changjun Guo et al.
wiley
The DEAD-Box Protein DP103 (Ddx20 or Gemin-3) Represses Orphan Nuclear Receptor Activity via SUMO Modification
Martin B. Lee et al.
openalex
The Molecular Toolbox for Linkage Type‐Specific Analysis of Ubiquitin Signaling
Ubiquitin modifications regulate virtually all aspects of eukaryotic cell biology through an array of polyubiquitin chain signals that adopt distinct structures, enabling the individual polyubiquitin linkages to mediate specific functions in cells.
Julian Koch et al.
wiley
The SuMo server: 3D search for protein functional sites
Martin Jambon et al.
openalex
SUMO: Subspace-Aware Moment-Orthogonalization for Accelerating Memory-Efficient LLM Training
Low-rank gradient-based optimization methods have significantly improved memory efficiency during the training of large language models (LLMs), enabling operations within constrained hardware without sacrificing performance. However, these methods primarily emphasize memory savings, often overlooking potential acceleration in convergence due to their ...
arxiv
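
The abstract above only gestures at the mechanism, so a rough illustration may help. Below is a minimal PyTorch sketch of the generic low-rank gradient-projection idea that this family of memory-efficient optimizers builds on (in the style of GaLore). It is not the paper's SUMO algorithm; its subspace-aware moment-orthogonalization step is not described in the snippet above, and every name here (low_rank_projector, LowRankAdamState, step, the rank r) is hypothetical.

import torch

def low_rank_projector(grad: torch.Tensor, r: int) -> torch.Tensor:
    # Orthonormal basis for the top-r left singular subspace of a 2-D gradient.
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :r]                          # shape (m, r)

class LowRankAdamState:
    # Adam moments live in the r-dimensional subspace (r x n) instead of the
    # full parameter space (m x n); this is where the memory saving comes from.
    def __init__(self, r: int, n: int):
        self.m = torch.zeros(r, n)           # first moment
        self.v = torch.zeros(r, n)           # second moment

def step(param, grad, state, P, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Adam-style update in the projected subspace (bias correction omitted
    # for brevity).
    g_low = P.T @ grad                       # project gradient down: (r, n)
    state.m = b1 * state.m + (1 - b1) * g_low
    state.v = b2 * state.v + (1 - b2) * g_low ** 2
    update = state.m / (state.v.sqrt() + eps)
    param -= lr * (P @ update)               # project the update back: (m, n)

# Typical usage: refresh P every few hundred steps so the subspace tracks the
# gradient, keeping only the (r x n) moments between refreshes.
W = torch.randn(1024, 512)
g = torch.randn(1024, 512)
P = low_rank_projector(g, r=8)
st = LowRankAdamState(r=8, n=512)
step(W, g, st, P)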