Composition is the Core Driver of the Language-selective Network
The frontotemporal language network responds robustly and selectively to sentences. But the features of linguistic input that drive this response and the computations that these language areas support remain debated.
Francis Mollica +8 more
doaj +2 more sources
Joint attention and exogenous attention allocation during mother-infant interaction at 12 months associate with 24-month vocabulary composition [PDF]
Early attentional processes are inherently linked with early parent-infant interactions and play a critical role in shaping cognitive and linguistic development.
Elena Capelli +4 more
doaj +2 more sources
Commutative Languages and their Composition by Consensual Methods [PDF]
Commutative languages with the semilinear property (SLIP) can be naturally recognized by real-time NLOG-SPACE multi-counter machines. We show that unions and concatenations of such languages can be similarly recognized, relying on -- and further ...
Pierluigi San Pietro +1 more
core +12 more sources
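Membership in a commutative language depends only on how many times each letter occurs (the word's Parikh vector), not on letter order, which is why counting machines suffice. A minimal illustrative sketch (the language below is my own example, not taken from the paper): the commutative, semilinear language of words over {a, b} with equally many a's and b's, decided by two counters in a single left-to-right pass.

```python
# Membership in a commutative language depends only on the Parikh
# vector (per-letter counts), so a multi-counter machine can decide
# it in one real-time pass. Illustrative example language (an
# assumption, not from the paper): L = { w in {a,b}* : |w|_a = |w|_b }.

def parikh_vector(word, alphabet="ab"):
    """Count each letter of the alphabet in a single pass."""
    counts = {c: 0 for c in alphabet}
    for ch in word:
        counts[ch] += 1
    return counts

def in_L(word):
    """Accept iff the word contains equally many a's and b's."""
    v = parikh_vector(word)
    return v["a"] == v["b"]

# Commutativity: every permutation of an accepted word is accepted.
assert in_L("abab") and in_L("aabb") and in_L("bbaa")
assert not in_L("aab")
```

Because acceptance is a function of the Parikh vector alone, closure under letter permutation holds by construction.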
Apocrypha “The Passion of Jesus Christ”: Genesis, Composition, Language Features
The article examines the genesis, content, and the peculiarities of transmission in the Russian manuscript tradition of the compiled passionary work "The Passion of Jesus Christ", devoted to the description of the last days of the Saviour's life on ...
Elina Valeryevna Serebryakova
doaj +2 more sources
Free composition instead of language dictatorship [PDF]
Historically, programming languages have been benevolent dictators, reducing all possible semantics to the specific ones offered by a few built-in language constructs.
Mehmet Aksit +3 more
core +3 more sources
InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition [PDF]
We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition. The innovative nature of our model is highlighted by three appealing properties: 1) Interleaved Text-Image Composition: InternLM ...
Pan Zhang +18 more
semanticscholar +1 more source
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition [PDF]
Large language models (LLMs) trained with enormous numbers of pre-training tokens and parameters exhibit diverse emergent abilities, including math reasoning, code generation, and instruction following. These abilities are further enhanced by supervised fine-tuning (SFT).
Guanting Dong +9 more
semanticscholar +1 more source
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition [PDF]
Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly
Chengsong Huang +5 more
semanticscholar +1 more source
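The composition idea behind this line of work can be sketched in a few lines. Each LoRA module stores a low-rank update B·A for a base weight matrix W, and a merged model uses W + Σᵢ wᵢ·(Bᵢ·Aᵢ) with scalar weights wᵢ (which LoraHub searches rather than fixes by hand). The sketch below is a hedged illustration with made-up toy dimensions, not the paper's implementation; `compose_lora` and the fixed `weights` are my own names and assumptions.

```python
import numpy as np

# Hedged sketch of LoRA composition (not LoraHub's actual code).
# Each module i is a pair (B_i, A_i) whose product is a low-rank
# update to the base weights W. Composition is a weighted sum:
#     W' = W + sum_i w_i * (B_i @ A_i)

def compose_lora(W, modules, weights):
    """Merge several LoRA modules into W using scalar weights."""
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, modules))
    return W + delta

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2                      # toy sizes; r is the LoRA rank
W = rng.standard_normal((d, k))        # base weight matrix
modules = [(rng.standard_normal((d, r)), rng.standard_normal((r, k)))
           for _ in range(3)]          # three task-specific modules
weights = [0.5, 0.3, 0.2]              # LoraHub searches these weights

W_merged = compose_lora(W, modules, weights)
assert W_merged.shape == W.shape       # same shape as the base weights
```

Setting all weights to zero recovers the base model, so the scalar weights interpolate between the base weights and the task-specific updates.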
Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning [PDF]
The popularity of LLaMA (Touvron et al., 2023a;b) and other recently emerged moderate-sized large language models (LLMs) highlights the potential of building smaller yet powerful LLMs.
Mengzhou Xia +3 more
semanticscholar +1 more source
Piccola - A Small Composition Language [PDF]
Piccola is a “small composition language” currently being developed within the Software Composition Group. The goal of Piccola is to support the flexible composition of applications from software components.
Oscar Nierstrasz
semanticscholar +2 more sources