Results 321 to 330 of about 2,849,722
Some of the following articles may not be open access.
2005
Abstract: This chapter examines Hume’s theory of empirical reason, and the difference between its rational and its irrational exercise (reasoning reasonably and unreasonably). The theory has five structural levels: (1) reasoning from one matter of fact or real existence to another takes the form of an inference from an impression to an idea; (2 ...
openaire +1 more source
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Trans. Mach. Learn. Res.
Large Language Models (LLMs) have demonstrated remarkable capabilities in complex tasks. Recent advancements in Large Reasoning Models (LRMs), such as OpenAI o1 and DeepSeek-R1, have further improved performance in System-2 reasoning domains like ...
Yang Sui +10 more
semanticscholar +1 more source
Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement
arXiv.org
Traditional methods for reasoning segmentation rely on supervised fine-tuning with categorical labels and simple descriptions, limiting their out-of-domain generalization and lacking explicit reasoning processes.
Yuqi Liu +6 more
semanticscholar +1 more source
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought
International Conference on Machine Learning
Chain-of-Thought (CoT) prompting has proven highly effective for enhancing complex reasoning in Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). Yet, it struggles in complex spatial reasoning tasks.
Chengzu Li +7 more
semanticscholar +1 more source
Episteme, 2015
Abstract: Hilary Kornblith explores the prospects for reasons eliminationism, the view that reasons ought not to be regarded as being of central importance in epistemology. I reply by conceding that reasons may not be necessary for knowledge, in at least some cases, but I argue that they are nevertheless vitally important in epistemology more broadly ...
openaire +1 more source
Robotics
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers.
P. Shojaee +5 more
semanticscholar +1 more source
2006
Legal reasoning has several aspects. On the one hand, it is necessary to determine which rules can play a role in legal arguments, that is, which rules are legal rules. The formal sources of law, such as legislation, treaties and case law, play a central role in this connection.
openaire +3 more sources
GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
arXiv.org
We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language model with 355B total parameters and 32B activated parameters, featuring a hybrid reasoning method that supports both thinking and direct response modes.
GLM-4.5 Team Aohan Zeng +169 more
semanticscholar +1 more source
Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models
arXiv.org
Language has long been conceived as an essential tool for human reasoning. The breakthrough of Large Language Models (LLMs) has sparked significant research interest in leveraging these models to tackle complex reasoning tasks.
Fengli Xu +19 more
semanticscholar +1 more source
From System 1 to System 2: A Survey of Reasoning Large Language Models
IEEE Transactions on Pattern Analysis and Machine Intelligence
Achieving human-level intelligence requires refining the transition from the fast, intuitive System 1 to the slower, more deliberate System 2 reasoning.
Zhong-Zhi Li +15 more
semanticscholar +1 more source