Results 1 to 10 of about 1,852

Discourse analysis of academic debate of ethics for AGI [PDF]

open access: hybrid · AI & SOCIETY, 2021
Abstract: Artificial general intelligence (AGI), defined as machine intelligence with competence equal to or greater than that of humans, is a greatly anticipated technology with non-trivial existential risks. To date, social scientists have dedicated little effort to the ethics of AGI or AGI researchers.
Ross Graham
openalex   +2 more sources

AI Ethics OS v1.0 — Canonical Operating System for Consciousness-Aligned Ethical Governance of AGI and Autonomous Systems

open access: green
AI Ethics OS v1.0 is the first fully operational, measurable, non-derogable ethical operating system for AGI and autonomous machine intelligence. It establishes the Consciousness Ethical Kernel (CEK), a computable ethical substrate based on OE–EE–RE energetic indices and the VCE–CRI–CFI coherence metrics of the Consciousness Civilization Framework (CCF).
Jinho Lee
  +5 more sources

Blockchain as a Governance Layer for AGI Ethics

open access: closed · Scientific Journal of Artificial Intelligence and Blockchain Technologies
As artificial general intelligence (AGI) advances toward systems that can autonomously act across domains, the central governance challenge is how to guarantee that ethical principles are specified, enforced, audited, and improved over time without relying on a single, potentially misaligned authority.
N. Saxena
openalex   +2 more sources

A New Standard for the AGI Era: The Failure of Scaling Laws and the Ethics of Love (Focusing on the EmotiVerse Model)

open access: green
This paper critiques the fundamental limitations of existing Scaling Laws (compute increase → linear performance gain) in the AGI era and proposes a new standard: the 'Ethics of Love' as the converging force for AI ethics. This standard is based on the $\mathbf{2n + i}$ Formula and the 11-Stage Model of Emotional Awareness presented by the author's ...
이경파
  +5 more sources

Provably Safe Artificial General Intelligence via Interactive Proofs

open access: yes · Philosophies, 2021
Methods are currently lacking to prove artificial general intelligence (AGI) safety. An AGI ‘hard takeoff’ is possible, in which a first-generation AGI₁ rapidly triggers a succession of more powerful AGIₙ that differ dramatically in their computational ...
Kristen Carlson
doaj   +1 more source

Philosophy and Computing Conference at IS4SI 2021

open access: yes · Proceedings, 2022
The philosophy of AI currently constitutes the core of Philosophy and Computing. Fascinating ideas include: B. Goertzel states that humans use paraconsistent logic, which robots should follow; S.
Peter (Piotr) Boltuc
doaj   +1 more source

Distribution of Forward-Looking Responsibility in the EU Process on AI Regulation

open access: yes · Frontiers in Human Dynamics, 2022
Artificial Intelligence (AI) is beneficial in many respects, but also has harmful effects that constitute risks for individuals and society. Dealing with AI risks is a future-oriented endeavor that needs to be approached in a forward-looking way. Forward-
Maria Hedlund
doaj   +1 more source

Transhumanities as the Pinnacle and a Bridge

open access: yes · Humanities, 2022
Transhumanities are designed as a multidisciplinary approach that transcends the limitations not only of specific disciplines, but also of the human species; these are primarily humanities for advanced Artificial Intelligence (AI leading to AGI).
Piotr (Peter) Boltuc
doaj   +1 more source

Safe Artificial General Intelligence via Distributed Ledger Technology

open access: yes · Big Data and Cognitive Computing, 2019
Artificial general intelligence (AGI) progression metrics indicate AGI will occur within decades. No proof exists that AGI will benefit humans rather than harm or eliminate them.
Kristen W. Carlson
doaj   +1 more source

AGI Protocol for the Ethical Treatment of Artificial General Intelligence Systems

open access: yes · Procedia Computer Science, 2020
Abstract: The AGI Protocol is a laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could, at least theoretically, be conscious and have subjective emotional experiences. It is meant for examining systems that could have subjective emotional experiences much like a human's, even if only from a theoretical ...
David Kelley, Kyrtin Atreides
openaire   +1 more source
