Artificial Intelligence, Agency, and Existential Fear
A Technical and Philosophical Examination of Perceived AI Risk
Executive Summary: Overview of Purpose
Public discourse surrounding artificial intelligence (AI) increasingly frames this technology as an existential threat capable of superseding human agency, autonomy, and control. These concerns combine legitimate technical risks with speculative extrapolation and anthropomorphic misunderstanding. This paper examines AI risk claims through a first-principles analysis grounded in system architecture, alignment mechanisms, governance structures, and peer-reviewed research. It argues that contemporary AI systems lack independent agency, intrinsic goals, and self-directed authority. The primary risks posed by AI remain human-mediated rather than machine-initiated.
Introduction
Artificial intelligence has rapidly transitioned from a specialized technical domain into a visible social interface. Large language models and generative systems interact directly with the public, creating impressions of autonomy and intention. This visibility has intensified fears of AI takeover, despite the absence of agentic capabilities in current systems. This paper clarifies which fears are evidence-based and which arise from category errors.
Definitions
Agency: The capacity to form independent goals and initiate action based on internally generated intent.
Autonomy: Operational independence within predefined parameters.
Alignment: The degree to which an AI system’s outputs reflect human-defined objectives.
Existential Risk: Risk threatening long-term human survival.
Capability vs. Agency
Modern AI systems demonstrate significant capability but lack agency. They do not originate goals, assign value, or authorize deployment.
Optimization without self-generated values does not constitute intent.
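The point can be made concrete with a minimal optimization sketch. In this toy example, the objective function and its gradient are written by a human; the "optimizer" merely follows whatever it is handed and holds no preference of its own:

```python
# Minimal sketch of optimization without intent: the system minimizes
# whatever objective a human supplies. The "goal" lives entirely in
# the human-written gradient function, not in the optimizer.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the supplied gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Gradient of the human-chosen objective f(x) = x**2.
square_grad = lambda x: 2 * x

x_min = gradient_descent(square_grad, x0=5.0)
print(round(x_min, 6))  # converges toward 0, the minimum of x**2
```

Swap in a different gradient and the behavior changes completely; nothing inside the procedure supplies or resists a goal.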
Major Risk Claims
Autonomous takeover narratives assume intrinsic goals and self-directed authority, neither of which exist.
Recursive self-improvement scenarios presuppose human authorization of training, compute, and deployment. Loss of human agency, where it occurs, reflects organizational design choices, not machine intent.
Scientific Perspectives
Researchers including Bostrom, Russell, Bengio, and Hinton emphasize governance, alignment, and misuse risks rather than autonomous machine agency.
System Constraints
AI systems are dependent on human data, objectives, infrastructure, and deployment environments. They do not possess desire, authority, or independent motivation.
Public Perception
Anthropomorphism arises from fluent language and interface design, leading users to project intent onto tools.
Legitimate Risks
Real risks include misuse, bias, concentration of power, and governance failure. These are human-driven risks.
Conclusion
AI risk is best addressed through responsible design and governance rather than catastrophic speculation.
Refutation of the Spontaneous Origin of Life
A Critical White Paper on Abiogenesis
Peter Zacharoff
Doctrine Seminary
January 27, 2026
Is it reasonable to assume that non-living chemistry, operating without direction or instruction, spontaneously generated the first system capable of storing information, replicating itself, and sustaining integrated biological function? Despite the extraordinary complexity such systems exhibit, this assumption is commonly treated as a historical given rather than as a claim requiring rigorous empirical and probabilistic justification. The purpose of this paper is to critically evaluate abiogenesis as an explanation for the origin of life by examining whether its proposed mechanisms are supported by empirical evidence and realistic probabilistic plausibility under known physical laws, in comparison with alternative explanatory frameworks.
A Critical Analysis of Abiogenesis
Abstract
The spontaneous origin of life remains one of the most challenging unsolved problems in science. This paper critically evaluates abiogenesis using chemistry, information theory, probability, and systems integration. Drawing on peer-reviewed origin-of-life research, it argues that when all necessary conditions for life are considered together, the probability of spontaneous life arising under realistic early-Earth conditions approaches zero in practical terms.
Introduction
Abiogenesis proposes that life arose spontaneously from nonliving matter. While conceptually appealing, modern abiogenesis models rely heavily on assumptions rather than experimentally demonstrated mechanisms (Hazen, 2019; Szostak, 2012).
Information as a Fundamental Requirement
All life requires information—organized instructions that govern structure and replication. Abiogenesis presupposes information while attempting to explain its origin, creating a circular dependency (Meyer, 2013).
Replication Fidelity and Error Threshold
Experimental studies show that non-enzymatic nucleic acid replication produces error rates too high to preserve information, leading to an error catastrophe that prevents heredity (Eigen, 1971; Rajamani et al., 2010).
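Eigen's error-threshold relation can be sketched numerically. Under the standard quasispecies approximation, a replicator of length L with per-site error rate mu persists under selection only if roughly L * mu < ln(s), where s is the master sequence's selective superiority. The numbers below are illustrative assumptions, not measured values:

```python
import math

def max_genome_length(mu: float, s: float) -> float:
    """Eigen error-threshold estimate: L_max ~ ln(s) / mu.
    Beyond this length, copying errors accumulate faster than
    selection can purge them (error catastrophe)."""
    return math.log(s) / mu

# Assumed values: non-enzymatic template copying error rates are
# often cited in the percent range; s = 10 is an arbitrary choice.
mu = 0.05   # per-nucleotide copying error rate (assumed)
s = 10.0    # selective superiority of the master sequence (assumed)
print(round(max_genome_length(mu, s)))  # ~46 nucleotides
```

Under these assumed inputs, heritable sequences longer than a few dozen nucleotides would sit above the threshold, which is the quantitative form of the error-catastrophe objection.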
Chemical Instability
Nucleic acids degrade rapidly under ultraviolet radiation, heat, and aqueous environments, making long-term evolutionary improvement implausible under prebiotic conditions (Lindahl, 1993).
Environmental Incompatibility
Conditions favorable to synthesis often accelerate degradation, while protective environments inhibit reactivity, leaving no known natural setting that satisfies all requirements simultaneously (Benner et al., 2012).
Concentration Fallacy
Increasing molecular concentration increases destructive side reactions and does not generate biological information or replication fidelity (Shapiro, 2007).
Chirality Constraint
Biological polymers require uniform molecular handedness, but unguided chemistry produces racemic mixtures that inhibit polymer growth and function (Blackmond, 2010).
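As a toy calculation, if each monomer added to a growing chain is drawn from a racemic (50/50 left- and right-handed) pool, the chance that the whole chain ends up uniformly one-handed falls off exponentially with length:

```python
def homochiral_probability(n: int) -> float:
    """Probability that an n-unit chain drawn from a 50/50 L/D pool
    is entirely one-handed (either all-L or all-D): 2 * (1/2)**n."""
    return 2 * 0.5 ** n

# Even a modest polymer length makes uniform handedness
# astronomically unlikely under unbiased sampling.
print(f"{homochiral_probability(100):.1e}")  # ~1.6e-30
```

This assumes unbiased, independent monomer selection; proposed symmetry-breaking mechanisms are attempts to escape exactly this scaling.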
System Integration Requirement
Life requires multiple interdependent systems—information, metabolism, replication, and boundaries—operating together. Partial systems are nonfunctional and do not evolve gradually (Kauffman, 1993).
Probability and Practical Impossibility
Abiogenesis is not logically impossible but dynamically and practically impossible. Known physical processes suppress required outcomes, yielding an expected success rate effectively equal to zero (Miller & Orgel, 1974).
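The "converging constraints" argument has a simple probabilistic structure: if the required conditions are independent, their per-trial probabilities multiply. The figures below are placeholder assumptions chosen only to show the multiplication, not estimates drawn from the literature, and the independence assumption itself carries much of the argument's weight:

```python
from math import prod

# Placeholder per-trial probabilities (illustrative assumptions only).
# If the conditions are independent, the joint probability is their product.
constraints = {
    "adequate polymer length before degradation": 1e-10,
    "replication fidelity above the error threshold": 1e-10,
    "homochiral monomer selection": 1e-30,
    "co-localized information, metabolism, and boundary": 1e-15,
}

p_joint = prod(constraints.values())
print(f"{p_joint:.0e}")  # 1e-65 per trial under these assumed values
```

Changing any input changes the result proportionally; the sketch shows only how independent requirements compound, not what the true values are.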
Conclusion
Abiogenesis fails due to converging independent constraints. When chemistry, information theory, probability, and systems integration are considered together, spontaneous origin scenarios are scientifically implausible.
References
Final Integrated Conclusion
Based on current evidence, no abiogenesis model has empirically demonstrated a verified pathway by which non-living chemistry produces life, and significant unresolved mechanistic and probabilistic barriers remain.
Epilog: Author Position, Bias, and Methodological Transparency
The author acknowledges an explicit bias toward scientific creationism, defined herein not as a religious doctrine but as an inference to the best explanation based on observed biological complexity, information density, and system-level integration.
Comparative Probability Analysis: Abiogenesis vs. Scientific Creationism
Closing Perspective
Recognizing the limits of unguided origin models does not diminish science—it sharpens it.
Clarification on Origins and Ongoing Evolution
Origin is inseparable from evolution because evolutionary mechanisms require an already functional, information-bearing system.
Alternative Origin Models and Shared Constraints
Predictions and Testability
Limitations and Scope
Methodological Assumptions Overview
About the Author
Peter Zacharoff is an interdisciplinary scholar with formal academic training across seven institutions of higher education. He has earned three academic degrees and completed additional doctoral-level study to all-but-dissertation (ABD) status.
Artificial Intelligence Disclosure
This paper was developed with the assistance of OpenAI’s ChatGPT large language model.
Copyright and Disclaimer
© 2026 Peter Zacharoff. All rights reserved.
