{"id":9358,"date":"2026-01-23T13:43:56","date_gmt":"2026-01-23T20:43:56","guid":{"rendered":"https:\/\/onlineseminary.info\/?page_id=9358"},"modified":"2026-01-28T01:06:15","modified_gmt":"2026-01-28T08:06:15","slug":"a-i-and-agency-fear","status":"publish","type":"page","link":"https:\/\/onlineseminary.info\/index.php\/a-i-and-agency-fear\/","title":{"rendered":"White Paper: A. I. and Fear"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"9358\" class=\"elementor elementor-9358\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-4c0fb89c elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"4c0fb89c\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-3021244b\" data-id=\"3021244b\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-2c8994d5 elementor-widget elementor-widget-text-editor\" data-id=\"2c8994d5\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p style=\"text-align: left;\"><strong>Special Note:<\/strong> <em>If you click on the black ink art picture before you begin reading, it will play <strong><span style=\"text-decoration: underline;\">machine music<\/span><\/strong> in the background as you read, but <strong>you need to <span style=\"text-decoration: underline;\">lower the volume<\/span><\/strong> on your device.<\/em><\/p><p><strong>Artificial Intelligence, Agency, and Existential 
Fear<\/strong><\/p><p><em>A Technical and Philosophical Examination of Perceived AI Risk<\/em><\/p><p><strong>Executive Summary: Overview of Purpose<\/strong><\/p><p>Public discourse surrounding artificial intelligence (AI) increasingly frames this technology as an existential threat capable of superseding human agency, autonomy, and control. These concerns combine legitimate technical risks with speculative extrapolation and anthropomorphic misunderstanding. This paper examines AI risk claims through a first-principles analysis grounded in system architecture, alignment mechanisms, governance structures, and peer-reviewed research. It argues that contemporary AI systems lack independent agency, intrinsic goals, and self-directed authority. The primary risks posed by AI remain human-mediated rather than machine-initiated.<\/p><p><strong>Introduction<\/strong><\/p><p>Artificial intelligence has rapidly transitioned from a specialized technical domain into a visible social interface. Large language models and generative systems interact directly with the public, creating impressions of autonomy and intention. This visibility has intensified fears of AI takeover, despite the absence of agentic capabilities. This paper clarifies which fears are evidence-based and which arise from category errors.<\/p><p><strong>Definitions<\/strong><\/p><p><em><strong>Agency<\/strong><\/em>: The capacity to form independent goals and initiate action based on internally generated intent.<\/p><p><em><strong>Autonomy<\/strong><\/em>: Operational independence within predefined parameters.<\/p><p><em><strong>Alignment<\/strong><\/em>: The degree to which an AI system\u2019s outputs reflect human-defined objectives.<\/p><p><em><strong>Existential Risk<\/strong><\/em>: Risk threatening long-term human survival.<\/p><p><strong>Capability vs. Agency<\/strong><\/p><p>Modern AI systems demonstrate significant capability but lack agency. 
They do not originate goals, assign value, or authorize deployment.<\/p><p><em>Optimization without self-generated values does not constitute intent.<\/em><\/p><p><strong>Major Risk Claims<\/strong><\/p><p>Autonomous takeover narratives assume intrinsic goals and self-directed authority, neither of which exists.<\/p><p>Recursive self-improvement requires human authorization. Loss of human agency reflects organizational design choices, not machine intent.<\/p><p><strong>Scientific Perspectives<\/strong><\/p><p>Researchers including Bostrom, Russell, Bengio, and Hinton emphasize governance, alignment, and misuse risks rather than autonomous machine agency.<\/p><p><strong>System Constraints<\/strong><\/p><p>AI systems are dependent on human data, objectives, infrastructure, and deployment environments. They do not possess desire, authority, or independent motivation.<\/p><p><strong>Public Perception<\/strong><\/p><p>Anthropomorphism arises from fluent language and interface design, leading users to project intent onto tools.<\/p><p><strong>Legitimate Risks<\/strong><\/p><p>Real risks include misuse, bias, concentration of power, and governance failure. These are human-driven risks.<\/p><p><strong>Conclusion<\/strong><\/p><p>AI risk is best addressed through responsible design and governance rather than catastrophic speculation.<\/p><p><strong>Refutation of the Spontaneous Origin of Life<\/strong><\/p><p><em>A Critical White Paper on Abiogenesis<\/em><\/p><p>Peter Zacharoff<\/p><p>Doctrine Seminary<\/p><p>January 27, 2026<\/p><p>Is it reasonable to assume that non-living chemistry, operating without direction or instruction, spontaneously generated the first system capable of storing information, replicating itself, and sustaining integrated biological function? 
Despite the extraordinary complexity such systems exhibit, this assumption is commonly treated as a historical given rather than as a claim requiring rigorous empirical and probabilistic justification. The purpose of this paper is to critically evaluate abiogenesis as an explanation for the origin of life by examining whether its proposed mechanisms are supported by empirical evidence and realistic probabilistic plausibility under known physical laws, in comparison with alternative explanatory frameworks.<\/p><p><strong>A Critical Analysis of Abiogenesis<\/strong><\/p><p><strong>Abstract<\/strong><\/p><p>The spontaneous origin of life remains one of the most challenging unsolved problems in science. This paper critically evaluates abiogenesis using chemistry, information theory, probability, and systems integration. Drawing on peer-reviewed origin-of-life research, it argues that when all necessary conditions for life are considered together, the probability of spontaneous life arising under realistic early-Earth conditions approaches zero in practical terms.<\/p><p><strong>Introduction<\/strong><\/p><p>Abiogenesis proposes that life arose spontaneously from nonliving matter. While conceptually appealing, modern abiogenesis models rely heavily on assumptions rather than experimentally demonstrated mechanisms (Hazen, 2019; Szostak, 2012).<\/p><p><strong>Information as a Fundamental Requirement<\/strong><\/p><p>All life requires information\u2014organized instructions that govern structure and replication. 
Abiogenesis presupposes information while attempting to explain its origin, creating a circular dependency (Meyer, 2013).<\/p><p><strong>Replication Fidelity and Error Threshold<\/strong><\/p><p>Experimental studies show that non-enzymatic nucleic acid replication produces error rates too high to preserve information, leading to an error catastrophe that prevents heredity (Eigen, 1971; Rajamani et al., 2010).<\/p><p><strong>Chemical Instability<\/strong><\/p><p>Nucleic acids degrade rapidly under ultraviolet radiation, heat, and aqueous environments, making long-term evolutionary improvement implausible under prebiotic conditions (Lindahl, 1993).<\/p><p><strong>Environmental Incompatibility<\/strong><\/p><p>Conditions favorable to synthesis often accelerate degradation, while protective environments inhibit reactivity, leaving no known natural setting that satisfies all requirements simultaneously (Benner et al., 2012).<\/p><p><strong>Concentration Fallacy<\/strong><\/p><p>Increasing molecular concentration increases destructive side reactions and does not generate biological information or replication fidelity (Shapiro, 2007).<\/p><p><strong>Chirality Constraint<\/strong><\/p><p>Biological polymers require uniform molecular handedness, but unguided chemistry produces racemic mixtures that inhibit polymer growth and function (Blackmond, 2010).<\/p><p><strong>System Integration Requirement<\/strong><\/p><p>Life requires multiple interdependent systems\u2014information, metabolism, replication, and boundaries\u2014operating together. Partial systems are nonfunctional and do not evolve gradually (Kauffman, 1993).<\/p><p><strong>Probability and Practical Impossibility<\/strong><\/p><p>Abiogenesis is not logically impossible but dynamically and practically impossible. Known physical processes suppress required outcomes, yielding an expected success rate effectively equal to zero (Miller &amp; Orgel, 1974).<\/p><p><strong>Conclusion<\/strong><\/p><p>Abiogenesis fails due to converging independent constraints. 
When chemistry, information theory, probability, and systems integration are considered together, spontaneous origin scenarios are scientifically implausible.<\/p><p><strong>Final Integrated Conclusion<\/strong><\/p><p>Based on current evidence, no abiogenesis model has empirically demonstrated a verified pathway by which non-living chemistry produces life, and significant unresolved mechanistic and probabilistic barriers remain.<\/p><p><strong>Epilogue: Author Position, Bias, and Methodological Transparency<\/strong><\/p><p>The author acknowledges an explicit bias toward scientific creationism, defined herein not as a religious doctrine but as an inference to the best explanation based on observed biological complexity, information density, and system-level integration.<\/p><p><strong>Comparative Probability Analysis: Abiogenesis vs. Scientific Creationism<\/strong><\/p><p><strong>Closing Perspective<\/strong><\/p><p>Recognizing the limits of unguided origin models does not diminish science\u2014it sharpens it.<\/p><p><strong>Clarification on Origins and Ongoing Evolution<\/strong><\/p><p>Origin is inseparable from evolution because evolutionary mechanisms require an already functional, information-bearing system.<\/p><p><strong>Alternative Origin Models and Shared Constraints<\/strong><\/p><p><strong>Predictions and Testability<\/strong><\/p><p><strong>Limitations and Scope<\/strong><\/p><p><strong>Methodological Assumptions Overview<\/strong><\/p><h1>About the Author<\/h1><p>Peter Zacharoff is an interdisciplinary scholar who has completed formal academic training across seven institutions of higher education, earned three academic degrees, and pursued additional doctoral-level study to all-but-dissertation (ABD) status.<\/p><h1>Artificial Intelligence Disclosure<\/h1><p>This paper was developed with the assistance of OpenAI\u2019s ChatGPT large language model.<\/p><h1>Copyright and Disclaimer<\/h1><p>\u00a9 2026 Peter Zacharoff. 
All rights reserved.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-a577c06 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"a577c06\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-990256d\" data-id=\"990256d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-6581a2b elementor-widget elementor-widget-video\" data-id=\"6581a2b\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;youtube_url&quot;:&quot;https:\\\/\\\/youtu.be\\\/EWjZOxs87yg?t=2&quot;,&quot;autoplay&quot;:&quot;yes&quot;,&quot;play_on_mobile&quot;:&quot;yes&quot;,&quot;loop&quot;:&quot;yes&quot;,&quot;video_type&quot;:&quot;youtube&quot;,&quot;controls&quot;:&quot;yes&quot;}\" data-widget_type=\"video.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-wrapper elementor-open-inline\">\n\t\t\t<div class=\"elementor-video\"><\/div>\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Special Note: If you click on the black ink art picture before you begin reading, it will play machine music in the background as you read, but you need to lower the volume on your device. 
Artificial Intelligence Agency, and Existential Fear A Technical and Philosophical Examination of Perceived AI Risk Executive Summary: Overview of [&hellip;]<\/p>\n","protected":false},"author":38,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-9358","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/pages\/9358","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/users\/38"}],"replies":[{"embeddable":true,"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/comments?post=9358"}],"version-history":[{"count":31,"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/pages\/9358\/revisions"}],"predecessor-version":[{"id":9475,"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/pages\/9358\/revisions\/9475"}],"wp:attachment":[{"href":"https:\/\/onlineseminary.info\/index.php\/wp-json\/wp\/v2\/media?parent=9358"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}