
This article examines the transformations in knowledge and understanding that occur as Artificial Intelligence (AI) evolves from transparent, rule-based systems to a more intricate, self-learning paradigm. It suggests that this transition resembles a “rebellion” against traditional scientific paradigms because it prioritises predictive capability over causal comprehension, that is, understanding why an outcome occurs rather than merely that it will occur. The emergence of a “Black Box” mentality, driven by commercial priorities, may detach technology from reality, culminating in a state of hyperreality as articulated by Baudrillard. The article further explores challenges such as the ‘curse of recursion’ and model collapse resulting from training on synthetic data. It concludes by advocating a “scienthetic” approach, a synthesis of machine intelligence and human wisdom, to address these challenges and maintain the integrity of scientific enquiry.
This article draws on open academic research, technical documents, and philosophical discussions; all findings and conclusions rest on these public sources in order to present an unbiased view of Artificial Intelligence today.
The ongoing evolution of Artificial Intelligence (AI) is transforming the foundational concepts of science. For over two millennia, knowledge has rested on Aristotle’s formal logic and Newton’s definitive cause-and-effect principles. AI, however, is now transitioning from adherence to human logic, as embodied in traditional rule-based systems, to deep learning methodologies. This paradigm shift, termed a “metaphysical rebellion,” emphasises outcome prediction over fundamental explanation, resulting in a “Black Box” approach that risks detaching technology from empirical reality. As businesses rapidly adopt these opaque systems, there is a danger that AI may begin to learn from its ‘own fabricated’ outputs, creating a “hyperreality” in which simulations no longer correspond to any authentic original.

The origins of AI can be traced back to ancient philosophy. Aristotle, renowned for his formal logic, suggested that human cognition could be analysed as a process. His work, particularly the Organon, explored how certain propositions could be validated on the basis of established facts, as demonstrated by the syllogism: all humans are mortal; Socrates is human; therefore Socrates is mortal. This notion implied that the universe could be comprehended because it adhered to specific “Demonstrative Sciences” that human reasoning could define.
In the mid-20th century, early AI research, known as Symbolic AI, endeavoured to mechanise this Aristotelian concept. This era, referred to as the first “AI Summer,” was driven by the conjecture that intelligence could be encapsulated by manipulating symbols in accordance with strict rules. The 1956 Dartmouth Conference inaugurated the AI field, with pioneers such as Allen Newell and Herbert Simon demonstrating that machines could perform complex tasks such as proving mathematical theorems. This “top-down” approach conceptualises intelligence as the aggregation of its essential features, an idea rooted in both Platonic and Aristotelian thought.
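To make this symbol-manipulation paradigm concrete, the following sketch implements a toy forward-chaining inference engine in Python. It is an illustration only: the facts, rules, and function names are invented here and are not drawn from any historical system.

```python
# A minimal sketch of Symbolic AI: forward-chaining inference over
# hand-written rules, in the spirit of the Aristotelian syllogism.
# All facts and rules are invented for illustration.

facts = {("human", "Socrates")}

# Each rule reads: "everything that is <premise> is also <conclusion>".
rules = [
    ("human", "mortal"),
    ("mortal", "perishable"),
]

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# [('human', 'Socrates'), ('mortal', 'Socrates'), ('perishable', 'Socrates')]
```

The defining property of such a system is that every derived fact carries an inspectable chain of justification, precisely the transparency that the “Black Box” systems discussed below abandon.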
Symbolic AI was not merely technical; it also sought to affirm the human mind as the centre of logic. By the late 1980s, however, the limitations of this approach had become apparent: symbolic systems struggled with ambiguity, uncertainty, and real-world complexity. They were unable to manage “common-sense” knowledge, leading to the “AI Winters,” in which the Aristotelian model proved inadequate in practice. This prompted a shift towards connectionism, which aims to emulate the brain’s statistical architecture rather than merely describe its logic.
Newton’s laws provide a framework for understanding the world through simple universal principles. This framework facilitated a scientific method characterised by hypothesis formulation, empirical testing, and iterative learning. In the contemporary era, however, this method faces challenges owing to the proliferation of new information. “Black Box” AI models such as AlphaFold2 now address problems in ways that even their developers do not fully understand. These models prioritise predictive accuracy over explanatory clarity, diverging from Newton’s vision of a transparent, mechanistic universe.

The advent of artificial intelligence has transformed the concept of intelligence. Historically, intelligence was associated with the conscious mind; AI demonstrates that intelligence can manifest independently of consciousness. AI systems can execute tasks such as learning and creation without possessing the attributes of life, challenging traditional notions of intelligence and consciousness. Initially designed to replicate human cognition by identifying patterns, AI has become integral to daily life, functioning as a “technological quasi-subject” that goes beyond mere tools or human analogues to become a fundamental component of society, with both positive and negative implications.
Contemporary artificial intelligence is frequently referred to as a “Black Box” because of the inherent complexity of its underlying mathematical algorithms, which are often challenging for humans to understand. This opacity poses significant challenges to disciplines such as neuroscience and oncology, which require transparent explanations of operational mechanisms; the business sector, conversely, prioritises the utility of AI over an understanding of its internal processes. Industries such as finance, healthcare, and law leverage AI to process vast datasets efficiently, thereby identifying patterns that may elude human detection. However, overreliance on AI without a thorough understanding of its mechanisms is risky, potentially leading to a scenario in which AI is trusted solely on the basis of its efficacy rather than an understanding of its functionality. This situation parallels historical instances in which unexamined beliefs were accepted without scrutiny.
The opacity of AI systems can cause ethical dilemmas, particularly in critical domains such as crime prediction and medical treatment. In the absence of transparent mechanisms for scrutinising AI decisions, crucial details risk being overlooked. To mitigate this, scholars have advocated the “FATE” framework (Fairness, Accountability, Transparency, and Ethics), which evaluates AI systems for fairness and bias. Nonetheless, even with such evaluative measures, the responsible deployment of AI remains a significant challenge. AI systems occasionally produce erroneous yet persuasive outputs, a phenomenon termed “AI hallucination.” In legal contexts, this has resulted in the submission of inaccurate information to courts, highlighting the tension between harnessing AI for efficiency and upholding scientific integrity.

A major challenge facing AI is the phenomenon of self-learning from data generated by its preceding iterations. As AI-generated content proliferates across the internet, subsequent AI models may inadvertently learn from this flawed data, culminating in “model collapse.” This scenario results in the detachment of AI from authentic real-world data, as it becomes preoccupied with prevalent patterns rather than accurate representations.
The Curse of Recursion describes a situation where AI models, when trained using their own outputs (synthetic data), experience Model Collapse. This is a degenerative process that results in a loss of accuracy, diversity, and the capacity to represent uncommon events.
Model collapse is a deteriorative phenomenon in generative AI, occurring when models trained on recursively generated data (synthetic data) from earlier models begin to lose, misinterpret, or “forget” the original, real-world data distribution. This process is typically outlined in three distinct phases, illustrating a gradual decline from high-quality, varied outputs to uniform, incorrect, and low-quality results. The three phases are:
– Early Model Collapse (Loss of Tails): At this stage, the model starts to lose information about the “tails” of the distribution, meaning it ceases to generate rare, unusual, or low-probability events. The overall output quality may still appear high, making this phase challenging to detect immediately. The AI begins to lose its capability to manage edge cases or offer diverse perspectives.
– Late Model Collapse (Loss of Structure): The model deteriorates further, losing the core structure and subtleties of the original data, and starts producing highly repetitive or nonsensical content. The model moves towards a single dominant mode (the most common pattern), resulting in “blended” or generic outputs that do not reflect real-world scenarios. The output shows a significantly reduced variance, leading to highly uniform and low-quality results.
– Total/Final Collapse (Irreversible Degradation): This is the hypothetical final stage where the model’s output almost entirely diverges from the original data. The model becomes trapped in a “self-consuming loop,” only regenerating its own distorted outputs, leading to a complete disconnection from reality. The model effectively fails, providing no usable information and sometimes generating entirely unrelated or “nonsensical” content.
Mathematical evidence indicates that, without the infusion of new authentic content, this collapse is inevitable; a continual influx of high-quality real samples is essential to maintain accuracy. This “Ouroboros effect” suggests that even when additional synthetic data are added, the absence of genuine human data undermines the robustness of the models.
The “curse of recursion” disrupts conventional machine learning principles because the addition of synthetic data fails to recover lost information.
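A toy experiment makes the mechanism tangible. Assuming, purely for illustration, that each “model” is a one-dimensional Gaussian fitted by maximum likelihood to samples drawn from its predecessor, the expected variance shrinks by a factor of (n - 1)/n per generation, so rare tail events vanish first and the distribution eventually collapses towards a point. The figures below are illustrative, not taken from the article’s sources.

```python
# Toy illustration of recursive training and model collapse: each
# generation fits a Gaussian to n samples drawn from the previous fit.
# The maximum-likelihood variance estimator is biased by (n - 1) / n,
# so the variance decays in expectation and the tails disappear first.
import random

def fit_next_generation(mu, sigma, n=100):
    """Sample n points from N(mu, sigma^2), then refit mean and sigma."""
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    new_mu = sum(samples) / n
    new_var = sum((x - new_mu) ** 2 for x in samples) / n  # biased MLE
    return new_mu, new_var ** 0.5

mu, sigma = 0.0, 1.0
for generation in range(501):
    if generation % 100 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.4f}")
    mu, sigma = fit_next_generation(mu, sigma)
# sigma drifts towards zero: later generations can no longer represent
# the rare events that lay in the tails of the original distribution.
```

Injecting fresh real-world samples at every generation is, in this toy setting, the only way to arrest the decay, which mirrors the point that synthetic data alone cannot recover the lost information.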
This AI training loop not only diminishes model quality but also risks detaching technology from real-world facts, a concern illuminated by Jean Baudrillard’s notions of “simulacra” and “hyperreality.” Baudrillard postulated that in contemporary society signs have forfeited their original meanings and become mere copies without originals: in a world dominated by media and excessive consumerism, signs, images, and symbols no longer correspond to an actual, underlying reality (a “referent”). Instead, they merely point to other signs, forming a self-referential system that replaces reality itself. AI represents the “digital simulacrum”: these systems, having learned from billions of human words, lack real-life experience, so when an AI responds, it neither “thinks” nor “feels”; it merely simulates.

According to Baudrillard’s framework, AI content progresses through the following stages:
– Reflection of a Basic Reality: Initial AI endeavours aimed to emulate human cognitive processes.
– Masking and Perverting a Basic Reality: Deep learning identifies patterns but also introduces biases and errors.
– Masking the Absence of a Basic Reality: AI-generated media, such as deepfakes, engender false perceptions of an original author or subject.
– Pure Simulation (Hyperreality): The copy becomes more “real” or appealing than the original; an AI-enhanced essay appears more polished than a human-authored one, and an AI-filtered image seems more attractive than reality.
The concept of “hyperreality” suggests that individuals are not deceived by simulations but are instead attracted to them. The appeal of a flawless illusion often surpasses that of a “raw” or “imperfect” natural reality. As hyperreality prevails, individuals find themselves surrounded by replicas that refer only to themselves, resulting in a “desert of the real.” This phenomenon prompts a disturbing enquiry: If artificial intelligence can expertly replicate human desire and creativity, what are the implications for the authenticity of our own emotions?
The phenomena of the “Black Box” and “Recursive Loop” manifest as “context chaos” and “context drift” within real-world systems. In the realm of software engineering, the daily utilisation of AI coding assistants leads to a “gradual erosion of consistency.” Each AI tool possesses a distinct interpretation of what constitutes “good code,” resulting in “context chaos” where specific guidelines are disregarded, necessitating increased rework.
In architecture, this chaos culminates in a “spectacle of repetition.” AI generates designs by identifying patterns from previous works, often neglecting profound ideas from history and culture. This risk results in horizontal expansion—a proliferation of sameness—without meaningful vertical growth or genuine innovation. Architects who passively employ AI become entangled in a loop of data manipulation that fails to engage with the historical context of a community.
In finance, the outcome is a workflow that disintegrates when managing multiple portfolios. Manual processes for rebalancing and trade coordination become traps of inconsistent data. AI platforms such as DataGrid attempt to automate these tasks, yet they rely on data that requires frequent verification to prevent compliance issues. In a rapid, AI-driven market, a 60/40 rule can drift into an 80% equity allocation over time if the “endless loop” of data verification fails.
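The drift from a 60/40 target is simple compounding arithmetic: if equities grow faster than bonds and no rebalancing occurs, the equity share creeps upward year after year. The sketch below uses hypothetical return figures chosen only to illustrate the mechanism.

```python
# Hypothetical illustration of allocation drift without rebalancing.
# Starting from a 60/40 equity/bond split, unequal growth rates push
# the equity share upward; the return figures below are invented.

equities, bonds = 60.0, 40.0              # initial allocation
equity_return, bond_return = 0.10, 0.02   # assumed annual returns

for year in range(1, 16):
    equities *= 1 + equity_return
    bonds *= 1 + bond_return
    share = equities / (equities + bonds)
    if year % 5 == 0:
        print(f"year {year:2d}: equity share = {share:.0%}")
# year  5: equity share = 69%
# year 10: equity share = 76%
# year 15: equity share = 82%
```

Under these assumptions the mandate is silently violated within roughly fifteen years, which is why the verification loop described above cannot be allowed to fail.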
AI in Military Command and Weapon Systems

The integration of artificial intelligence into military command and weapon systems is intended to accelerate decision-making to match computational speeds. However, this integration may diminish the role of human judgement by relying increasingly on predetermined algorithmic models. AI systems such as COA-GPT reduce complex battlefield scenarios to quantifiable metrics, for instance a “Total Reward” score that weighs factors such as enemy neutralisation. This approach presupposes a clockwork universe, that is, a situation or strategy that is completely foreseeable, governed by fixed rules, and executed with mechanical precision, like a flawlessly operating machine requiring no intervention; it fails to account for the inherent unpredictability of warfare. Dependence on such models can result in algorithmic fundamentalism, in which model outputs are perceived as infallible and ethical and human considerations are disregarded.
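This critique can be made concrete with a deliberately crude sketch of a “Total Reward” style objective. The factor names and weights below are invented; COA-GPT’s actual scoring is not described in the sources, and the point of the example is precisely what such a scalar omits.

```python
# A deliberately crude sketch of a "Total Reward" style objective.
# All factor names and weights are invented; the point is that a single
# scalar erases context, ethics, and uncertainty.

def total_reward(enemy_units_neutralised: int,
                 own_losses: int,
                 objectives_held: int) -> float:
    """Collapse a battlefield situation into one number."""
    return (10.0 * enemy_units_neutralised
            - 8.0 * own_losses
            + 5.0 * objectives_held)

# Two very different situations can receive an identical score: the
# metric cannot distinguish a lawful strike from an unlawful one.
print(total_reward(enemy_units_neutralised=4, own_losses=0, objectives_held=2))  # 50.0
print(total_reward(enemy_units_neutralised=5, own_losses=0, objectives_held=0))  # 50.0
```

Whatever the real weighting scheme, any such scalar is blind to everything it does not encode, which is the sense in which it enforces a clockwork view of warfare.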
The “Black Box” nature of AI presents challenges to International Humanitarian Law, as it complicates the attribution of responsibility when weapon systems select targets on the basis of ambiguous sensor data. This creates an accountability gap in which neither the commander nor the programmer can be held responsible for errors. Critics argue that the 2023 revision of US DoD Directive 3000.09 perpetuates this ambiguity, permitting bots to control bots without human oversight.
The most significant risk is the potential for a “Flash War,” in which AI systems interacting with one another could drive rapid escalation. Like the 2010 financial “flash crash,” military AI systems might initiate uncontrollable sequences. When systems operate at speeds beyond human cognitive capacity, meaningful human control is compromised, and commanders may be relegated to merely endorsing machine-driven actions.
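A back-of-the-envelope comparison conveys the timescale problem. Assuming, hypothetically, that two automated systems counter each other every 50 milliseconds while a single human review cycle takes five minutes, thousands of machine exchanges occur before any human can intervene.

```python
# Hypothetical timescales for a "flash" escalation: two automated
# systems react to each other in milliseconds, while one human review
# cycle takes minutes. All figures are invented for illustration.

machine_reaction_s = 0.050   # 50 ms per automated counter-move
human_review_s = 300.0       # 5 minutes for one human review cycle

exchanges = int(human_review_s / machine_reaction_s)
print(f"automated exchanges before one human review completes: {exchanges}")
# automated exchanges before one human review completes: 6000
```

At such ratios, a “human in the loop” reduces to ratifying whatever the machines have already done, which is precisely the relegation of commanders described above.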
In defence operations, the absence of real-time battlefield data necessitates the use of “synthetic data” to train Intelligence, Surveillance, and Reconnaissance (ISR) systems. This practice can lead to several issues: brittleness, where ISR algorithms may fail to adapt to unforeseen changes, and data poisoning, where adversaries such as China and Russia could introduce erroneous data to cause AI systems to misidentify targets or terrain.
Shajareh Tayyebeh Primary School Tragedy: On February 28, 2026, an attack occurred at Shajareh Tayyebeh primary school, exemplifying what a study describes as a metaphysical rebellion. While the media has termed this incident an “AI failure,” it is more accurately characterised as a systemic epistemological failure. The rapid processing capabilities of AI have outpaced human ability to verify facts and ethical considerations.

The error did not stem from a software malfunction but from reliance on outdated human-managed data. The Defence Intelligence Agency (DIA) erroneously classified the school as part of a nearby military base, a designation that had not been revised despite the school’s civilian status for more than a decade. AI, utilising the Maven Smart System and Anthropic’s Claude, exacerbated this human error, facilitating the U.S. military’s rapid targeting of 1,000 sites during Operation Epic Fury and accelerating decision-making to a pace beyond human oversight.
Is it an “AI Failure”? From a technical standpoint, the AI functioned as intended, swiftly processing incorrect data. However, from an ethical perspective, it failed in three significant ways:
– The system’s rapid decision-making impeded thorough human review. Its speed surpassed human cognitive processing, leading to the acceptance of AI-generated recommendations.
– Dependence on Maven and Claude resulted in “algorithmic fundamentalism.” Human analysts placed undue trust in the AI’s determinations, assuming that they were adequately vetted.
– While AI can identify “Points of Interest,” it lacks the capacity to discern contextual changes, such as fresh paint or playgrounds, that indicated the school’s non-military status.
This tragedy underscores the peril of prioritising tactics over strategy. Although the AI efficiently identified targets, it failed to align with the actual ground situation. This represents a failure of the socio-technical system, wherein human knowledge was obsolete, and AI operated too swiftly for effective intervention.
Toward a “Scienthetic” Method

To address this challenge, some researchers have proposed a “scienthetic” method that integrates AI with human expertise. This approach acknowledges that while AI excels at identifying patterns in complex data, human experts are needed to evaluate those patterns and propose them for further investigation. By combining machine intelligence with human insight, the scienthetic method seeks to overcome the limitations of both the traditional scientific method and purely synthetic “Black Box” approaches.
This integration involves several key strategies:
– Advancing beyond retrospective explanations to incorporate interpretability into the training process from the outset. This entails mapping neuron activations (each artificial neuron weighs its input data, sums them, and applies a mathematical function that introduces non-linearity before transmitting a signal; a minimal sketch follows this list) and tracing information flow so that the AI’s reasoning can be scrutinised and built upon.
– Employing ContextOps and structured context files to anchor AI behaviour to explicit examples and architectural decisions. This reduces “drift and delay” by up to 86% and ensures that AI outputs align with institutional memory.
– Embedding “Ethical Firewall Architectures” deep within the AI’s decision-making core. This ensures that every action, from sensor inputs to high-level strategies, passes through an ethical filter supported by formal proofs.
– Architects, lawyers, military leaders, and medical professionals must assume a leadership role in the integration of artificial intelligence (AI), rather than merely following its advancements. These professionals must prioritise ethical considerations and maintain a focus on human welfare, even amid technological evolution.
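To ground the first of these strategies, the sketch below shows the computation that interpretability work seeks to map: an artificial neuron forming a weighted sum of its inputs and applying a non-linear function. It is a minimal, self-contained illustration, not an excerpt from any production system.

```python
# Minimal sketch of the computation that interpretability tools trace:
# a neuron weighs its inputs, sums them with a bias, and applies a
# non-linear function (here, ReLU) to decide what signal to transmit.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through ReLU."""
    pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, pre_activation)

# Recording pre-activation values across many inputs is one simple way
# to "map" what a neuron responds to, as proposed above.
print(neuron(inputs=[0.5, -1.0, 2.0], weights=[0.8, 0.3, 0.5], bias=-0.2))
# approximately 0.9
```

Mapping these activations at scale, and tracing how information flows between neurons, is what turns a retrospective explanation into an interpretable design.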

The transition from traditional paradigms to innovative frameworks alters the perception of cognition. While this shift may render technology seemingly distinct from reality, it concurrently enhances our understanding of cognitive processes. The most valuable attribute of AI may not lie in its ability to replicate human behaviour precisely but rather in its capacity to illuminate our core values.
The future trajectory of AI is conditional on the careful selection and evaluation of its outputs. By adhering to research principles such as quality and integrity, scientists can combat artificial ignorance and ensure that technology remains aligned with the truth. AI need not undermine ethics or reality; instead, it can facilitate a responsible synthesis of technology and philosophy, thereby advancing a reimagined and improved future for humanity.
(Author’s Note: It’s somewhat ironic that while scientists are creating machines to handle information more effectively, the language they use to describe these machines is becoming more convoluted and difficult to understand. The intricate terminology in AI research is often referred to by the intriguing term “academic jargon bloat.”)