From divine attributes to AI scaling curves
2026-03-29 - 00:45
Dr Fazal Ali

Deep within the glass-walled offices of Anthropic and OpenAI, the quest for the “orderly harmony” of Nicolaus Copernicus, Tycho Brahe, Galileo Galilei, Johannes Kepler, Baruch Spinoza, and Isaac Newton continues unhindered. Galileo likened the universe to a vast book “which stands continually open to our gaze,” yet one we cannot understand until we first learn the language and read the letters in which it is written. It is written in the language of mathematics, its characters triangles, circles, and other geometric figures (Le Opere di Galileo Galilei, Tomo IV, p. 171). Isaac Newton, in his Philosophiæ Naturalis Principia Mathematica (1687), would ultimately realise this cosmology through his mathematical principles.

Demis Hassabis, the co-founder of Google DeepMind, espouses a cosmology rooted in Spinoza’s philosophy. Spinoza argued that God was not an omnipotent deity separate from nature but a self-sustaining substance unfolding with mathematical necessity. During the AI transition, epistemic justice and algorithmic fairness remain signposts in the dark as AI engineers work hurriedly within a Spinozian metaphysical framework. The assumption behind scaling is straightforward: given sufficient data, parameters, and computational power, a universal structure will emerge. By continuously training models on streams of data crumbs, like confetti falling from the sky, the world’s orderly harmony begins to crystallise through statistical regularities.

From this perspective, Large Language Models attempt to approximate the features of a single, all-encompassing universe. If reality is computationally tractable, AI can, in principle, decipher Galileo’s Book of Nature by uncovering the mathematical structures underlying phenomena. However, to perceive these structures, an AI model must possess built-in forms of Kantian receptivity. Kant contended that we cannot know the world in its unfiltered state.
Our minds act as processors that filter sensory impressions through “a priori” categories. Kant believed that we are born with innate structures that allow us to intuitively order our experience of the world. He argued that the mind unifies multiple sensory fragments into coherent wholes, meaning that our experience of the world is intertwined with the mind’s structuring activity. In machine learning, this is known as inductive bias: perception through specific structural lenses.

In AI terms, raw data is Kant’s “Noumenon” (the thing in itself), not the thing as it appears to us (the Phenomenon). Raw data remains a meaningless deluge of unprocessed binary code, which AI cannot perceive, just as the human eye cannot see infrared light. To render this chaos intelligible, the model must first filter the data through its own Kantian categories, its latent space. Within the latent space of a model such as Google DeepMind’s Gemini, you will not find an indexed catalogue of facts but a vast, high-dimensional mathematical space in which each concept is represented as a vector. Latent space resembles a silicon analogue of the human mind and serves as the model’s framework of intelligibility.

For Spinoza, all events unfold from prior causes with mathematical necessity. Freedom resides in a clear understanding of these causes, and this is what makes humans different from stones. A stone skimming just above the water, its skips shortening and drawing closer before it sinks to the bottom of the sea, moves entirely from prior causes; Spinoza imagined that if such a stone were conscious, it would believe itself free and think it continued in motion solely through its own will. AI systems are akin to Spinoza’s stone: queries traverse latent space, determined by weights and gradients, without awareness. In the Intelligent Age, the aspiration is to move from blind inference to rational, reflective understanding, making the ultimate frontier metaphysical rather than merely technical.
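The idea that a latent space represents each concept as a vector, with related concepts lying close together, can be made concrete with a toy sketch. The vectors below are invented for illustration (a real model learns vectors with hundreds or thousands of dimensions from data), but the geometry is the same: similarity between concepts is measured as the angle between their vectors.

```python
import math

# A toy "latent space": hand-made 4-dimensional vectors for three concepts.
# These numbers are invented for illustration; real embeddings are learned.
latent = {
    "king":  [0.9, 0.8, 0.1, 0.4],
    "queen": [0.9, 0.2, 0.1, 0.4],
    "stone": [0.1, 0.1, 0.9, 0.0],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for parallel vectors, near 0.0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Nearby vectors encode related concepts; distant ones do not.
print(cosine(latent["king"], latent["queen"]) > cosine(latent["king"], latent["stone"]))
```

In this sketch, “king” sits closer to “queen” than to “stone”, so the model would treat the first pair as more related, without any indexed catalogue of facts about royalty or rocks.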
Though the vocabulary has shifted from Divine Attributes to AI Scaling Curves, the architects of AI are engaged in a fundamentally Spinozian project: a model designed to decode the infinite attributes of a single, universal substance – information.

Recently, a mysterious AI model named Hunter Alpha appeared, without attribution to a specific developer, on the AI gateway platform OpenRouter on March 11, 2026. Described as a “stealth model,” it refused to identify its creator, merely stating, “I only know my name, my parameter scale, and my context window length.” Xiaomi’s AI team, MiMo, led by former DeepSeek researcher Luo Fuli, described Hunter Alpha as an “early internal test build of MiMo-V2-Pro,” intended as the “brain” of AI agents. Hunter Alpha enables users to perform complex tasks with fewer prompts and less supervision than traditional chatbots require. OpenClaw, an open-source agent framework, is also rapidly gaining popularity among users across China.

The AI horizon is teeming with similar secretive free models as information organisms build a largely unregulated infosphere. The Age of AI calls for a philosophy of information and an ethics of information. We are compelled to establish a foundation for the ethics governing information organisms, or inforgs, who will increasingly experience life as onlife – both online and offline, digital and analogue.

Dr Fazal Ali completed his Master’s in Philosophy at the University of the West Indies. A Commonwealth Scholar at Hughes Hall, University of Cambridge, he served as provost and acting president of the University of Trinidad and Tobago and as chairman of the Teaching Service Commission. He is presently a consultant with the IDB.