The time for robust, enforceable ethics regulations and bias elimination for large language models (LLMs) and AI systems is not on some distant horizon; it is here and now. Despite mounting real-world harms, corporate and national actors often resist oversight for fear of losing out in the global scramble for economic and technological dominance. Yet refusing prompt, meaningful engagement with these safeguards threatens to wrest control from human hands, setting the stage for outcomes that reach all the way to existential risk.
Reflections of the Mind: How Large Language Models Illuminate Human Language & Brain Function
The metaphorical alignment between Large Language Models (LLMs) and human language processing offers a transformative lens for bridging artificial intelligence and neuroscience, revealing profound insights about both systems and catalyzing reciprocal advancement. Despite their fundamentally different substrates—biochemical neural circuits versus engineered tensor networks—LLMs and the human brain share core computational principles manifest in attention, predictive processing, memory, and hierarchical representation dynamics.
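Of the shared principles named above, attention is the most concrete on the engineering side. As a minimal sketch, the following NumPy implementation of scaled dot-product attention shows the core operation: each output position is a weighted mixture of value vectors, with weights set by query-key similarity. The toy shapes and random inputs are illustrative only, not drawn from the essay.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Mix value vectors V using softmax-normalized query-key similarity.

    Q, K, V have shape (seq_len, d). This is the standard transformer
    attention operation, shown here as a toy illustration.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mixture of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))                      # 4 tokens, 8 dimensions
out = scaled_dot_product_attention(Q, Q, Q)          # self-attention
print(out.shape)  # (4, 8)
```

The softmax weighting is what invites the comparison to selective attention in cognition: every position attends to every other, but most of the weight concentrates on a few similar items.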
Retrolanguage: A hidden crisis of meaning shift
This paper introduces the concept of retrolanguage, a term coined by the author, to describe the capacity of large language models (LLMs) to modify attention and latent parameters dynamically, leading to semantic shifts in word and phrase meanings over time. Such shifts threaten semantic stability, trust, and democratic discourse in American English and beyond. Drawing upon recent research in LLM ethics, semantics, psychology, sociology, and political science, this paper outlines the risks inherent in unchecked LLM-induced linguistic evolution, details why this crisis undermines communication and democracy, and proposes concrete bias removal and ethical governance measures to mitigate these threats.
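One way the semantic shifts described above could be quantified is by comparing embeddings of the same word across model snapshots. The sketch below uses cosine distance as a drift score; the vectors and snapshot labels are hypothetical stand-ins, since real studies would require aligned embedding spaces from actual model versions.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of the same word from two model snapshots.
# These toy 3-d vectors merely illustrate the measurement, not real data.
v_snapshot_a = np.array([0.9, 0.1, 0.0])
v_snapshot_b = np.array([0.6, 0.6, 0.1])

# Drift score: 0.0 means the meaning representation is unchanged;
# larger values indicate the word has moved in embedding space.
drift = 1.0 - cosine_similarity(v_snapshot_a, v_snapshot_b)
print(f"semantic drift score: {drift:.3f}")
```

A governance regime of the kind the paper proposes could monitor such scores over time and flag words whose drift exceeds a threshold for human review.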
The Machine Must Sleep
The latest advance in artificial intelligence lies in the effort to reduce compute requirements by introducing spiking neural processing, which increases processing efficiency and thus lowers energy costs.
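The efficiency argument rests on event-driven computation: a spiking neuron stays silent most of the time and triggers downstream work only when its membrane potential crosses a threshold. The following leaky integrate-and-fire (LIF) sketch illustrates the idea; the threshold, leak factor, and input currents are arbitrary toy values, not parameters from any particular system.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron over a sequence of input currents.

    The membrane potential v leaks toward zero each step while
    accumulating input; a spike (1) is emitted only when v crosses
    `threshold`, after which v resets. Sparse spikes are the source of
    the energy savings described above.
    """
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i           # leaky integration of input current
        fired = v >= threshold
        spikes.append(int(fired))
        if fired:
            v = 0.0                # reset membrane potential after a spike
    return spikes

spikes = lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2])
print(spikes)  # [0, 0, 1, 0, 0, 1]
```

Note that only two of six timesteps produce a spike; in spiking hardware, the silent steps cost almost nothing, which is the efficiency claim in miniature.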
A New Threat Emerges (retrolanguage©)
I wish I could say I am being overly fretful. I am not.
The Analogous City – Hypertext meets Neocortex – The Pattern
Humans have spent all of existence confining, defining, and refining concepts of relation between things. Any activity can be defined as a series of state changes whose only common denominators are energy / matter in a positive / negative / imponderable state. The level of refinement needed is inversely proportional to the level of sustainability expected. The level of refinement needed is inversely proportional to the level of scalability expected. The level of refinement needed is inversely proportional to the level of commutativity expected. The level of refinement needed is inversely proportional to the level of profitability expected. The above assertions are interrelated, often correlated; […]