This paper introduces the concept of retrolanguage, a term coined by the author, to describe the capacity of large language models (LLMs) to modify attention and latent parameters dynamically, leading to semantic shifts in word and phrase meanings over time. Such shifts threaten semantic stability, trust, and democratic discourse in American English and beyond. Drawing upon recent research in LLM ethics, semantics, psychology, sociology, and political science, this paper outlines the risks inherent in unchecked LLM-induced linguistic evolution, details why this crisis undermines communication and democracy, and proposes concrete bias removal and ethical governance measures to mitigate these threats.
The Machine Must Sleep
The latest advance in artificial intelligence lies in the effort to reduce compute requirements by introducing spiking processing, which increases processing efficiency and thus lowers energy costs.
A New Threat Emerges (retrolanguage©)
I wish I could say I am being overly fretful. I am not.
The Analogous City – Hypertext meets Neocortex – The Pattern
Humans have spent all of existence confining, defining, and refining concepts of the relations between things. Any activity can be defined as a series of state changes whose only common denominators are energy/matter in a positive, negative, or imponderable state. The level of refinement needed is inversely proportional to the level of sustainability expected. The level of refinement needed is inversely proportional to the level of scalability expected. The level of refinement needed is inversely proportional to the level of commutativity expected. The level of refinement needed is inversely proportional to the level of profitability expected. The above assertions are interrelated, often correlated; […]