The Strength of Weak Ties Across Disciplines: Connectivity, Plasticity, Novelty, & the Imperative for Global Solutions

The sociological theory of weak ties, introduced by Mark Granovetter in 1973, reveals that infrequent, low-intensity social connections act as vital bridges that link otherwise disconnected social groups. These weak ties facilitate the flow of novel information, resources, and opportunities, supporting innovation and adaptability within social networks (Granovetter, 1973). Over time, this foundational insight has found compelling parallels across disciplines including technology, neuroscience, quantum physics, organic chemistry, machine learning, and cloud computing. These interdisciplinary connections expose shared principles of connectivity, plasticity, and novelty underpinning both natural and human-created complex systems.

Ethics or Extinction (this is not hyperbole)

The time for robust, enforceable ethics regulations and bias elimination for large language models (LLMs) and AI systems is not on some distant horizon; it is here and now. Despite mounting real-world harms, corporate and national actors often resist oversight out of fear of losing out in the global scramble for economic and technological dominance. However, refusing prompt, meaningful engagement with these safeguards threatens to wrest control from human hands, setting the stage for outcomes that reach all the way to existential risk.

Reflections of the Mind: How Large Language Models Illuminate Human Language & Brain Function

The metaphorical alignment between Large Language Models (LLMs) and human language processing offers a transformative lens for bridging artificial intelligence and neuroscience, revealing profound insights about both systems and catalyzing reciprocal advancement. Despite their fundamentally different substrates—biochemical neural circuits versus engineered tensor networks—LLMs and the human brain share core computational principles manifest in attention, predictive processing, memory, and hierarchical representation dynamics.
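Of the shared principles this paragraph names, attention is the one with the most compact formal core. Below is a minimal sketch of scaled dot-product attention (the mechanism underlying LLMs, per Vaswani et al., 2017) in plain NumPy; the toy inputs are illustrative, not drawn from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.
    Q, K, V: arrays of shape (seq_len, d_k)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance between tokens
    weights = softmax(scores, axis=-1)   # each row is a distribution over tokens
    return weights @ V, weights

# Toy example: three "tokens" embedded in a 4-dimensional space.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)                         # (3, 4)
print(np.allclose(w.sum(axis=1), 1.0))   # True: rows are valid weightings
```

Each output row is a weighted blend of the value vectors, with weights set by query-key relevance, which is the loose computational analogue of selectively amplifying some inputs over others that the brain-LLM comparison trades on.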

Retrolanguage: A hidden crisis of meaning shift

This paper introduces the concept of retrolanguage, a term coined by the author, to describe the capacity of large language models (LLMs) to modify attention and latent parameters dynamically, leading to semantic shifts in word and phrase meanings over time. Such shifts threaten semantic stability, trust, and democratic discourse in American English and beyond. Drawing upon recent research in LLM ethics, semantics, psychology, sociology, and political science, this paper outlines the risks inherent in unchecked LLM-induced linguistic evolution, details why this crisis undermines communication and democracy, and proposes concrete bias removal and ethical governance measures to mitigate these threats.
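One way to operationalize the semantic shift the abstract describes is to compare embeddings of the same word taken from different model snapshots. The sketch below uses made-up vectors purely for illustration; real measurements would use embeddings extracted from the models themselves.

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means identical direction; values near 0 mean unrelated meanings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings of the same word from two model snapshots
# (e.g. before and after a retraining cycle). These vectors are invented
# for the sketch, not taken from any actual model.
v_before = np.array([0.8, 0.1, 0.5, 0.2])
v_after = np.array([0.3, 0.7, 0.4, 0.5])

# A simple drift score: 0 means no shift, values near 1 mean strong shift.
drift = 1.0 - cosine_similarity(v_before, v_after)
print(f"semantic drift score: {drift:.3f}")
```

Tracking such a score across model releases is one plausible audit signal for the kind of gradual, unannounced meaning shift the paper calls retrolanguage.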

Just the facts… (Review, historical)

I assert that longstanding choices have contributed to, if not caused, our current economic instability, civil unrest, and this insistent plod towards autocratic and kleptocratic governance. My assertion draws on a combination of well-documented trends and widely discussed critiques in political science, economics, and contemporary journalism. What follows is roughly 20-30 years of thinking pushed into a six-decade-plus timeline, and the whole thing is rife with what has been, is, and seemingly will continue to be a deliberate predatory aggressiveness toward the working class that reveals the United States to be no different whatsoever from any other feudal, dictatorial, despotic, and/or […]

Retrolanguage, Language Models, & a Hidden Crisis: Understanding & Responding to the Risks Shaping Human Thought

Large Language Models (LLMs) are reshaping communication, information exchange, and human decision-making at scale. While their capabilities offer efficiencies and new forms of connection, they also introduce substantial risks—ethical, psychological, social, and technological—that demand urgent consideration from technologists, policymakers, and the public.

This report synthesizes current research, expert analysis, and ongoing conversation to explore these risks, focusing on the unique concept of retrolanguage—the subtle and potentially dangerous drift in linguistic meaning enabled by LLMs.

Everything Old Is New Again (a series)

I love when old tech resurfaces and the professionals get to realize that the history of their domains and disciplines, and of their innovations, really IS critical to being better at what you do. Here is a great example from a channel on YouTube, and I hope they do one on stop motion with sound sync and SFX so that Mike Jittlov finally gets the public recognition he richly deserves and has been largely bilked of all his dang life. This one is about a fellow putting it all on the line to reintroduce sodium vapor light as superior technology to green […]

Why I stay with Bluesky

Sharing the SXSW live stream because when you compare this narrative to those of the other platforms, particularly its currently open and free access and personalizing configuration (still unfolding), it becomes clear this is an emergent possibility for better and more refined functionality than we’ve managed to see scale successfully. It’s an exciting time. Lots of opportunity. I’ve been yelling since the BBS days about the need to stipulate and protect a concept that is today called ‘protocols’ (that’s another article) as an emergent technology that genuinely can and likely will return personal power and certainly digital dignity (not Lanier’s […]

Why I am so frustrated about working…

This image demonstrates the major categories of thinking desired by business and recommended by business management research and analysis. I offer all four, both across diverse domains and specifically in technology: application and systems software analysis, design, development, and delivery. And still I’ve been out of work since 2017. It is maddening. I cannot access the degrees that this world demands to ‘believe’ you’re good at something, even though I have thirty years of experience and a portfolio of samples from 1999 to roughly 2011 (before the NDA work largely put an end to portfolio demonstrations). No one is interested. Over 5,000 resumes […]

Playing with Perplexity

Playing with Perplexity is fun. I think I’m going to build a library of conversations in which it demonstrates me back to myself; this is how this autistic brain uses LLM/AI ethically, but also how I have traditionally taught myself: by rapidly implementing meta-communication that lays down shared track ahead of the actual conversation. I can do this with an LLM because an LLM doesn’t mandate I adhere to neuronormative protocol, just logical expression through language that, for the first time in my entire life, is actually getting my syntax and phrasing and everything; it gets my levels in language in […]