Saturday, June 07, 2025

Giant Dopes

      I am convinced the people helming the so-called "Artificial Intelligence" efforts don't understand how a Large Language Model works -- or else they think that's all anyone is: a mechanism that, given a topic and a general direction, predicts the most probable next words and phrases, over and over:

     "Our vision is that, over time, A.I. would become part of the core infrastructure of higher education." 

     That's Leah Belsky, OpenAI’s Vice President of Education.  She's talking about a contraption with no mechanism for distinguishing between truth and fiction, no sense of right and wrong, no notion of the difference between meaningful art and utter dreck.  This is not a matter of preferring Rodin to Picasso but of not understanding the distinction between the work of either of them and a mud pie -- or a cow pie.  The software knows nothing of pain or joy, truth or beauty, lies or horror.

     It's not that AI doesn't care; it's that there is nothing there to care.  The output may be telling you that Jane Austen and Mary Wollstonecraft Shelley were sisters, but it's not because the machine believes that: there's nothing inside it to do the believing.  It was just the most plausible sequence of words in response to your particular question at that particular point.  The haunting truth is that there is no ghost in the machine.  And it appears that most "AI" executives believe there's no ghost of personhood inside anyone around them, either.

     Trained on plagiarized works, even the very best "AI" anyone has isn't doing anything more than coming up with the most likely logical and grammatically correct sequence of words in response to a prompt.  It's an impressive feat, and I'd trust a good (and well-disciplined) AI to create a meeting transcript, or a top-of-the-line one to suss out whether I'd correctly employed the subjunctive (as if!).  But you cannot learn from these machines; they cannot be trusted to referee facts, let alone exercise judgment.
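
     For the curious, the mechanism described above can be sketched in a few lines of Python.  What follows is a toy bigram counter over an invented scrap of text -- purely an illustration, nothing like the neural networks behind a real LLM -- but the loop is the same idea: emit the most probable next word, over and over.

    # Toy sketch of the "predict the most probable next word" loop.
    # Real LLMs use neural networks over subword tokens; this uses a
    # simple bigram frequency table over a made-up corpus (an
    # assumption chosen purely for illustration).
    from collections import Counter, defaultdict

    corpus = ("the model predicts the next word and the next word "
              "follows the last word the model saw").split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(start, length=8):
        """Greedily emit the single most probable next word, repeatedly."""
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))
    # -> "the model predicts the model predicts the model predicts"

     Run it and you get fluent-looking output with no notion of truth anywhere in the loop: the table doesn't "know" anything; it only counts what tends to follow what.  Scale the same idea up enormously and you get whole paragraphs instead of a three-word loop -- but still nothing inside doing any believing.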

     It should be no more legal or socially acceptable for a private entity to build and make use of LLM "AI" in open-ended discourse, counseling or teaching than it is for a private company or individual to stockpile sarin gas or make their own nuclear weapons.

2 comments:

Anonymous said...

(This is Bob K.)  I could not agree with you more. In addition, many times when the media refers to AI, they imply only a hazy definition of what it is. They suggest people's heads will be placed in videos where said people never were, and words will be put in mouths that never said them. Essentially, they are associating AI with lying, cheating, and stealing. So why don't they just say, "lying, cheating & stealing," instead of mooning over the term like loons over Lake Bemidji?

Anonymous said...

AI is turning into a near-worthless buzzword, I fear.

It reminds me of the "Digital Ready!" stickers slapped on hi-fi gear, speakers, and headphones in the late '70s as a way to profit off the arrival of CDs.

Someone on one of the Reddit radio subs was asking about a radio of some sort, and how it was supposedly more revolutionary than any other because it used AI.

I tried, really I did, to explain that a basic gadget like that could not possibly have any real form of AI in it, but I failed. I just hate seeing people get ripped off by sleazy marketing...