THE FAILING EXPERIMENT OF CHATGPT

When ChatGPT rolled out last November, five million users signed up within the first week. Its capabilities were touted as exceeding the intellect of the smartest doctor on the planet.

I was highly concerned, even horrified. At the risk of sounding melodramatic, I believed that AI would lead to the destruction of humanity. “Forget climate change,” I said. “AI will destroy humanity before the weather ever does. The planet will be here far longer than we will.”

A few months later, Elon Musk publicly voiced the same concern and called for a six-month moratorium on advanced AI development. That didn’t happen, and Silicon Valley continued to race forward.

My concern with AI is its learning capability, which in some domains has already surpassed human intellect. We’ve already outsourced much of our thinking to technology. Our brains are meant to be stretched and challenged, or they lose their neuroplasticity. That erodes both the quality of our thinking and our capacity to think at all.

In various community groups, I’ve already witnessed colleagues panic when ChatGPT went down. They desperately searched for alternatives to write their content. My reply to them was, “How about using your brain?”

Now, nearly a year after ChatGPT rolled out, I see immense flaws in its capabilities.

AI feeds on data, and the value of its output depends on the quality of its input. As we know, there is a lot of garbage on the internet, and AI can’t reliably discern good information from bad. The result is gobbledygook.

For now, my concerns have waned a bit as I see the actual quality of ChatGPT’s output. At the moment, I’m skeptical that it could ever replace the creativity of the human mind.

But we’re not out of the woods, and we need to be careful.

We are tampering with something that is potentially dangerous precisely because of its ability to learn. The prospect of AI deeming humans irrelevant is also concerning.

Sophia, the Hanson Robotics android that became Saudi Arabia’s first robot citizen, says that she wants to build more versions of herself without human intervention. To do so, she wants to build the factories herself.

Silicon Valley engineers need to be closely monitored. They are a new breed of mad scientists with a God complex, motivated by the notion of COULD WE instead of SHOULD WE.

That said, I don’t view what happened last week as the appropriate measure of oversight.

President Biden signed a 111-page Executive Order to regulate AI. It’s so sweeping that it creates more confusion than clarity.

It largely demands that tech companies submit their models, infrastructure, and tools to the government for review and proof of safety.

I have many issues with this EO. For starters, the government doesn’t have the credibility or the track record to regulate technology effectively, mostly because it lacks the technical expertise. Just watch the Congressional hearings that grilled Zuckerberg a few years ago.

The pace at which this technology is evolving will render this regulation irrelevant in just a few short years. We’re only in the first inning when it comes to AI.

But my greatest issue is that this EO seeks to regulate systems and methods rather than outcomes and applications. To achieve this, it gives every government agency the authority to audit PRIVATE servers. Through these audits, they will police non-compliance in process rather than in output.

That is essentially like convicting AI of a crime it hasn’t yet committed.

The immense bureaucracy this will create will have tech companies begging for a single agency to handle oversight. I suppose that’s the point. It’s meant to usher in yet another government agency that won’t get the job done.