How AI and ChatGPT affect the way executives do business

Image of a phone with ChatGPT open, in front of a textbook on AI

I’ve been thinking a lot lately about artificial intelligence. I get articles in my inbox daily, it’s on the cover of magazines, and people everywhere are talking about it. The arrival of OpenAI’s ChatGPT and other large language models (LLMs) is a profound technological breakthrough, and it’s just the start.

AI is showing up in more places every day, and it’s important to understand the basics of how it works and why it matters. ChatGPT is an AI-powered language model trained on a huge amount of text from the internet. It’s a tool that can provide information and help you answer questions across a wide range of topics, though it comes with caveats.

As someone who advises senior business executives, I’m diving in to learn more about AI. I want to help clients make intelligent decisions about how to use AI and understand how this rapidly expanding technology will impact their businesses. With that in mind, I’ve been seeking serious, thoughtful sources of information about what AI is — and is not — and its benefits and risks.

Full disclosure: I am not a technologically sophisticated person. But I, like many people, use tech daily for all kinds of tasks without understanding the many details and nuances. For example, I can drive a car without fully knowing how my engine works. Consider chatbots. Most of us have experienced these AI-powered live chat agents on retail sites when they pop up to ask, “How can I help you?” or “What are you searching for today?”

Not everybody is a techie, and that’s fine. What my clients and I need is straightforward information in plain language about how to understand AI. During the coming weeks, I’m going to share what I’m learning.

The black box poses one of the biggest problems with ChatGPT

Remember when your math teacher required you to show your work on a quiz or exam? Or when your executive coach insisted that you explain your thinking to your team, not just give them the answers?

ChatGPT can’t do that.

In a recent webinar from The Economist, computer scientist Abby Bertics explained why. Large language models can’t read words, so everything has to be translated into numbers and then back into words. These models are trained on huge data sets with billions of connections.
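For readers curious about what “translated into numbers and back into words” looks like in practice, here is a deliberately simplified sketch. Real systems like ChatGPT use learned tokenizers with vocabularies of tens of thousands of entries, not a hand-built word list like this one, but the round trip is the same basic idea.

```python
# Toy illustration: a language model never sees words, only numbers.
# Text is encoded into numeric IDs, processed, then decoded back.

def build_vocab(text):
    """Assign each unique word a numeric ID."""
    words = sorted(set(text.split()))
    return {word: i for i, word in enumerate(words)}

def encode(text, vocab):
    """Words -> numbers: the only form the model can actually process."""
    return [vocab[word] for word in text.split()]

def decode(ids, vocab):
    """Numbers -> words: translating the model's output back for us."""
    reverse = {i: word for word, i in vocab.items()}
    return " ".join(reverse[i] for i in ids)

text = "the model reads numbers not words"
vocab = build_vocab(text)
ids = encode(text, vocab)
print(ids)                 # a list of integers, e.g. [4, 0, 3, 2, 1, 5]
print(decode(ids, vocab))  # "the model reads numbers not words"
```

Inside a real model, billions of learned connections sit between the encode and decode steps, which is exactly the part no one can trace.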

The process is simply too large and too complex to trace how ChatGPT arrives at its answers. Even its own programmers don’t fully understand how it works. For example, ChatGPT has proved surprisingly good at accurate translation, even for languages with relatively little training data. No one knows why.

The result is that ChatGPT sometimes generates answers that are inaccurate, biased or otherwise wrong — and there’s no way to check its sources.

What does this mean for business leaders? Don’t assume that text generated by ChatGPT is accurate. Human beings will need to do the fact-checking. That will slow down the process, but it reduces the risk of making decisions based on faulty information or inferences.

I’ll share more information about AI in coming posts. In the meantime, if you’re interested in talking about the potential and risks of AI, please get in touch.

Gail Golden

As a psychologist and consultant for over twenty-five years, Gail Golden has developed deep expertise in helping businesses to build better leaders.

https://www.gailgoldenconsulting.com/