ChatGPT: A Change in How You Use It, and Everything Else to Know – CNET

In late 2022, OpenAI wowed the world when it introduced ChatGPT and showed us a chatbot with an entirely new level of power, breadth and usefulness, thanks to the generative AI technology behind it. Since then, ChatGPT has continued to evolve, including its most recent development: Easy access for everyone.

ChatGPT and generative AI aren’t a surprise anymore, but keeping track of what they can do can be a challenge as new abilities arrive. Most notably, OpenAI now lets anyone write custom AI apps called GPTs and share them on its own app store, while on a smaller scale ChatGPT can now speak its responses to you. OpenAI has been leading the generative AI charge, but it’s hotly pursued by Microsoft, Google and startups far and wide.

Generative AI still hasn’t shaken a core problem — it makes up information that sounds plausible but isn’t necessarily correct. But there’s no denying AI has fired the imaginations of computer scientists, loosened the purse strings of venture capitalists and caught the attention of everyone from teachers to doctors to artists and more, all wondering how AI will change their work and their lives. 

If you’re trying to get a handle on ChatGPT, this FAQ is for you. Here’s a look at what’s up.

What is ChatGPT?

ChatGPT is an online chatbot that responds to “prompts” — text requests that you type. ChatGPT has countless uses. You can request relationship advice, a summarized history of punk rock or an explanation of the ocean’s tides. It’s particularly good at writing software, and it can also handle some other technical tasks, like creating 3D models.

ChatGPT is called a generative AI because it generates these responses on its own. But it can also display more overtly creative output like screenplays, poetry, jokes and student essays. That’s one of the abilities that really caught people’s attention.

Much of AI has been focused on specific tasks, but ChatGPT is a general-purpose tool. This puts it more into a category like a search engine.

That breadth makes it powerful but also hard to fully control. OpenAI has many mechanisms in place to try to screen out abuse and other problems, but there’s an active cat-and-mouse game afoot by researchers and others who try to get ChatGPT to do things like offer bomb-making recipes.

ChatGPT really blew people’s minds when it began passing tests. For example, AnsibleHealth researchers reported in 2023 that “ChatGPT performed at or near the passing threshold” for the United States Medical Licensing Exam, suggesting that AI chatbots “may have the potential to assist with medical education, and potentially, clinical decision-making.”

We’re a long way from fully fledged doctor-bots you can trust, but the computing industry is investing billions of dollars to solve the problems and expand AI into new domains like visual data too. OpenAI is among those at the vanguard. So strap in, because the AI journey is going to be a sometimes terrifying, sometimes exciting thrill.

What’s ChatGPT’s origin?

Artificial intelligence algorithms had been ticking away for years before ChatGPT arrived. These systems were a big departure from traditional programming, which follows a rigid if-this-then-that approach. AI, in contrast, is trained to spot patterns in complex real-world data. AI has been busy for more than a decade screening out spam, identifying our friends in photos, recommending videos and translating our Alexa voice commands into computerese.

A Google technology called transformers helped propel AI to a new level, leading to a type of AI called a large language model, or LLM. These AIs are trained on enormous quantities of text, including material like books, blog posts, forum comments and news articles. The training process internalizes the relationships between words, letting chatbots process input text and then generate what it believes to be appropriate output text. 
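The idea of learning word relationships from text can be sketched with a toy bigram model — vastly simpler than a transformer, and the corpus here is invented for illustration, but it shows the basic notion of counting which words follow which during training and then predicting a likely continuation:

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the books and articles an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude stand-in for the statistical
# relationships a real LLM internalizes during training.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" - it followed "the" twice, others once
```

A real LLM learns from trillions of words and weighs far more context than the single preceding word, but the principle of predicting likely next text from patterns in training data is the same.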

A second phase of building an LLM is called reinforcement learning from human feedback, or RLHF. That’s when people review the chatbot’s responses and steer it toward good answers or away from bad ones. That significantly alters the tool’s behavior and is one important mechanism for trying to stop abuse.

OpenAI’s LLM is called GPT, which stands for “generative pretrained transformer.” Training a new model is expensive and time consuming, typically taking weeks and requiring a data center packed with thousands of expensive AI acceleration processors. OpenAI’s latest LLM is called GPT-4 Turbo. Other LLMs include Google’s Gemini (formerly called Bard), Anthropic’s Claude and Meta’s Llama.

ChatGPT is an interface that lets you easily prompt GPT for responses. When it arrived as a free tool in November 2022, its use exploded far beyond what OpenAI expected.

When OpenAI launched ChatGPT, the company didn’t even see it as a product. It was supposed to be a mere “research preview,” a test that could draw some feedback from a broader audience, said ChatGPT product leader Nick Turley. Instead, it went viral, and OpenAI scrambled to just keep the service up and running under the demand.

“It was surreal,” Turley said. “There was something about that release that just struck a nerve with folks in a way that we certainly did not expect. I remember distinctly coming back the day after we launched and looking at dashboards and thinking, something’s broken, this couldn’t be real, because we really didn’t make a very big deal out of this launch.”


OpenAI CEO Sam Altman announces custom AI apps called GPTs at a developer event in November 2023.

Stephen Shankland/CNET

Think of GPTs as OpenAI’s way of refining ChatGPT’s general-purpose power into specific tools, much as smartphones offer a wealth of single-purpose apps. (And think of GPTs as OpenAI’s attempt to take control over how we find, use and pay for these apps, much like Apple has a commanding role over iPhones through its App Store.)

What GPTs are available now?

OpenAI’s GPT store now offers millions of GPTs, though as with smartphone apps, most of them probably won’t interest you. A range of custom GPTs is available, including AllTrails personal trail recommendations, a Khan Academy programming tutor, a Canva design tool, a book recommender, a fitness trainer, a Laundry Buddy clothes-washing label decoder, a music theory instructor, a haiku writer and Pearl for Pets, a vet advice bot.

One person excited by GPTs is Daniel Kivatinos, co-founder of financial services company JustPaid. His team is building a GPT designed to take a spreadsheet of financial data as input and then let executives ask questions. How fast is a startup going through the money investors gave it? Why did that employee just file a $6,000 travel expense?

JustPaid hopes that GPTs will eventually be powerful enough to accept connections to bank accounts and financial software, which would make for a much more capable tool. For now, the developers are focusing on guardrails to avoid problems like hallucinations — those answers that sound plausible but are actually wrong — and on making sure the GPT answers based on the user’s data, not on general information in its AI model, Kivatinos said.

Anyone can create a GPT, at least in principle. OpenAI’s GPT editor walks you through the process with a series of prompts. As with regular ChatGPT, crafting the right prompts will get you better results.

Another notable difference from regular ChatGPT: GPTs let you upload extra data that’s relevant to your particular GPT, like a collection of essays or a writing style guide.

Some of the GPTs draw on OpenAI’s Dall-E tool for turning text into images, which can be useful and entertaining. For example, there is a coloring book picture creator, a logo generator and a tool that turns text prompts into diagrams like company org charts. OpenAI calls Dall-E a GPT.

How up to date is ChatGPT?

Not very, and that can be a problem. For example, a Bing search that used ChatGPT to process results incorrectly said OpenAI hadn’t yet released its ChatGPT Android app. Search results from traditional search engines can help “ground” AI results, and that grounding is part of the Microsoft-OpenAI partnership that feeds into ChatGPT Plus results.

GPT-4 Turbo, announced in November, is trained on data up through April 2023. But it’s nothing like a search engine whose bots crawl news sites many times a day for the latest information.

Can you trust ChatGPT responses?

Sadly, no. Well, sometimes, sure, but you need to be wary.

Large language models work by stringing words together, one after another, based on what’s probable at each step of the way. But it turns out that an LLM’s generative output works better and sounds more natural with a little randomness added to the word-selection recipe. That statistical nature underlies the criticism that LLMs are mere “stochastic parrots” rather than sophisticated systems that in some way understand the world’s complexity.
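That word-selection recipe can be sketched in a few lines of Python. The scores and words below are invented for illustration, and real models choose among tens of thousands of tokens, but the mechanism — converting model scores to probabilities, then sampling with a “temperature” knob that controls how much randomness gets added — is the standard one:

```python
import math
import random

def sample_next_word(logits, temperature=0.8):
    """Pick the next word from model scores ("logits").

    Temperature controls the randomness described above: values near
    zero make the model pick the most probable word almost every time,
    while higher values spread probability across more candidates.
    """
    words = list(logits)
    # Softmax with temperature: convert raw scores to probabilities.
    scaled = [logits[w] / temperature for w in words]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(words, weights=probs, k=1)[0]

# Toy scores a model might assign to candidate next words
# after the prompt "The traffic light turned".
logits = {"red": 4.0, "green": 3.5, "purple": 0.5}
print(sample_next_word(logits, temperature=0.2))  # almost always "red"
```

With a high temperature, “purple” occasionally gets picked — which is exactly the sort of plausible-sounding randomness that makes output feel natural but can also produce confident nonsense.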

The result of this system, combined with the steering influence of the human training, is an AI that produces results that sound plausible but that aren’t necessarily true. ChatGPT does better with information that’s well represented in training data and undisputed — for instance, red traffic signals mean stop, Plato was a philosopher who wrote the Allegory of the Cave, an Alaskan earthquake in 1964 was the largest in US history at magnitude 9.2.


Microsoft CEO Satya Nadella touted his company’s partnership with OpenAI at a November 2023 event for OpenAI developers. Microsoft uses OpenAI’s GPT large language model for its Bing search engine, Office productivity tools and GitHub Copilot programming assistant.

Stephen Shankland/CNET

ChatGPT also can solve some math problems, explain physics phenomena, write chemistry lab reports and do all kinds of other work students are supposed to complete on their own. Companies that sell anti-plagiarism software have pivoted to flagging text they believe an AI generated.

But not all educators are opposed. Some see it as a tool, akin to Google search and Wikipedia, that can help students.

“There was a time when using calculators on exams was a huge no-no,” said Alexis Abramson, dean of Dartmouth’s Thayer School of Engineering. “It’s really important that our students learn how to use these tools, because 90% of them are going into jobs where they’re going to be expected to use these tools. They’re going to walk in the office and people will expect them, being age 22 and technologically savvy, to be able to use these tools.”

ChatGPT also can help students get past writer’s block, and it can help those who struggle with writing, perhaps because English isn’t their first language, she said.

So for Abramson, students using ChatGPT to write a first draft or polish their grammar is fine. But she asks them to disclose it.

“Anytime you use it, I would like you to include what you did when you turn in your assignment,” she said. “It’s unavoidable that students will use ChatGPT, so why don’t we figure out a way to help them use it responsibly?”

Is ChatGPT coming for my job?

The threat to employment is real as managers seek to replace expensive humans with cheaper automated processes. We’ve seen this movie before: elevator operators were replaced by buttons, bookkeepers were replaced by accounting software, welders were replaced by robots. 

ChatGPT has all sorts of potential to blitz white-collar jobs: paralegals summarizing documents, marketers writing promotional materials, tax advisers interpreting IRS rules, even therapists offering relationship advice.

But so far, in part because of problems with things like hallucinations, AI companies present their bots as assistants and “copilots,” not replacements.

And so far, sentiment is more positive than negative about chatbots, according to a survey by consulting firm PwC. Of 53,912 people surveyed around the world, 52% expressed at least one good expectation about the arrival of AI, for example that AI would increase their productivity. That compares with 35% who had at least one negative thing to say, for example that AI will replace them or require skills they’re not confident they can learn.

How will ChatGPT affect programmers?

Software development is a particular area where people have found ChatGPT and its rivals useful. Trained on millions of lines of code, it internalized enough information to build websites and mobile apps. It can help programmers frame up bigger projects or fill in details.

One of the biggest fans is Microsoft’s GitHub, a site where developers can host projects and invite collaboration. Nearly a third of people maintaining GitHub projects use its GPT-based assistant, called Copilot, and 92% of US developers say they’re using AI tools.

“We call it the industrial revolution of software development,” said GitHub Chief Product Officer Inbal Shani. “We see it lowering the barrier for entry. People who are not developers today can write software and develop applications using Copilot.”

It’s the next step in making programming more accessible, she said. Programmers used to have to understand bits and bytes, then higher-level languages gradually eased the difficulties. “Now you can write coding the way you talk to people,” she said.

Still, AI programming aids have a lot to prove. Researchers from Stanford and the University of California, San Diego found in a study of 47 programmers that those with access to an OpenAI programming assistant “wrote significantly less secure code than those without access.”

AI coding tools also raise a variation of the cheating problem that worries some teachers: copying software that shouldn’t be copied, which can lead to copyright problems. That’s why Copyleaks, a maker of plagiarism detection software, offers a tool called the Codeleaks Source Code AI Detector, designed to spot AI-generated code from ChatGPT, Google Gemini and GitHub Copilot. AIs can inadvertently copy code from other sources, and the latest version is designed to spot copied code based on its semantic structure, not just verbatim text.

At least in the next five years, Shani doesn’t see AI tools like Copilot as taking humans out of programming.

“I don’t think that it will replace the human in the loop. There’s some capabilities that we as humanity have — the creative thinking, the innovation, the ability to think beyond how a machine thinks in terms of putting things together in a creative way. That’s something that the machine can still not do.”

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
