AI Is Now Part of Popular Culture, Thanks to ChatGPT
In the past few months, ChatGPT buzz has taken over my tech-heavy social feeds. On TikTok and Twitter I follow a lot of coders and content creators, and most of them are raving about how OpenAI's ChatGPT is changing their fields. Their favorite refrain: "This changes everything." Now that I've tried it myself (more on that later) and read more about it, I agree with them.
As someone who keeps up with technology and AI, the research-preview launch of ChatGPT shouldn't have surprised me. My news feed has been full of stories about AI tools like GPT-3, GitHub Copilot, DALL-E, Stable Diffusion, and more for a long time. But until now, these technologies required a certain level of skill and tooling to set up, which meant that only tech experts or determined enthusiasts could use them. ChatGPT puts this technology in the hands of anyone willing to sign up for a free account.
What makes it so popular?
No other generative AI app has gained popularity and spread as quickly as ChatGPT has.
It has been the subject of countless jokes and was a major topic of conversation last month at the World Economic Forum in Davos, Switzerland. Baidu, one of China's biggest tech companies, has announced its own version, called Ernie Bot.
OpenAI CEO Sam Altman tweeted on December 5 that 1 million people had signed up for the chatbot in the five days after it launched. According to a UBS note last week, ChatGPT reached 100 million monthly active users by January, just two months after its release, making it the fastest-growing consumer app in history.
For comparison, TikTok took nine months to reach 100 million users, and Instagram took two and a half years.
According to figures from Similarweb, January 31 was ChatGPT's busiest day ever, with a record 28 million visits to its website. That was 165% more than a month earlier.
AI: How to Explain the Unexplainable
In a way, ChatGPT's success makes it easier for me to explain to friends and family what we do. As an AI company, we make money by applying machine learning to business problems and to projects that can change the world. When I tell this to my wife's aunt Susan, she nods and says, "Ohhh," but her eyes glaze over. When I instead explain, "For instance, we use natural language processing and reinforcement learning to build expert systems that can improve outcomes, like the ones behind ChatGPT," we immediately reach a common understanding (to a degree).
People I talk to about ChatGPT tend to feel one of two ways:
Thrilled and hopeful:
This includes both new teachers and
Freaked out and scared:
This includes experienced writers, content creators, and programmers. They know that ChatGPT's results are frequently incorrect or incomplete and may violate the intellectual property rights of other creators, yet the tool does not say where the information came from or give credit. Even though ChatGPT's flaws may be reassuring ("It can't do my job yet!"), they worry that many people will accept "good enough" answers, diluting the expertise they have fought so hard to build.
The large language model behind ChatGPT was trained on a wide range of written material, including books (fiction and nonfiction), web pages, social media, and scientific papers. The model has many paths to predicting a plausible answer to almost any question. But it doesn't cite sources or explain how it arrived at an answer, which makes it possible for the model to inadvertently copy from another source, or to give an incorrect or biased answer due to hidden factors in its training data.
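To make the idea of "predicting an answer" concrete, here is a minimal, purely illustrative sketch: a toy bigram model that learns which word tends to follow which, then samples a continuation. Everything in it (the tiny corpus, the function names) is invented for illustration; a real large language model is a transformer trained on billions of documents, but the core loop — predict the next token from what came before, with no record of where the patterns originated — is the same.

```python
import random
from collections import defaultdict

# Toy training corpus (a real model trains on billions of documents).
corpus = (
    "the model predicts the next word . "
    "the model gives no sources . "
    "the answer sounds confident ."
).split()

# Count which words follow which: a bigram table, the simplest
# possible "language model".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Sample a continuation one word at a time from the bigram counts."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is fluent-sounding recombination of the training text, with no pointer back to which sentence any word came from — a miniature version of why ChatGPT can't cite its sources.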
How does ChatGPT affect support for self-service?
One of the first things I did with my new ChatGPT account was ask it to answer some questions from our community, the biggest place for practitioners to share information. I wanted to know whether this AI was coming for my job.
For all I know, the material on our community site may have been used to train the GPT-3 models. Our friends at StackOverflow, the world's biggest general-purpose programming question-and-answer site, took the bold step of banning GPT-generated replies from their boards. We have not done the same yet, since users have not been posting AI-generated answers in our community.
We are all training the machine.
I'm not picking apart ChatGPT's replies to technical questions to be mean. The ChatGPT team knowingly shipped a model that can give wrong answers; they could have suppressed those by requiring higher confidence before responding, but then it wouldn't have attempted nearly as many questions. Despite its problems, I think it's a great step forward, and I know things will only get better from here.