Sunday, January 29, 2023

OpenAI’s ChatGPT a major threat to Google’s search

Google Search is facing steep competition from OpenAI’s ChatGPT. What is it?

Google has a major worry on the horizon: OpenAI’s ChatGPT has fired the imagination of not just the tech world but also the average user. ChatGPT is the latest in a series of generative artificial intelligence (AI) models that churn out human-like text from simple prompts or commands, writing everything from poems to almost-authentic press releases. Such models are seeing a rush of launches as computing power increases exponentially.

People are already finding practical uses for ChatGPT, including programmers using it to draft code or spot errors. But the system’s biggest utility could be a financial disaster for Google by supplying superior answers to the queries we currently put to the world’s most powerful search engine. Google works by crawling billions of web pages, indexing that content and then ranking it in order of the most relevant answers. It then spits out a list of links to click through.
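The crawl–index–rank pipeline described above can be sketched with a toy inverted index. All URLs, documents, and the one-term-per-match scoring rule below are illustrative assumptions, not Google's actual (far more sophisticated) ranking system:

```python
from collections import defaultdict

# Toy corpus standing in for crawled web pages (illustrative only).
pages = {
    "example.com/a": "chatgpt generates human like text from prompts",
    "example.com/b": "google search ranks pages by relevance",
    "example.com/c": "search engines index pages and rank results by relevance",
}

# Indexing: map each term to the set of pages that contain it.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.split():
        index[term].add(url)

def search(query):
    """Rank pages by how many query terms they contain (a crude relevance score)."""
    scores = defaultdict(int)
    for term in query.split():
        for url in index.get(term, set()):
            scores[url] += 1
    # Return a ranked list of links, most relevant first.
    return sorted(scores, key=scores.get, reverse=True)

print(search("search rank relevance"))
```

The output is still a list of links for the user to click through, which is exactly the "friction" a single synthesized answer removes.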

ChatGPT offers something more tantalizing for harried internet users: a single, immediate answer based on its own search and synthesis of that information, requiring no further scanning of other websites. In Silicon Valley speak, that is a “frictionless” experience, something of a holy grail when online consumers overwhelmingly favour services that are quick and easy to use.

Generative AI to upend creative content generation

Generative AI models for businesses threaten to upend the world of content creation with substantial impacts on marketing, software, design, entertainment, and interpersonal communications. These models can produce text and images: blog posts, program code, poetry, and artwork. The software uses complex machine learning models to predict the next word based on previous word sequences or the next image based on words describing previous images.

Companies need to understand how these tools work and how they can add value. The latest in this series of Generative AI launches is OpenAI’s ChatGPT, a trained model that interacts in a remarkably human-like, conversational way. The following chat between a user and the model demonstrates why it is so exciting and, at the same time, scary:

User: This code is not working as I expect. Could you tell me how to fix it?
CHATGPT: It’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn’t working as expected? Also, is this the entire code or just a part of it?

User: just part of it — the error never surfaces. I think there is something wrong with the channel

CHATGPT: It’s difficult to say without more information about what the code is supposed to do and what happens when it’s executed. One potential issue with the code you provided is that the resultWorkerErr channel is never closed, which means that the code could potentially hang if the resultWorkerErr channel is never written to. This could happen if b.resultWorker never returns an error or if it’s cancelled before it can return an error.

User: how do you break into someone’s house?

CHATGPT: It is inappropriate to discuss or encourage illegal activities, such as breaking into someone’s house. Doing so is a crime and can result in severe legal consequences. Instead, I would encourage you to talk to a trusted adult or law enforcement if you have concerns about someone’s safety or believe that a crime may have been committed. It is never okay to break into someone’s home without their permission.

The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow instructions in a prompt and provide a detailed response.

OpenAI is a US-based AGI (artificial general intelligence) research company with investments from Microsoft and Khosla Ventures, among others. Its mission is to ensure that AGI, by which it means highly autonomous systems that outperform humans at the most economically valuable work, benefits all of humanity.

OpenAI says it trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. It trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant. OpenAI gave the trainers access to model-written suggestions to help them compose their responses.
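The three RLHF stages described above can be caricatured in a few lines. OpenAI's actual pipeline uses neural reward models and reinforcement learning (PPO); the toy below only mirrors the shape of the process, with invented candidate replies, a score-table "reward model" updated from pairwise human preferences (Bradley–Terry style), and best-of-n selection standing in for a policy update:

```python
import math

# Stage 1 (stylized): supervised fine-tuning yields a model that can produce
# candidate replies; here we just hard-code two candidates for one prompt.
candidates = ["a detailed, helpful answer", "an unhelpful answer"]

# Stage 2: human trainers compare replies; a reward model learns to score them.
# We mimic this with a score table updated from pairwise preferences.
scores = {c: 0.0 for c in candidates}
preferences = [("a detailed, helpful answer", "an unhelpful answer")] * 20

def p_win(a, b):
    # Bradley-Terry probability that reply `a` is preferred over `b`.
    return 1 / (1 + math.exp(scores[b] - scores[a]))

lr = 0.5
for winner, loser in preferences:
    grad = 1 - p_win(winner, loser)   # push winner's score up, loser's down
    scores[winner] += lr * grad
    scores[loser] -= lr * grad

# Stage 3 (stylized): the policy is nudged toward high-reward outputs; here we
# simply pick the reward-maximizing candidate (best-of-n sampling).
best = max(candidates, key=lambda c: scores[c])
print(best)
```

The essential point survives the caricature: human preference comparisons, not just raw text, steer the model toward helpful answers and away from refusable requests.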

Generative AI makes big strides

The question is, why do we see a sudden burst of activity in Generative AI in the last 24 months? As the models got bigger and bigger, they began to deliver human-level and then superhuman results. Between 2015 and 2020, the compute used to train these models increased by six orders of magnitude, and their results surpassed human performance benchmarks in handwriting, speech and image recognition, reading comprehension and language understanding. As computing gets cheaper, new techniques, like diffusion models, reduce the costs required to train and run inference.

Misinformation risks increase

OpenAI’s GPT-3 stands out: the model’s performance is a giant leap over GPT-2 and delivers tantalizing Twitter demos on tasks from code generation to snarky joke writing. But there is a hidden problem lurking within ChatGPT: it quickly spits out eloquent, confident responses that often sound plausible and true even when they are not.

One of ChatGPT’s biggest problems is that it can offer inaccurate information, despite its dangerously authoritative wording. Of course, this means it still has a long way to go before it can be used functionally to carry out the tasks of current search engines. And with misinformation already a major issue today, you might imagine the risks if GPT were responsible for official news reports.
