Beyond the Search Box: Large Language Models (LLMs) vs. Search Engines, and Unlocking Their Potential
Hey there, coding wizards and curious non-coders! If you’ve been exploring the realm of Large Language Models (LLMs) like ChatGPT or Bard, you’re in for a treat. In this article, we’re going to unravel the magic behind LLMs and compare them to our trusty old search engines.
Part 1: How Do Traditional Search Engines Work?
When you ask a traditional search engine a question, it searches through its vast index of web content and “provides you with a ranked list of web pages that contain relevant information”.
To do this, search engines follow a three-step process:
· Crawling: Think of web crawlers as diligent scouts combing the web for new and updated pages.
· Indexing: Once the crawlers return, the search engine organizes all the information they found during the crawling phase by creating an index, like a giant library catalog.
· Ranking: This is the final step, where the search engine delivers a ranked list of web pages based on your query, ensuring that the most relevant results appear at the top.
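The index-and-rank steps above can be sketched in a few lines of Python. This is only an illustration: the three “pages” are invented, crawling is skipped (the pages are assumed already fetched), and the simple term-frequency score stands in for the many real ranking signals a search engine uses.

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page ids containing it (the 'library catalog')."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

def rank(query, pages, index):
    """Order candidate pages by how often the query words occur in them."""
    words = query.lower().split()
    candidates = set().union(*(index.get(w, set()) for w in words))
    def score(page_id):
        text = pages[page_id].lower().split()
        return sum(text.count(w) for w in words)
    return sorted(candidates, key=score, reverse=True)

# A tiny invented "web" of three pages:
pages = {
    "p1": "large language models generate text",
    "p2": "search engines rank web pages",
    "p3": "web crawlers index web pages for search",
}
index = build_index(pages)
results = rank("search web pages", pages, index)
```

Notice that the engine never *understands* the query; it only matches and counts words, which is exactly the contrast with LLMs drawn in the next part.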
Part 2: How Do Large Language Models Work?
Generative AI is a broad category of artificial intelligence that encompasses various techniques for generating content, including text, images, music, and more.
Large Language Models (LLMs) are a specific type of generative AI model that focuses on generating human-like text.
LLMs work by learning from a massive amount of training text data. They don’t just search based on the query; they reason about and understand the context of the query.
How LLMs Work: LLMs are designed to grasp human language, taking context into account. Instead of linearly searching for a direct answer and ranking everything, they employ logic and inference to “predict and generate the next word in response to your query”. The basic structure of these models consists of nodes and connections, and the model’s task is to map the relationships between words, with the output being the word most likely to come next.
The Process: The model takes in a sequence of input tokens and generates a sequence of output tokens. This involves repeatedly predicting the next token and appending it to the output sequence until a stopping criterion is met (e.g., reaching a maximum length or generating an end token).
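That generation loop can be sketched as follows. The bigram probability table here is an invented stand-in for a trained model’s learned probabilities, and `<end>` plays the role of the end token; real LLMs compute these probabilities with a large neural network, not a lookup table.

```python
# Invented next-word probabilities, standing in for a trained model:
bigram_probs = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "<end>": 0.3},
    "sat":  {"down": 0.6, "<end>": 0.4},
    "down": {"<end>": 1.0},
}

def generate(prompt_tokens, max_length=10):
    """Repeatedly predict the next token and append it, until a stop condition."""
    tokens = list(prompt_tokens)
    while len(tokens) < max_length:          # stopping criterion: maximum length
        dist = bigram_probs.get(tokens[-1], {"<end>": 1.0})
        next_token = max(dist, key=dist.get)  # greedy pick: most likely next token
        if next_token == "<end>":             # stopping criterion: end token
            break
        tokens.append(next_token)
    return tokens

out = generate(["the"])
```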
Part 3: Choosing Between Search Engines and LLMs
The million-dollar question is when to use search engines and when to tap into the power of LLMs:
Search Engines: Perfect for pointed, specific, and factual questions. They’re like your go-to encyclopedias when you need precise information.
LLMs: Ideal for creative brainstorming, idea generation, and problem-solving. They understand context and simplify complex tasks. With a traditional search engine, the same task returns a list of results you still have to open and sift through to find your answer, which creates extra work. A reasoning engine surfaces much of that information directly, removing those extra steps; the process becomes faster and also much more human-friendly.
Part 4: Best Practices for Using LLM Models
Before you embark on your LLM journey, here are some key considerations:
Embrace Variation: LLMs are not designed to generate the same output every time; there’s inherent randomness or variation in what they produce. So, don’t be surprised if you get different outputs for the same query.
Master the Art of Prompting: The prompt (query) that you give to an LLM makes a big difference.
“It’s not just about asking a question; it’s about asking it well”.
Crafting a good prompt is like having a conversation with an LLM. Conversations are all about refining our understanding to build shared ground, or interacting together to solve a problem. Rather than treating a prompt as a one-off, think of it as an ongoing dialogue where you iterate and refine. If we don’t engage in ongoing conversations, ask follow-up questions, work together to solve problems, and adapt the information we receive to suit our needs, we won’t fully harness the potential and capabilities of these large language models.
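One way to picture that ongoing dialogue in code, using a hypothetical message-list format (the `role`/`content` fields are an assumption, loosely modeled on common chat APIs): each refined prompt is appended to the running conversation context rather than sent as a fresh one-off query, so the model can build on what came before.

```python
def add_turn(history, new_prompt):
    """Carry prior turns forward and append the next, refined prompt."""
    return history + [{"role": "user", "content": new_prompt}]

history = []
history = add_turn(history, "Explain how search engines rank pages.")
# ...read the answer, then refine rather than starting over:
history = add_turn(history, "Good - now compare that with how an LLM would answer.")
```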
Beware of Hallucinations: LLMs can occasionally generate factually incorrect information due to noisy training data or discrepancies between training and real-time information. Always double-check the answers you receive to ensure accuracy.
Conclusion:
In the ever-evolving landscape of technology, understanding the nuances of LLMs and search engines is essential. These tools can be your trusted companions, whether you need precise answers or want to unlock your creative potential. By adhering to best practices and remaining vigilant for the idiosyncrasies of LLMs, you’ll harness their incredible capabilities effectively. Happy exploring and reasoning!