Towards A New Approach to Real-Time AI Insights Without Moving Any Data

Sid Probstein

In the Artificial Intelligence (AI) world, the traditional approach has been to “put the data into the AI.” Like the database model, you load data in and expect insights in return. In large enterprises with numerous data silos and diverse content types, however, this approach is often neither efficient nor effective.

The Challenges of the Traditional AI Approach

  1. Handling diverse content types: No single database can effectively model every type of content, particularly application data that may not fit into a single schema.
  2. Enterprise complexity: In large organizations, even onboarding employees to a single application can be a cumbersome process; copying data and running extensive ETL (Extract, Transform, Load) projects across many silos is harder still.

Problems Managing Multiple Data Sources

At SWIRL, we have taken a different approach to AI, starting with unified search as the foundation. Instead of moving data, our system queries the important sources in real time, relying on internal authorities such as Single Sign-On (SSO) to determine each user’s access rights.
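The sketch below shows what real-time federated querying with pass-through authorization can look like: the same query fans out to several sources in parallel, and the user’s SSO token travels with each request so every system of record enforces its own permissions. The endpoint URLs and response fields are illustrative assumptions, not SWIRL’s actual API.

```python
# A minimal sketch of federated, real-time querying. Endpoints and field
# names are hypothetical placeholders, not SWIRL's actual interfaces.
from concurrent.futures import ThreadPoolExecutor

import requests

SOURCES = {
    "wiki": "https://wiki.example.com/api/search",
    "crm": "https://crm.example.com/api/search",
    "tickets": "https://tickets.example.com/api/search",
}

def query_source(name: str, url: str, query: str, sso_token: str) -> list[dict]:
    """Query one system of record; the data never leaves it until query time."""
    response = requests.get(
        url,
        params={"q": query},
        headers={"Authorization": f"Bearer {sso_token}"},  # source enforces the user's rights
        timeout=10,
    )
    response.raise_for_status()
    return [{"source": name, **hit} for hit in response.json().get("results", [])]

def federated_search(query: str, sso_token: str) -> list[dict]:
    """Fan the query out to every source concurrently and merge the raw hits."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = [
            pool.submit(query_source, name, url, query, sso_token)
            for name, url in SOURCES.items()
        ]
        hits = []
        for future in futures:
            hits.extend(future.result())
    return hits
```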

Key Features of SWIRL’s Approach

  1. Data remains in the system of record or an analytic-friendly copy, along with associated domain models like taxonomies.
  2. The system re-ranks results from all sources to find the best ones, optionally presenting them to the user for “adjusting the take” (a minimal sketch of this re-ranking follows the list).
  3. The best results are then used to generate AI insights via Retrieval Augmented Generation (RAG).
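As referenced in point 2 above, here is a minimal sketch of cross-source re-ranking: each source’s scores are normalized so results from different engines become comparable, then merged into one ranked list. The scoring scheme is a deliberately simple stand-in, not a description of SWIRL’s actual ranking algorithm.

```python
# A minimal sketch of cross-source re-ranking over the merged hits from the
# federated_search() sketch above. The normalization is illustrative only.
def rerank(hits: list[dict], top_k: int = 10) -> list[dict]:
    """Normalize each source's scores to [0, 1] and return the best hits overall."""
    by_source: dict[str, list[dict]] = {}
    for hit in hits:
        by_source.setdefault(hit["source"], []).append(hit)

    merged = []
    for source_hits in by_source.values():
        scores = [h.get("score", 0.0) for h in source_hits]
        low, high = min(scores), max(scores)
        span = (high - low) or 1.0
        for h in source_hits:
            h["normalized_score"] = (h.get("score", 0.0) - low) / span
            merged.append(h)

    merged.sort(key=lambda h: h["normalized_score"], reverse=True)
    return merged[:top_k]  # the user can still "adjust the take" before RAG
```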

The Importance of Retrieval in RAG

The “R” in RAG stands for Retrieval, and it is a critical component. If knowledge workers struggle to find the right information, an internally deployed AI will face the same challenge. SWIRL addresses this by letting users query across sources without extensive IT projects up front, surfacing the best results from multiple sources, and generating AI insights from a wide range of Large Language Models (LLMs).
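To make the flow concrete, the sketch below stitches the top re-ranked results into a grounding prompt for whichever LLM a deployment uses. The prompt wording and the call_llm() hook are hypothetical placeholders; any chat-completion model could sit behind them.

```python
# A minimal sketch of the "G" step in RAG: retrieved results become the
# grounding context for the LLM. Prompt wording and call_llm() are assumptions.
def build_rag_prompt(question: str, results: list[dict]) -> str:
    """Concatenate the retrieved snippets, with their sources, ahead of the question."""
    context = "\n\n".join(
        f"[{i}] ({hit['source']}) {hit.get('title', '')}\n{hit.get('body', '')}"
        for i, hit in enumerate(results, start=1)
    )
    return (
        "Answer the question using only the sources below. "
        "Cite sources by their [number].\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\n"
    )

# Usage, assuming federated_search() and rerank() from the earlier sketches:
#   hits = rerank(federated_search("Q3 churn drivers", sso_token))
#   answer = call_llm(build_rag_prompt("What drove churn in Q3?", hits))
```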

The Role of Reader LLMs

SWIRL’s ability to find the best results across sources in real time is made possible by “Reader LLMs.” These unsung heroes of information retrieval play a crucial role in the system’s effectiveness; the next post will cover them in more detail.
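As a rough illustration of what a reader-style model does, the sketch below uses a cross-encoder from the sentence-transformers library to re-score each (query, passage) pair directly, which is far more precise than raw keyword scores. Whether this mirrors SWIRL’s Reader LLMs is a question for that follow-up post; treat it as one common way to implement the idea.

```python
# One common way to build a reader-style re-ranker: a cross-encoder scores each
# (query, passage) pair directly. The model choice is illustrative; this is not
# a claim about which models SWIRL's Reader LLMs actually use.
from sentence_transformers import CrossEncoder

def reader_rerank(query: str, hits: list[dict], top_k: int = 10) -> list[dict]:
    """Re-score hits with a small reader model and keep the most relevant ones."""
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = model.predict([(query, hit.get("body", "")) for hit in hits])
    for hit, score in zip(hits, scores):
        hit["reader_score"] = float(score)
    return sorted(hits, key=lambda h: h["reader_score"], reverse=True)[:top_k]
```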

SWIRL’s innovative approach to AI, based on Retrieval Augmented Generation, addresses the challenges faced by traditional AI architectures in large enterprises. By leveraging unified search, real-time querying, and Reader LLMs, SWIRL enables organizations to generate valuable AI insights from diverse data sources without requiring extensive data movement or IT projects.