Helm Gen™ – From RAG to Riches

Unpacking the intricacies of Large Language Models and opening the door for Helm Gen through RAG, a fitting form of Generative AI for the business world
Dawood Patel
14 August 2024 · 7 min read

The hype and talk about LLMs and Generative AI have created much anticipation in the business world around the benefits these developments can bring to both top and bottom lines.

The challenge, though, is that like any hype cycle, it oversimplifies the inherent nuances and fragilities of implementing Gen AI within a business – the application, process and channels, to name a few.

With Helm having been at the forefront of Natural Language Processing developments for the last eight years, we have learnt a great deal – we have the scars to prove it, and the experience that goes with it. Coupled with the rapid advancements in the Generative AI space, that experience has brought us to a clear position:

"

We believe we have now identified the safest way to implement a GenAI solution within a business environment that allows organisations to realise the benefits they’ve envisaged.

Dawood Patel – Helm CEO

As many companies have found out the hard way, it's not as simple as just plugging ChatGPT or Llama 3 into your systems or processes. We also need to consider whether ‘traditional’ machine learning or well-thought-out UX will do the trick – or how we might combine all three.

Like any tech implementation, defining a problem statement is critical, as is assessing whether the tech you intend to use is actually going to solve the problem. We have seen many organisations excitedly implementing popular solutions only to rue the haste of their decision further down the line.

In situations such as the one we find ourselves in now, the confluence of service design and data science capabilities comes to the fore – skills we have been honing and mastering since 2016. I cannot stress this enough: it is crucial to understand the brand and technology landscapes before delving into the intricacies of LLMs and Generative AI. In fact, it’s so critical that we won’t start development on any such project until we’re satisfied that these factors have been considered. Only then can we, with a clear conscience, begin to build a GenAI solution.

Once that hurdle has been successfully negotiated, one needs to understand how LLMs function in the Gen AI domain – that is the key to unlocking their potential. To simplify how an LLM needs to be structured for an implementation, we need to deconstruct its building blocks:

These are the tips of many, many icebergs.

This level of sophistication requires not only skill, but a platform robust enough to manage any mode of input and marry it with the correct response. That means language detection, sentiment analysis, intent mapping, integrations and much more, as the simplified sketch below illustrates. Doing the same for other languages, like Zulu or Afrikaans, is therefore not just a simple exercise of translation; further work needs to be done to leverage the initial investment and effort.
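
To make that concrete, here is a deliberately simplified Python sketch of the kind of routing just described. It is illustrative only: the stage names (detect_language, score_sentiment, map_intent) are hypothetical, and each is a toy heuristic standing in for the trained models and integrations a production platform would actually need.

```python
# Illustrative only: a toy routing pipeline standing in for the kind of
# orchestration described above. Every stage here is a naive heuristic;
# a production platform would use trained models and real integrations.

from dataclasses import dataclass

@dataclass
class Interaction:
    text: str
    language: str
    sentiment: str
    intent: str

AFRIKAANS_MARKERS = {"asseblief", "dankie", "goeiemore"}            # toy heuristic
NEGATIVE_MARKERS = {"unhappy", "angry", "complaint", "frustrated"}  # toy heuristic
INTENT_KEYWORDS = {"balance": "check_balance", "claim": "submit_claim"}

def detect_language(text: str) -> str:
    """Crude stand-in for a real language-detection model."""
    return "af" if any(w in text.lower() for w in AFRIKAANS_MARKERS) else "en"

def score_sentiment(text: str) -> str:
    """Crude stand-in for a real sentiment model."""
    return "negative" if any(w in text.lower() for w in NEGATIVE_MARKERS) else "neutral"

def map_intent(text: str) -> str:
    """Crude stand-in for a real intent classifier."""
    for keyword, intent in INTENT_KEYWORDS.items():
        if keyword in text.lower():
            return intent
    return "fallback"

def process(text: str) -> Interaction:
    """Run one inbound message through the pipeline."""
    return Interaction(
        text=text,
        language=detect_language(text),
        sentiment=score_sentiment(text),
        intent=map_intent(text),
    )

print(process("I'm unhappy and want to submit a claim"))
```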

The training of these LLMs described above needs to cater for text-based interactions as well as alternative modalities – voice, vision or even virtual reality.

It is a big, big undertaking, and you can now see why it’s not as simple as plugging the likes of ChatGPT into your brand platforms.

We have just taken a new product to market called Helm Gen, which makes use of a pivotal advancement in the realm of LLMs called Retrieval-Augmented Generation (RAG). Once we have unpacked the problem, investigated and addressed all of the considerations above, Helm Gen uses RAG to provide a Generative AI solution that operates within the bounds of brand content, while limiting ‘hallucinations’.
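
As a rough illustration of the RAG pattern itself (not Helm Gen's actual implementation), the sketch below retrieves the pieces of brand content most relevant to a question and instructs the model to answer from that content alone. The retriever is a naive word-overlap ranking standing in for a proper vector store, call_llm is a stub, and BRAND_CONTENT is a made-up snippet set.

```python
# Illustrative only: a toy Retrieval-Augmented Generation loop. The retriever
# uses naive word overlap in place of a vector store, and call_llm is a stub
# standing in for whichever model an implementation actually uses.

BRAND_CONTENT = [
    "Helm Gen is a Generative AI product built on Retrieval-Augmented Generation.",
    "Support hours are Monday to Friday, 08:00 to 17:00.",
    "Refunds are processed within five working days of approval.",
]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank content chunks by word overlap with the query and keep the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        chunks,
        key=lambda chunk: len(query_words & set(chunk.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Constrain the model to the retrieved brand content only."""
    joined = "\n".join(f"- {chunk}" for chunk in context)
    return (
        "Answer using ONLY the brand content below. If the answer is not "
        "in the content, say you don't know.\n\n"
        f"Brand content:\n{joined}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM here.
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    context = retrieve(query, BRAND_CONTENT)
    return call_llm(build_prompt(query, context))

print(answer("When are your support hours?"))
```

Because the prompt restricts the model to the retrieved content and tells it to admit when the answer is not there, responses stay within the bounds of the brand's own material – which is exactly how RAG helps limit hallucinations.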

To be clear, the considerations I have raised here are not intended to dissuade anyone from using these GenAI solutions – we simply want to ensure that businesses understand the investment required to truly realise the benefits of Generative AI for business.