How does Brainglue work?

A quick guide on how Brainglue works and how to get started with prompt chains.

Brainglue is a playground for large language models (LLMs) that lets you build prompt chains that yield better, more systematic responses. At first glance, Brainglue may look like just another AI prompt/response interface, no different from AI chat products such as ChatGPT or Bard. Under the hood, however, it is a powerful system for AI chaining and experimentation: you can combine different models and configure them to execute a sequence of discrete micro-tasks that together produce impressive generative AI solutions. To better understand how Brainglue works, let's explore a common use case for LLMs: text classification.

Example: Classifying Reviews in an E-Commerce Website

To follow along, you can view and clone the template we will explore in this example here: https://www.brainglue.ai/templates/clmh6klv8000r93b3th2bw7kl

Text classification is considered a trivial task for LLMs, but how simple is it to craft a system that can consistently evaluate and classify reviews? Without Brainglue, this might look like a daunting task. Sure, you can craft a prompt that classifies reviews in ChatGPT, but then what? How do you implement it on your own system? How can you guarantee that you will get consistent results from that prompt? Enter Brainglue + Prompt Chains.

Step 1: Define a Chain Variable "review"

Chain variables are an important concept in Brainglue: they let you set dynamic values that can later be supplied via the API. A chain variable can have a default value that you use to test your chain, but its real purpose is to be overridden via the API so that the chain can serve as a solution inside your application or automation. In this case, we will create a chain variable named "review" with the following review as its default value:

I purchased, at full price, this USB screen. When I received the box, it had been returned from AOC with official AOC tape on top and bottom, covering the handle. I paid full price. This company thought they could substitute.
It doesn't work. When plugged in, the Manufacturers splash screen shows, clears and the monitor goes black. I work with three other people that use this monitor; I connected to a working unit, got the AOC splash, and nothing. Note, Same Model, Same Driver, Same OS, SAME CABLE. With his attached, AOC disappears and then shows the assigned screen. With mine, nothing.
I understand Refurbished items. I also don't mind purchasing one. I will NEVER purchase from this company again; you don't "Substitute" without permission. You don't pawn used equipment as New. Ever.
THIS company thought it was Okay. I do not.
I have left accurate, if less than stellar, reviews and been contacted and told to change. I hope they try.

This review will let us test our prompts as we craft them later in the sequence.
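Brainglue's internal representation of chain variables isn't shown here, but conceptually a chain variable behaves like a default that an API caller can override. The function and dictionary names below are illustrative, not Brainglue's actual API:

```python
# Conceptual sketch: a chain variable has a default value used for testing,
# and an API caller can override it at invocation time.
# resolve_variable and the dict shapes are illustrative, not Brainglue's API.

def resolve_variable(name, defaults, overrides):
    """Return the API-supplied override if present, else the default."""
    return overrides.get(name, defaults[name])

defaults = {"review": "I purchased, at full price, this USB screen. ..."}

# Testing in the playground: no overrides, so the default value is used.
print(resolve_variable("review", defaults, {}))

# Invoking via the API: the caller's value wins over the default.
print(resolve_variable("review", defaults, {"review": "Great monitor, works perfectly."}))
```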

Step 2: Define Chain Instructions

In Brainglue, we can create a set of global instructions that apply to every prompt in the chain. This is particularly important because Brainglue doesn't automatically pass context between prompts: it's up to you to decide which outputs are passed to the next prompt in the chain and exactly how they will be used and interpreted. This is where prompting becomes a magical craft with endless possibilities. Chain instructions let you define behavior that every prompt knows about, regardless of whether you pass one prompt's output down to the next.

For our current case, we can use these instructions:

Step 3: Create a Comment Sanitization Prompt

Prompting is a subtle art that often requires systematic thinking. It would be tempting to plug the review directly into a classification prompt, but instead we will first run it through a sanitization prompt that rewrites it in a clearer, more expository way.

We do this to guarantee that we evaluate a normalized, objective re-interpretation of the review rather than the verbatim text, which might contain wording or expressions that steer the model away from deterministic behavior. The prompt we will use is the following:

Notice the usage of the @ character. In Brainglue, you can use the @ character to plug chain variables and prompt outputs into subsequent prompts. When we run the chain, this prompt will be evaluated as follows:

As you can see, the @review reference is replaced with the variable's default value; when you invoke the chain via the API, it is replaced with whatever value you pass.
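The @ substitution described above can be sketched as a simple token replacement. This is an illustrative model of the behavior, not Brainglue's actual implementation:

```python
import re

# Illustrative sketch of @-style substitution: each @name token is replaced
# with the matching variable value; unknown tokens are left untouched.

def expand(prompt, values):
    return re.sub(r"@(\w+)", lambda m: values.get(m.group(1), m.group(0)), prompt)

prompt = "Rewrite the following review in a clear, expository tone: @review"
print(expand(prompt, {"review": "It doesn't work. AOC splash screen, then black."}))
```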

Step 4: Create a Classification Prompt

Now that we have a normalized comment, we will pass it to a subsequent prompt, which will create the classification.

We will use the following prompt:

As you can see, this is a noticeably more complex prompt that uses heuristics and examples to steer the model toward the right classification behavior. At the end of the prompt, we use the @ character again, but this time to pass in the output of the sanitization prompt, referencing it by its index number: @output0.
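Putting the two steps together, a chain conceptually runs its prompts in order, storing each output so later prompts can reference it by index (@output0, @output1, and so on). The sketch below is an assumed mental model, with a placeholder standing in for the real model call:

```python
import re

# Conceptual sketch of chain execution: each prompt's output is appended to
# a list, and later prompts can reference earlier outputs as @output0, etc.
# call_llm is a stand-in, not a real model call.

def call_llm(prompt):
    # Placeholder: a real chain would send `prompt` to the configured model.
    return f"<model response to: {prompt!r}>"

def run_chain(prompts, variables):
    outputs = []
    for prompt in prompts:
        # Expand chain variables (@review) and prior outputs (@output0 ...).
        values = dict(variables, **{f"output{i}": o for i, o in enumerate(outputs)})
        expanded = re.sub(r"@(\w+)", lambda m: values.get(m.group(1), m.group(0)), prompt)
        outputs.append(call_llm(expanded))
    return outputs

outputs = run_chain(
    ["Rewrite this review clearly: @review",
     "Classify the following review as positive, negative, or mixed: @output0"],
    {"review": "It doesn't work. I will NEVER purchase from this company again."},
)
```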

And we are done!

Just like that, we have an LLM-powered system that can be invoked via an API and used to classify reviews. At this point, you can use the returned results and parse them on your end to fully understand what your customers are saying about your product and what exactly they like and dislike. As you can see, Brainglue is an incredibly flexible system that allows you to configure LLM prompts in creative ways and get productized solutions via the built-in Chain API.
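An invocation of the Chain API would amount to sending your variable overrides to an endpoint for your chain. The endpoint path and payload shape below are hypothetical; consult Brainglue's API documentation for the real contract:

```python
import json

# Hypothetical sketch of preparing a Chain API request. The URL pattern and
# body shape are illustrative assumptions, not Brainglue's documented API.

def build_request(chain_id, variables):
    return {
        "url": f"https://www.brainglue.ai/api/chains/{chain_id}/run",  # assumed path
        "body": json.dumps({"variables": variables}),
    }

req = build_request(
    "clmh6klv8000r93b3th2bw7kl",
    {"review": "Arrived on time and works great. Five stars."},
)
```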
