diff --git a/docusaurus/docs/getting-started.md b/docusaurus/docs/getting-started.md
new file mode 100644
index 0000000..db159a8
--- /dev/null
+++ b/docusaurus/docs/getting-started.md
@@ -0,0 +1,302 @@
+# Getting Started with Guardrails JS
+
+## Installation
+
+See the [installation](/docs/installation) page for instructions on downloading the latest and pre-release versions as well as any requirements.
+
+## Create A Simple Guard Without an LLM
+
+For your first Guardrails application, we'll create a simple guard that checks if a word is between 1 and 10 characters long. If it's longer than 10 characters, we'll ask Guardrails to "fix" the output by truncating it.
+
+The first pass of this code will test the guard against hard-coded values. In the next step, you'll connect the code to an LLM and test its output.
+
+1. Save the following code to a file named `test-gr.js`.
+
+```javascript
+import assert from 'node:assert';
+import process from 'node:process';
+import { Guard, Validators, exit } from '@guardrails-ai/core';
+
+process.on('exit', (code) => {
+  console.log(`About to exit with code: ${code}`);
+  exit();
+});
+
+async function main () {
+  try {
+    const guard = await Guard.fromString(
+      [await Validators.ValidLength(1, 10, 'fix')],
+      {
+        description: 'A word.',
+        prompt: 'Generate a single word with a length between 1 and 10.'
+      }
+    );
+
+    const firstResponse = await guard.parse('Hello World!');
+    console.log("first response: ", JSON.stringify(firstResponse, null, 2));
+    assert.equal(firstResponse.validationPassed, true);
+    assert.equal(firstResponse.validatedOutput, 'Hello Worl');
+    assert.equal(guard.history.at(0).status, 'pass');
+
+    const secondResponse = await guard.parse('Hello World 2!');
+    console.log("second response: ", JSON.stringify(secondResponse, null, 2));
+    assert.equal(secondResponse.validationPassed, true);
+    assert.equal(secondResponse.validatedOutput, 'Hello Worl');
+    assert.equal(guard.history.at(1).status, 'pass');
+
+    process.exit(0);
+  } catch (error) {
+    console.error(error);
+    process.exit(1);
+  }
+}
+await main();
+```
+
+2. Run the code with the following command:
+
+```bash
+node ./test-gr.js
+```
+
+3. Examine the result, which will resemble the output below. Notice that Guardrails "fixed" the output for us by truncating each phrase down to exactly 10 characters.
+
+```
+first response: {
+  "rawLlmOutput": "Hello World!",
+  "validatedOutput": "Hello Worl",
+  "validationPassed": true,
+  "error": null
+}
+second response: {
+  "rawLlmOutput": "Hello World 2!",
+  "validatedOutput": "Hello Worl",
+  "validationPassed": true,
+  "error": null
+}
+```
+
+### Explanation of the Guardrails JS Code
+
+The code above creates a Guard that you can use to wrap a call to an LLM. In this case, your Guard will examine the response from an LLM to ensure that it followed your instructions to generate a word of between one and 10 characters in length.
+
+```javascript
+try {
+  const guard = await Guard.fromString(
+    [await Validators.ValidLength(1, 10, 'fix')],
+    {
+      description: 'A word.',
+      prompt: 'Generate a single word with a length between 1 and 10.'
+    }
+  );
+```
+
+In this case, you specify validation by creating your validators in JavaScript. You can also create a validation schema by loading a RAIL (Reliable AI Markup Language) specification.
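+
+For example, a guard can be built directly from a RAIL file with `Guard.fromRail()`. The snippet below is a minimal sketch; the file name is just a placeholder for your own RAIL spec.
+
+```javascript
+import { Guard } from '@guardrails-ai/core';
+
+// Load the validation schema from a RAIL specification file (placeholder path).
+const railGuard = await Guard.fromRail('./my-railspec.rail');
+```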
+
+When you define your guard, you can also specify what should happen if the LLM fails to adhere to it. This is called an `on_fail` policy. In the `Guard.fromString()` call above, the third argument to the validator sets the `on_fail` policy to `fix`. In other words, if the output is longer than 10 characters, you want your guard to truncate it.
+
+```javascript
+    const guard = await Guard.fromString(
+      [await Validators.ValidLength(1, 10, 'fix')],
+```
+
+In the current code, instead of wrapping an LLM call directly, you simulate an LLM response by passing a hard-coded string to the `parse()` method on your Guard:
+
+```javascript
+const firstResponse = await guard.parse('Hello World!');
+```
+
+The guard generates additional context in the prompt to enforce whatever limits your validators specify. In this simple example, that means restricting the output to a string of 10 characters or less. More complex guards are also possible, such as testing the output of an LLM's attempt to [extract information from an unstructured PDF file](https://www.guardrailsai.com/blog/ai-information-retrieval-guardrails).
+
+## Validating LLM Output
+
+Now let's change the code to connect to OpenAI and examine its output.
+
+1. Install OpenAI for Node.js:
+
+```bash
+npm install --save openai
+```
+
+2. Log in to OpenAI and [create an API key](https://platform.openai.com/docs/quickstart) to use with your code.
+
+3. Set the OpenAI key as an environment variable. For the Bash shell, you can use an export command in your `.bash_profile`:
+
+```bash
+export OPENAI_API_KEY=
+```
+
+4. Save the following code to the file `test-openai.js`:
+
+```javascript
+import assert from 'node:assert';
+import process from 'node:process';
+import { Guard, Validators, exit } from '@guardrails-ai/core';
+import OpenAI from "openai";
+
+const openai = new OpenAI();
+
+process.on('exit', (code) => {
+  console.log(`About to exit with code: ${code}`);
+  exit();
+});
+
+async function main () {
+  try {
+    const guard = await Guard.fromString(
+      [await Validators.ValidLength(1, 10, 'fix')],
+      {
+        description: 'A word.',
+        prompt: 'Generate a single word with a length between 1 and 10.'
+      }
+    );
+
+    const completion = await openai.chat.completions.create({
+      messages: [{ role: "system", content: "You are a helpful assistant. Please generate a word with a length of between 1 and 10 characters. Do not exceed 10 characters in length. Return only this word in your output." }],
+      model: "gpt-3.5-turbo",
+    });
+
+    console.log(completion.choices[0]);
+
+    const firstResponse = await guard.parse(completion.choices[0].message['content']);
+    console.log("first response: ", JSON.stringify(firstResponse, null, 2));
+    assert.equal(firstResponse.validationPassed, true);
+    assert.equal(guard.history.at(0).status, 'pass');
+
+    const completion2 = await openai.chat.completions.create({
+      messages: [{ role: "system", content: "You are a helpful assistant. Please generate a word with a length of between 11 and 20 characters. Do not exceed 20 characters in length. Do not include punctuation." }],
+      model: "gpt-3.5-turbo",
+    });
+
+    console.log(completion2.choices[0]);
+
+    const secondResponse = await guard.parse(completion2.choices[0].message['content']);
+    console.log("second response: ", JSON.stringify(secondResponse, null, 2));
+    assert.equal(secondResponse.validationPassed, true);
+    assert.equal(guard.history.at(1).status, 'pass');
+
+    process.exit(0);
+  } catch (error) {
+    console.error(error);
+    process.exit(1);
+  }
+}
+await main();
+```
+
+5. Run the code with the following command:
+
+```bash
+node ./test-openai.js
+```
+
+6. Examine the output of the program, which will look something like this:
+
+```
+{
+  index: 0,
+  message: { role: 'assistant', content: 'Wonderful' },
+  logprobs: null,
+  finish_reason: 'stop'
+}
+first response: {
+  "rawLlmOutput": "Wonderful",
+  "validatedOutput": "Wonderful",
+  "validationPassed": true,
+  "error": null
+}
+second response: {
+  "rawLlmOutput": "Granddaughter",
+  "validatedOutput": "Granddaugh",
+  "validationPassed": true,
+  "error": null
+}
+```
+
+You'll note that Guardrails "fixed" the second word that was greater than 10 characters by truncating it. This probably isn't the behavior you desire, so let's fix it.
+
+7. To cause Guardrails to fail and return an error when an LLM returns an incorrect response, change the `Guard.fromString()` call [to use a different on_fail policy](https://www.guardrailsai.com/docs/hub/concepts/on_fail_policies), replacing `fix` with `noop`:
+
+```
+const guard = await Guard.fromString(
+  [await Validators.ValidLength(1, 10, 'noop')],
+```
+
+8. Run the application again with `node ./test-openai.js`. You should now see error output due to a failed assertion.
+
+```
+second response: {
+  "rawLlmOutput": "Resplendence",
+  "validationPassed": false,
+  "error": null
+}
+AssertionError [ERR_ASSERTION]: false == true
+    at main (file:///home/jayallen/guardrails/test-openai.js:43:12)
+    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
+    at async file:///home/jayallen/guardrails/test-openai.js:52:1 {
+  generatedMessage: true,
+  code: 'ERR_ASSERTION',
+  actual: false,
+  expected: true,
+  operator: '=='
+}
+```
+
+### Explanation of the Code
+
+There's a key difference in how the Python and JavaScript Guardrails APIs work. In the Python Guardrails API, you can pass the LLM you want to call directly to Guardrails, which calls it on your behalf. With the JavaScript API, you call the LLM yourself and pass the result to the Guardrails `parse()` method for validation.
+
+To call OpenAI, you add the following to the top of your Node file:
+
+```javascript
+import OpenAI from "openai";
+
+const openai = new OpenAI();
+```
+
+Next, within the `main()` function, you call the OpenAI API's `completions.create()` method:
+
+```javascript
+  const completion = await openai.chat.completions.create({
+    messages: [{ role: "system", content: "You are a helpful assistant. Please generate a word with a length of between 1 and 10 characters. Do not exceed 10 characters in length. Return only this word in your output." }],
+    model: "gpt-3.5-turbo",
+  });
+```
+
+You can print out the result of this call with `console.log(completion.choices[0]);` to see the format of the OpenAI response. It should look something like this:
+
+```json
+{
+  index: 0,
+  message: { role: 'assistant', content: 'Wonderful' },
+  logprobs: null,
+  finish_reason: 'stop'
+}
+```
+
+You can supply this to Guardrails for validation like so. Note that, in the code below, we've removed the string equality assertions, since we don't know what word the LLM will output.
+
+```javascript
+  const firstResponse = await guard.parse(completion.choices[0].message['content']);
+  console.log("first response: ", JSON.stringify(firstResponse, null, 2));
+  assert.equal(firstResponse.validationPassed, true);
+  assert.equal(guard.history.at(0).status, 'pass');
+```
+
+You can do something similar for the second set of assertions. In this block, we purposefully ask the LLM for a word longer than 10 characters:
+
+```javascript
+  const completion2 = await openai.chat.completions.create({
+    messages: [{ role: "system", content: "You are a helpful assistant. Please generate a word with a length of between 11 and 20 characters. Do not exceed 20 characters in length. Do not include punctuation." }],
+    model: "gpt-3.5-turbo",
+  });
+
+  const secondResponse = await guard.parse(completion2.choices[0].message['content']);
+  console.log("second response: ", JSON.stringify(secondResponse, null, 2));
+  assert.equal(secondResponse.validationPassed, true);
+  assert.equal(guard.history.at(1).status, 'pass');
+```
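+
+In a real application, you would probably branch on the validation outcome rather than assert on it. The sketch below shows one way to do that, reusing the `guard` and `completion` objects from the example above; the property names match the fields shown in the output earlier.
+
+```javascript
+const outcome = await guard.parse(completion.choices[0].message['content']);
+
+if (outcome.validationPassed) {
+  // Safe to use the validated (and possibly fixed) output downstream.
+  console.log('Validated word:', outcome.validatedOutput);
+} else {
+  // With the 'noop' policy the raw LLM output is left untouched; decide how to recover here.
+  console.warn('Validation failed for:', outcome.rawLlmOutput);
+}
+```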
\ No newline at end of file
diff --git a/docusaurus/docs/installation.md b/docusaurus/docs/installation.md
new file mode 100644
index 0000000..7b1bfab
--- /dev/null
+++ b/docusaurus/docs/installation.md
@@ -0,0 +1,29 @@
+# Installing Guardrails AI for JavaScript
+
+## Prerequisites
+
+Before installing Guardrails JS, be sure you've installed Python 3 on your system. The current implementation works via an I/O bridge to the underlying Python library, so both runtimes are required.
+
+## Installation via NPM (latest)
+
+You can install Guardrails JS like any other Node package using NPM:
+
+```
+npm i @guardrails-ai/core
+```
+
+## Releases
+
+Currently in beta, Guardrails AI maintains both stable and pre-release versions.
+
+### Install Pre-Release Version
+
+To install a pre-release version of Guardrails JS, install the package at the intended semantic version, as shown in the example below.
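+
+For example, to install a specific pre-release build, pin its version tag; the version number below is purely illustrative:
+
+```
+npm i @guardrails-ai/core@0.1.0-beta.0
+```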
+
+### Install from GitHub
+
+Installing directly from GitHub is useful when a release has not yet been cut for changes that have been pushed to a branch you need. Non-released versions may include breaking changes and may not yet have full test coverage. We recommend using a released version whenever possible.
+
+```
+npm i git+https://github.com/guardrails-ai/guardrails-js.git
+```
\ No newline at end of file
diff --git a/docusaurus/docs/intro.md b/docusaurus/docs/intro.md
index 7f534d7..2fa4d4c 100644
--- a/docusaurus/docs/intro.md
+++ b/docusaurus/docs/intro.md
@@ -1,98 +1,18 @@
-# guardrails-js
-A Javascript wrapper for guardrails-ai.
+# Guardrails AI (JavaScript)
-This library contains limited support for using [guardrails-ai](https://pypi.org/project/guardrails-ai/) in javascript.
+## What is Guardrails?
-The following methods and properties are supported:
-* Guard.fromRail
-* Guard.fromRailString
-* Guard.fromString
-* Guard.parse (without an `llm_api`)
-* Guard.history
+Guardrails JS is a JavaScript framework that helps build reliable AI applications by performing two key functions:
-The key differences between this wrapper and the python library are as follows:
-1. All methods and properties are in `camelCase` instead of `snake_case`
-1. No support for custom validators
-1. No support for re-asking (though you can perform reasks yourself outside of the library using `ValidationOutcome.reask` or `guard.history.at(#).reask_prompts` when defined)
-1. LLM calls must be made by the user and the text response passed into parse
+* Guardrails runs Input/Output Guards in your application that detect, quantify and mitigate the presence of specific types of risks. To look at the full suite of risks, check out Guardrails Hub.
+* Guardrails helps you generate structured data from Large Language Models (LLMs).
-In addition to above, this library also supports the readonly properties on the [ValidationOutcome class](https://www.guardrailsai.com/docs/hub/api_reference_markdown/validation_outcome) as well as readonly versions of the History & Logs related classes like [Call](https://www.guardrailsai.com/docs/api_reference_markdown/history_and_logs#call-objects), [Iteration](https://www.guardrailsai.com/docs/api_reference_markdown/history_and_logs#iteration-objects), etc..
+Guardrails JS is built off of the core Guardrails implementation written in Python and leverages its codebase. You can read a full list of the differences between Guardrails JS and Guardrails Python [in the GitHub README](https://github.com/guardrails-ai/guardrails-js).
-See the JS docs [here](/docs/modules.md)
+![How Guardrails works vs. using a Large Language Model (LLM) directly](/img/with_and_without_guardrails.svg "How Guardrails works vs. using a Large Language Model (LLM) directly.")
-## Installation
-```sh
-npm i @guardrails-ai/core
-```
+## Guardrails Hub
-## Example
-```js
-import { Guard, Validators } from '@guardrails-ai/core';
+Guardrails Hub is a collection of pre-built measures of specific types of risks (called **validators**). Multiple validators can be combined together into Input and Output Guards that intercept the inputs and outputs of LLMs. Visit Guardrails Hub to see the full list of validators and their documentation.
-const guard = await Guard.fromRail('./my-railspec.rail');
-
-const messages = ['Hello World!', 'Goodbye World!'];
-
-const response = await guard.parse(
-  'Hello World!',
-  {
-    promptParams: { 'messages': messages }
-  }
-);
-
-console.log(response);
-```
-
-## Caveats and Oddities
-The current version of the library uses a IO bridge so both javascript and python3 must be available in the environment.
-
-For the best experience, you may also need to explicitly call for the bridge to exit at the end of the node process. We export an `exit` function to serve this purpose.
-
-
-Below is a simple end-to-end test we use that demonstrates the concepts above:
-
-```js
-import assert from 'node:assert';
-import process from 'node:process';
-import { Guard, Validators, exit } from '@guardrails-ai/core';
-
-process.on('exit', (code) => {
-  console.log(`About to exit with code: ${code}`);
-  exit();
-});
-
-async function main () {
-  try {
-    const guard = await Guard.fromString(
-      [await Validators.ValidLength(1, 10, 'fix')],
-      {
-        description: 'A word.',
-        prompt: 'Generate a single word with a length betwen 1 and 10.'
-      }
-    );
-
-    const firstResponse = await guard.parse('Hello World!');
-    console.log("first response: ", JSON.stringify(firstResponse, null, 2));
-    assert.equal(firstResponse.validationPassed, true);
-    assert.equal(firstResponse.validatedOutput, 'Hello Worl');
-    assert.equal(guard.history.at(0).status, 'pass');
-
-    const secondResponse = await guard.parse('Hello World 2!');
-    console.log("second response: ", JSON.stringify(secondResponse, null, 2));
-    assert.equal(secondResponse.validationPassed, true);
-    assert.equal(secondResponse.validatedOutput, 'Hello Worl');
-    assert.equal(guard.history.at(1).status, 'pass');
-
-    process.exit(0);
-  } catch (error) {
-    console.error(error);
-    process.exit(1);
-  }
-}
-await main();
-```
-
-We run this with the following command:
-```sh
-node e2e.test.js
-```
\ No newline at end of file
+![Guardrails Hub - a small sample of the validators available](/img/guardrails_hub.gif "Guardrails Hub")
\ No newline at end of file
diff --git a/docusaurus/sidebars.js b/docusaurus/sidebars.js
index 965350d..3f5cf7c 100644
--- a/docusaurus/sidebars.js
+++ b/docusaurus/sidebars.js
@@ -16,6 +16,8 @@ const sidebars = {
   // By default, Docusaurus generates a sidebar from the docs folder structure
   jsDocsSidebar: [
     { id: 'intro', type: 'doc', label: "Guardrails JS" },
+    { id: 'installation', type: 'doc', label: "Installation" },
+    { id: 'getting-started', type: 'doc', label: "Getting Started" },
     { type: 'category', label: 'How-To Guides', collapsed: true, items: [
       { type: 'autogenerated', dirName: 'how-to-guides' },
     ]},
diff --git a/docusaurus/static/img/guardrails_hub.gif b/docusaurus/static/img/guardrails_hub.gif
new file mode 100644
index 0000000..e5be8d3
Binary files /dev/null and b/docusaurus/static/img/guardrails_hub.gif differ
diff --git a/docusaurus/static/img/with_and_without_guardrails.svg b/docusaurus/static/img/with_and_without_guardrails.svg
new file mode 100644
index 0000000..e2aee84