TextGenerator
Gives access to a large language model for text generation.
Memory category |
---|
Instances |
Description
A TextGenerator instance lets you use a large language model (LLM) to generate text based on a system prompt from you and a user prompt from the player. The most common use of this class and its methods is for creating interactive non-player characters (NPCs).
For example, in a survival experience, your system prompt for a talking animal might be "You are a very busy beaver. You end all statements by mentioning how you need to get back to work on your dam." Users could ask the beaver about water in the area, the size of a nearby forest, predators, etc.
The novelty of LLM responses can help create unique, delightful moments for players, but using the LLM effectively requires a bit of creativity and tuning. System prompts can be very extensive, so don't hesitate to include a long string with lots of detail.
The TextGenerator class currently only supports RCC authentication. As a result, you must use Team Test to test within your experience.
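For example, a minimal server-side sketch of the beaver NPC described above might look like the following. This assumes a TextGenerator can be created with Instance.new(); you can also pre-place one in the data model and reference it instead.

```lua
-- Server-side sketch. Assumes a TextGenerator can be created with
-- Instance.new(); alternatively, reference one pre-placed in Studio.
local textGenerator = Instance.new("TextGenerator")
textGenerator.SystemPrompt = "You are a very busy beaver. You end all statements"
	.. " by mentioning how you need to get back to work on your dam."
textGenerator.Parent = workspace

-- GenerateTextAsync yields, so wrap it in pcall to handle failures.
local ok, result = pcall(function()
	return textGenerator:GenerateTextAsync({
		UserPrompt = "Is the water around here safe to drink?",
	})
end)

if ok then
	print(result.GeneratedText)
else
	warn("Text generation failed:", result)
end
```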
History 6
- 690 Add TopP
- 690 Add Temperature
- 690 Add Seed
- 688 Add GenerateTextAsync
- 688 Add SystemPrompt
- 688 Add TextGenerator
Members 5
GenerateTextAsync
Parameters (1) | |
---|---|
request | Dictionary |

Returns (1) |
---|
Dictionary |
This method returns text generated by an LLM based on the provided system and user prompts, as well as any other optional parameters that have been set.
The request argument for this method should be a dictionary with the following structure:
Key Name | Data Type | Description | Required |
---|---|---|---|
UserPrompt | string | Prompt from the user that initiates the chat. This could be a question, statement, or command that the user wants the model to respond to. | No |
ContextToken | string | Prompt history context token containing a summarization of the previous prompt requests and responses in a conversation up to the current request. If no token is provided, a new token is generated and returned in the response. Providing a previously generated context token restores the conversation state into the current request. | No |
MaxTokens | number | The maximum number of tokens in the response generated by the model, expected to be an integer whose value is at least 1. This limits the length of the response, preventing overly long or incomplete answers. Non-integral numbers are rounded to the nearest integer. | No |
This method returns a dictionary with the following structure:
Key Name | Data Type | Description |
---|---|---|
GeneratedText | string | The generated response. |
ContextToken | string | A token containing a summarization of the previously passed context token and the current generated response. Pass this token into subsequent requests to maintain the state of the conversation; each subsequent request generates a new token with updated conversation state. |
Model | string | The model and version that generated the response. |
Thread safety | Unsafe |
---|---|
History 1
- 688 Add GenerateTextAsync
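For a multi-turn conversation, pass the returned ContextToken back into the next request. A minimal sketch, assuming textGenerator references an existing TextGenerator instance whose SystemPrompt has already been set (error handling via pcall omitted for brevity):

```lua
-- First turn: no ContextToken is provided, so a new one is generated
-- and returned in the response.
local firstTurn = textGenerator:GenerateTextAsync({
	UserPrompt = "How big is the forest near the river?",
	MaxTokens = 100, -- cap the response length
})
print(firstTurn.GeneratedText)
print("Model:", firstTurn.Model)

-- Second turn: pass the token back so the model retains the context of
-- the first exchange when answering the follow-up question.
local secondTurn = textGenerator:GenerateTextAsync({
	UserPrompt = "Are there any predators in it?",
	ContextToken = firstTurn.ContextToken,
	MaxTokens = 100,
})
print(secondTurn.GeneratedText)
```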
Seed
Type | Default |
---|---|
int | 0 |
Sets a fixed seed for the random number generator, allowing reproducible responses in cases where the same input parameters are used across multiple requests. By setting the same seed value, you can obtain identical results for debugging, testing, or evaluation purposes. The value of Seed should be an integer; non-integral values will be truncated. Default is 0.
Thread safety | ReadSafe |
---|---|
Category | Data |
Loaded/Saved | true |
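As a brief sketch (assuming textGenerator is an existing TextGenerator instance), fixing the seed should make two otherwise identical requests return the same text:

```lua
-- With a fixed seed and identical parameters, repeated requests are
-- expected to produce identical output, which helps with testing.
textGenerator.Seed = 42
local first = textGenerator:GenerateTextAsync({ UserPrompt = "Say hello." })
local second = textGenerator:GenerateTextAsync({ UserPrompt = "Say hello." })
print(first.GeneratedText == second.GeneratedText) --> expected: true
```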
SystemPrompt
Type | Default |
---|---|
string | |
The system prompt provides context to the model about its role, tone, or behavior during the conversation. This parameter can guide the model on how to respond, setting expectations like "You are an assistant" or "Use a formal tone."
Thread safety | ReadSafe |
---|---|
Category | Data |
Loaded/Saved | true |
History 1
- 688 Add SystemPrompt
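Because system prompts can be extensive, assembling one from multiple pieces keeps the script readable. A sketch, with a hypothetical persona for illustration:

```lua
-- Assemble a detailed persona; longer, more specific system prompts
-- tend to keep responses in character.
textGenerator.SystemPrompt = table.concat({
	"You are a grumpy dwarven blacksmith in a fantasy village.",
	"Answer in at most two sentences.",
	"Stay in character at all times.",
}, " ")
```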
Temperature
Type | Default |
---|---|
float | 0.7 |
Controls the "creativity" or randomness of the model's responses. Values closer to 1 increase randomness, while values closer to 0 make the responses more focused and deterministic. Values outside the accepted range will be clamped to the range of [0.4, 1.0]. Default is 0.7.
Thread safety | ReadSafe |
---|---|
Category | Data |
Loaded/Saved | true |
History 1
- 690 Add Temperature
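For example (a sketch; textGenerator is assumed to be an existing TextGenerator instance):

```lua
-- Keep answers focused and repeatable (lowest accepted value)...
textGenerator.Temperature = 0.4
-- ...or allow maximum randomness for more surprising responses.
textGenerator.Temperature = 1.0
```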
TopP
Type | Default |
---|---|
float | 0.9 |
Helps the model narrow or expand the range of possible words to sample from while generating the next token. This setting narrows the token choices to only contain words that together make up a certain percentage of total likelihood (for example, 90%). A lower TopP means the model sticks to closer and more predictable choices, while a higher TopP opens the door to more diverse and creative responses. Values outside the accepted range will be clamped to the range of [0.5, 1.0]. Default is 0.9.
Thread safety | ReadSafe |
---|---|
Category | Data |
Loaded/Saved | true |
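For example (a sketch; textGenerator is assumed to be an existing TextGenerator instance):

```lua
-- Sample only from tokens covering the top 50% of probability mass
-- for predictable phrasing (lowest accepted value)...
textGenerator.TopP = 0.5
-- ...or consider the full distribution for more varied word choices.
textGenerator.TopP = 1.0
```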