Reference API Roblox

TextGenerator

Gives access to a large language model for text generation.

Member index (5)

Seed: int
SystemPrompt: string
Temperature: float
TopP: float
GenerateTextAsync(request: Dictionary): Dictionary

Inherited from Instance:

Archivable: bool
Capabilities: SecurityCapabilities
Name: string
Parent: Instance
Sandboxed: bool
UniqueId: UniqueId
AddTag(tag: string): null
ClearAllChildren(): null
Clone(): Instance
Destroy(): null
FindFirstAncestor(name: string): Instance
FindFirstAncestorOfClass(className: string): Instance
FindFirstAncestorWhichIsA(className: string): Instance
FindFirstChild(name: string, recursive: bool = false): Instance
FindFirstChildOfClass(className: string): Instance
FindFirstChildWhichIsA(className: string, recursive: bool = false): Instance
FindFirstDescendant(name: string): Instance
GetActor(): Actor
GetAttribute(attribute: string): Variant
GetAttributeChangedSignal(attribute: string): RBXScriptSignal
GetAttributes(): Dictionary
GetChildren(): Instances
GetDebugId(scopeLength: int = 4): string
GetDescendants(): Array
GetFullName(): string
GetPredictionMode(): PredictionMode
GetStyled(name: string): Variant
GetStyledPropertyChangedSignal(property: string): RBXScriptSignal
GetTags(): Array
HasTag(tag: string): bool
IsAncestorOf(descendant: Instance): bool
IsDescendantOf(ancestor: Instance): bool
IsPredicted(): bool
IsPropertyModified(property: string): bool
Remove(): null
RemoveTag(tag: string): null
ResetPropertyToDefault(property: string): null
SetAttribute(attribute: string, value: Variant): null
SetPredictionMode(mode: PredictionMode): null
WaitForChild(childName: string, timeOut: double): Instance
children(): Instances
clone(): Instance
destroy(): null
findFirstChild(name: string, recursive: bool = false): Instance
getChildren(): Instances
isDescendantOf(ancestor: Instance): bool
remove(): null
AncestryChanged(child: Instance, parent: Instance)
AttributeChanged(attribute: string)
ChildAdded(child: Instance)
ChildRemoved(child: Instance)
DescendantAdded(descendant: Instance)
DescendantRemoving(descendant: Instance)
Destroying()
StyledPropertiesChanged()
childAdded(child: Instance)

Inherited from Object:

ClassName: string
className: string
GetPropertyChangedSignal(property: string): RBXScriptSignal
IsA(className: string): bool
isA(className: string): bool
Changed(property: string)

Description

A TextGenerator instance lets you use a large language model (LLM) to generate text based on a system prompt from you and a user prompt from the player. The most common use of this class and its methods is for creating interactive non-player characters (NPCs).

For example, in a survival experience, your system prompt for a talking animal might be "You are a very busy beaver. You end all statements by mentioning how you need to get back to work on your dam." Users could ask the beaver about water in the area, the size of a nearby forest, predators, etc.

The novelty of LLM responses can help create unique, delightful moments for players, but using the LLM effectively requires a bit of creativity and tuning. System prompts can be very extensive, so don't hesitate to include a long string with lots of detail.

The TextGenerator class currently only supports RCC authentication. As a result, you must use Team Test to test within your experience.
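The talking-beaver scenario above can be sketched in Luau. This is a minimal, untested sketch: it assumes the instance can be created with Instance.new("TextGenerator") and that the code runs in a server-side Script (and, per the note above, is tested via Team Test).

```lua
-- Minimal sketch of a TextGenerator request.
-- Assumption: Instance.new("TextGenerator") is how the instance is created.
local textGenerator = Instance.new("TextGenerator")
textGenerator.SystemPrompt = "You are a very busy beaver. You end all statements "
	.. "by mentioning how you need to get back to work on your dam."

-- GenerateTextAsync yields, so wrap the call in pcall to handle failures.
local ok, result = pcall(function()
	return textGenerator:GenerateTextAsync({
		UserPrompt = "Are there any predators near the river?",
		MaxTokens = 100,
	})
end)

if ok then
	print(result.GeneratedText)
else
	warn("Text generation failed:", result)
end
```

Wrapping the yielding call in pcall keeps a failed or throttled request from stopping the surrounding script.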


Members (5)

GenerateTextAsync

Parameters (1)

request: Dictionary

Returns (1)

Dictionary

This method returns text generated by an LLM based on the provided system and user prompts, as well as any other optional parameters that have been set.

The request argument for this method should be a dictionary with the following structure:

UserPrompt (string, optional): Prompt from the user that initiates the chat. This could be a question, statement, or command that the user wants the model to respond to.
ContextToken (string, optional): Prompt history context token containing a summarization of the previous prompt requests and responses in a conversation up to the current request. If no token is provided, a new token is generated and returned in the response. Providing a previously generated context token restores the conversation state into the current request.
MaxTokens (number, optional): The maximum number of tokens in the response generated by the model, expected to be an integer whose value is at least 1. This limits the length of the response, preventing overly long or incomplete answers. Non-integral numbers will be rounded to the nearest integer.

This method returns a dictionary with the following structure:

GeneratedText (string): The generated response.
ContextToken (string): A token summarizing the conversation state, combining any previously passed context token with the current generated response. Pass this token into subsequent requests to maintain the ongoing conversation; each subsequent request returns a new token with the updated state.
Model (string): The model and version that generated the response.
This function yields. It will block the calling thread until completion.


Tags: [Yields]
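The ContextToken round-trip described above can be sketched as a small helper that carries the conversation state between calls. This is an untested sketch; Instance.new("TextGenerator") and the ask helper are assumptions, not part of the documented API.

```lua
-- Sketch of a multi-turn conversation via ContextToken.
-- Assumption: Instance.new("TextGenerator") creates the instance.
local textGenerator = Instance.new("TextGenerator")
textGenerator.SystemPrompt = "You are a helpful shopkeeper NPC."

local contextToken = nil

-- Hypothetical helper: sends one user prompt, carrying conversation state.
local function ask(userPrompt: string): string?
	local ok, result = pcall(function()
		return textGenerator:GenerateTextAsync({
			UserPrompt = userPrompt,
			-- nil on the first turn; a fresh token is returned in the response
			ContextToken = contextToken,
		})
	end)
	if not ok then
		warn("Generation failed:", result)
		return nil
	end
	-- Save the updated token so the next request remembers this exchange.
	contextToken = result.ContextToken
	return result.GeneratedText
end

print(ask("What potions do you sell?"))
print(ask("How much does the second one cost?"))
```

Because each response returns a new token, only the most recent token needs to be stored to continue the conversation.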

Seed

Type: int
Default: 0

Sets a fixed seed for the random number generator, allowing reproducible responses in cases where the same input parameters are used across multiple requests. By setting the same seed value, you can obtain identical results for debugging, testing, or evaluation purposes. The value of Seed should be an integer. Non-integral values will be truncated. Default is 0.


SystemPrompt

Type: string
Default: ""

The system prompt provides context to the model about its role, tone, or behavior during the conversation. This parameter can guide the model on how to respond, setting expectations like "You are an assistant" or "Use a formal tone."


Temperature

Type: float
Default: 0.699999988

Controls the "creativity" or randomness of the model's responses. Values closer to 1 increase randomness, while values closer to 0 make the responses more focused and deterministic. Values outside the accepted range of [0.4, 1.0] will be clamped. Default is 0.7.


TopP

Type: float
Default: 0.899999976

Helps the model narrow or expand the range of possible words to sample from while generating the next token. This setting narrows the token choices to only those words that together make up a certain percentage of total likelihood (for example, 90%). A lower TopP means the model sticks to closer and more predictable choices, while a higher TopP opens the door to more diverse and creative responses. Values outside the accepted range of [0.5, 1.0] will be clamped. Default is 0.9.
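The Seed, Temperature, and TopP properties work together: a fixed seed plus low randomness settings yields reproducible, predictable output, which is useful for testing. This untested sketch assumes Instance.new("TextGenerator") creates the instance; the values shown are the documented clamp-range bounds.

```lua
-- Sketch: tuning generation settings for reproducible, focused output.
-- Assumption: Instance.new("TextGenerator") creates the instance.
local textGenerator = Instance.new("TextGenerator")
textGenerator.SystemPrompt = "You are a terse fortune teller."

textGenerator.Seed = 12345      -- fixed seed for reproducible responses
textGenerator.Temperature = 0.4 -- low end of the [0.4, 1.0] range: most deterministic
textGenerator.TopP = 0.5        -- low end of the [0.5, 1.0] range: narrowest sampling pool

local ok, result = pcall(function()
	return textGenerator:GenerateTextAsync({
		UserPrompt = "What does my future hold?",
	})
end)
if ok then
	print(result.GeneratedText)
end
```

For production dialogue, raising Temperature and TopP back toward their defaults (0.7 and 0.9) restores the variety that makes NPC responses feel fresh.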

