Optional fields: Partial&lt;HFInput&gt; & BaseLLMParams

apiKey
API key to use.

endpointUrl
Custom inference endpoint URL to use.

frequencyPenalty
Penalizes repeated tokens according to frequency.

includeCredentials
Credentials to use for the request. If this is a string, it is passed through as-is. If it is a boolean, true maps to "include" and false sends no credentials at all.

maxTokens
Maximum number of tokens to generate in the completion.

model
Model to use.

stopSequences
The model stops generating text when one of the strings in the list is generated.

temperature
Sampling temperature to use.

topK
Integer defining how many of the highest-probability tokens are considered in the sampling operation when creating new text.

topP
Total probability mass of tokens to consider at each step.
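Taken together, these optional fields are supplied to the constructor. A minimal sketch follows; the model id, placeholder key, and `@langchain/community` import path are assumptions for illustration, not values from this page:

```typescript
import { HuggingFaceInference } from "@langchain/community/llms/hf";

// All fields are optional; omitted ones fall back to the class defaults.
const llm = new HuggingFaceInference({
  model: "gpt2",             // assumed model id, for illustration only
  apiKey: "<your-hf-api-key>",
  maxTokens: 100,            // cap on generated tokens
  temperature: 0.7,          // sampling temperature
  topK: 50,                  // restrict sampling to the 50 most likely tokens
  topP: 0.95,                // nucleus sampling probability mass
  frequencyPenalty: 1.1,     // penalize repeated tokens
  stopSequences: ["\n\n"],   // stop once this string is generated
});
```

Fields not listed here (such as endpointUrl and includeCredentials) can be passed in the same object.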
Optional options: Omit&lt;BaseLLMCallOptions, never&gt;
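Call options can also be supplied per invocation. A sketch, assuming the stop and signal fields that BaseLLMCallOptions commonly exposes (names are assumptions here):

```typescript
import { HuggingFaceInference } from "@langchain/community/llms/hf";

const llm = new HuggingFaceInference({ apiKey: "<your-hf-api-key>" });

// Per-call options apply to this request only;
// `stop` and `signal` are assumed BaseLLMCallOptions fields.
const text = await llm.invoke("Write a haiku about autumn:", {
  stop: ["\n\n"],
  signal: AbortSignal.timeout(10_000), // abort if the request takes > 10 s
});
```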
Class implementing the Large Language Model (LLM) interface using the Hugging Face Inference API for text generation.
Example
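The example body did not survive extraction; a minimal usage sketch, with the model id, placeholder key, and import path as assumptions:

```typescript
import { HuggingFaceInference } from "@langchain/community/llms/hf";

const model = new HuggingFaceInference({
  model: "gpt2",               // assumed model id
  apiKey: "<your-hf-api-key>",
});

// invoke() sends the prompt to the Hugging Face Inference API and
// resolves with the generated completion text.
const res = await model.invoke("1 + 1 =");
console.log({ res });
```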