Wrapper around the Groq API for large language models fine-tuned for chat.

The Groq API is compatible with the OpenAI API, with some limitations. View the full API reference at:

https://docs.api.groq.com/md/openai.oas.html

To use, you should have the GROQ_API_KEY environment variable set.

Example

import { ChatGroq } from "@langchain/groq";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatGroq({
  temperature: 0.9,
  apiKey: process.env.GROQ_API_KEY,
});

const response = await model.invoke([new HumanMessage("Hello there!")]);
console.log(response);

Hierarchy

  • BaseChatModel<ChatGroqCallOptions>
      ↳ ChatGroq

Constructors

  • new ChatGroq(fields?: ChatGroqInput): ChatGroq

Properties

client: Groq
model: string = "mixtral-8x7b-32768"
modelName: string = "mixtral-8x7b-32768"
streaming: boolean = false
temperature: number = 0.7
maxTokens?: number
stop?: string[]
stopSequences?: string[]
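
The optional fields above can be set when the model is constructed. A minimal sketch (the values are illustrative, not defaults):

import { ChatGroq } from "@langchain/groq";

const configuredModel = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0.7,
  maxTokens: 1024,         // cap on tokens generated per completion
  stopSequences: ["\n\n"], // cut generation at the first blank line
  streaming: false,        // set true to stream tokens as they are generated
});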

Methods

  • completionWithRetry(request, options?)

    Parameters

    • request: ChatCompletionCreateParamsStreaming
    • Optional options: any

    Returns Promise<AsyncIterable<ChatCompletionChunk>>
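
    A sketch of calling this overload directly, assuming the model instance from the example above (most callers use the higher-level stream() instead); the request shape follows the Groq SDK's OpenAI-compatible types:

    const stream = await model.completionWithRetry({
      model: "mixtral-8x7b-32768",
      messages: [{ role: "user", content: "Hello there!" }],
      stream: true,
    });
    for await (const chunk of stream) {
      // Each chunk carries an incremental delta of the completion text.
      process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
    }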

  • completionWithRetry(request, options?)

    Parameters

    • request: ChatCompletionCreateParamsNonStreaming
    • Optional options: any

    Returns Promise<ChatCompletion>
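
    The non-streaming overload resolves to a single ChatCompletion; a sketch under the same assumptions:

    const completion = await model.completionWithRetry({
      model: "mixtral-8x7b-32768",
      messages: [{ role: "user", content: "Hello there!" }],
    });
    console.log(completion.choices[0]?.message?.content);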

  • withStructuredOutput<RunOutput>(outputSchema, config?)

    Type Parameters

    • RunOutput extends Record<string, any> = Record<string, any>

    Parameters

    • outputSchema: Record<string, any> | ZodType<RunOutput, ZodTypeDef, RunOutput>
    • Optional config: StructuredOutputMethodOptions<false>

    Returns Runnable<BaseLanguageModelInput, RunOutput, RunnableConfig>
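
    A sketch of binding a Zod schema, assuming the model instance from the example above (the schema and prompt are illustrative):

    import { z } from "zod";

    const joke = z.object({
      setup: z.string().describe("The setup of the joke"),
      punchline: z.string().describe("The punchline"),
    });

    const structuredModel = model.withStructuredOutput(joke);
    const result = await structuredModel.invoke("Tell me a joke about cats.");
    // result is typed as RunOutput: { setup: string; punchline: string }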

  • withStructuredOutput<RunOutput>(outputSchema, config?)

    Type Parameters

    • RunOutput extends Record<string, any> = Record<string, any>

    Parameters

    • outputSchema: Record<string, any> | ZodType<RunOutput, ZodTypeDef, RunOutput>
    • Optional config: StructuredOutputMethodOptions<true>

    Returns Runnable<BaseLanguageModelInput, {
        parsed: RunOutput;
        raw: BaseMessage;
    }, RunnableConfig>
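
    With includeRaw: true in the config, the runnable returns both the raw model message and the parsed value; a sketch reusing the schema above:

    const structuredWithRaw = model.withStructuredOutput(joke, {
      includeRaw: true,
    });
    const { raw, parsed } = await structuredWithRaw.invoke(
      "Tell me a joke about dogs."
    );
    // raw is the underlying BaseMessage; parsed is the RunOutput.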

Generated using TypeDoc