Model
Calls an LLM to generate a response (free-form text or structured JSON). Configure prompts, parsing, and tools here.
When to use
- You need the model to answer a question, transform text, or produce structured data.
- You want the model to call tools/APIs (function-calling) as part of reasoning.
What it does
- Consumes upstream input (e.g., from User Utterance).
- Runs an LLM with your System Prompt and settings.
- Outputs messages (and optionally structured JSON if you set an Output Parser).
- Optionally calls Tools (Custom Tool / REST API with Is Tool on).
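The sketch below walks through that flow in plain Python. It is purely conceptual: call_llm is a stand-in stub, and none of these names are part of IntentAI's runtime API; the state shape simply mirrors the parser examples shown later on this page.

```python
# Conceptual walk-through of the node's flow. call_llm is a stand-in stub;
# none of these names belong to IntentAI's runtime API.

def call_llm(system_prompt, messages, temperature=0.0, tools=None):
    # Stub: a real call would send the prompt, messages, and tool schemas to the LLM.
    return {"text": f"(reply to: {messages[-1]})"}

def input_parser(state, config):
    # Input Parser: choose what upstream data (e.g. the message history) goes to the model.
    return state["messages"]

def output_parser(result):
    # Output Parser (optional): shape the raw result before it is saved downstream.
    return result["text"]

state = {"messages": ["Hi, what can IntentAI do?"]}   # e.g. from User Utterance
messages = input_parser(state, config={})
raw = call_llm("You are IntentAI Assistant...", messages, temperature=0)
print(output_parser(raw))
```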
You can choose between two ways of connecting a model:
- Option 1 – Provided Models: Use IntentAI's managed model credentials.
- Option 2 – Custom Credentials: Use your own Azure OpenAI credentials.
Option 1 - Provided Models
Use this when you want to quickly connect to ready-to-use models provided by IntentAI.
| Field | Description | Example |
|---|---|---|
| Node ID * | Unique identifier for this node. | Model_3 |
| Model Source * | Choose provided_models. | provided_models |
| Temperature | Controls output randomness (0 = deterministic, 1 = creative). | 0 |
| System Prompt | Defines the model's tone, role, and context. | You are IntentAI Assistant, a friendly expert on Chatbots... |
| Input Parser | Defines what data is sent into the model. | `def input_parser(state, config): return state['messages']` |
| Output Parser | (Optional) Processes model output before saving it. | `def output_parser(result): return result['text']` |
| Tools | (Optional) External functions the model can call. | ["web_search", "get_user_profile"] |
| Supported Models Type * | Select from available model list. | gpt-4o-mini |
- The "asterisk (*)" means the field is required
- Use a low temperature for tasks requiring accuracy (e.g., data extraction, structured JSON).
- Use a high temperature for creative writing, brainstorming, or idea generation.
- System Prompt is where you can set tone and expertise — think of it as the "personality core" of your assistant.
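Tying the Output Parser field to the low-temperature / structured-JSON note above, here is a minimal sketch of an Output Parser that expects JSON. It assumes the raw result exposes the model text under 'text', as in the table's example; the parsed field names are illustrative, not a fixed IntentAI schema.

```python
import json

def output_parser(result):
    # Expect the model's text to be a JSON object, e.g. {"intent": "...", "confidence": 0.9}.
    # These field names are illustrative only.
    try:
        return json.loads(result["text"])
    except (KeyError, json.JSONDecodeError):
        # Fall back to the raw text if the model did not return valid JSON.
        return {"raw": result.get("text", "")}
```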
Option 2 - Custom Credentials
When "Use your own credentials" is selected, additional Azure fields appear. These replace the Supported Models Type dropdown.
| Field | Description | Example |
|---|---|---|
| Model * | The model name you deployed in Azure. | gpt-4 |
| Azure Model Options > OpenAI API Key * | Your Azure OpenAI API key. | sk-xxxxxxxxxxxx |
| Azure Model Options > Azure Endpoint * | Your Azure OpenAI resource endpoint. | https://my-instance.openai.azure.com/ |
| Azure Model Options > Deployment Name * | The deployment name configured in your Azure portal. | gpt4-prod |
| Azure Model Options > OpenAI API Version * | API version supported by your deployment. | 2024-02-01 |
- When using custom credentials, the Supported Models Type field is not displayed.
- All other fields (Temperature, System Prompt, Parsers, and Tools) remain available and behave the same as with Provided Models.
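Before wiring these values into the node, you can sanity-check them outside IntentAI. The sketch below assumes the official openai Python package (v1+); it is not how IntentAI connects internally, and the key, endpoint, and deployment name are the placeholder values from the table.

```python
from openai import AzureOpenAI

# Placeholder values -- substitute your own key, endpoint, deployment, and API version.
client = AzureOpenAI(
    api_key="sk-xxxxxxxxxxxx",
    azure_endpoint="https://my-instance.openai.azure.com/",
    api_version="2024-02-01",
)

# With Azure, `model` is the deployment name from your Azure portal, not the base model name.
response = client.chat.completions.create(
    model="gpt4-prod",
    messages=[
        {"role": "system", "content": "You are IntentAI Assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```

If this call succeeds, the same key, endpoint, deployment name, and API version should work in the node's Azure fields.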
Default output path (example):
Replace Model Name with your actual node label (case-sensitive).