ai
ai nodes call large language models (LLMs) for moderation, summarization, and personalization tasks, so you can use modern AI models directly inside your marketing campaigns.
Summary
- Placement: Mid-flow.
- Context: Client apps and backend apps.
- Visual: No.
- Level: Intermediate (requires prompt design).
- Purpose: Let flows interpret user inputs or generate dynamic copy without leaving the campaign graph.
- Supports ResultFly-managed providers (OpenAI, Anthropic, internal models) with per-node API keys.
- Lets authors set prompts, temperature, max tokens, and output targets in the $statetree.
- Emits structured errors so flows can branch on success, failure, or rate-limit scenarios.
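The success, failure, and rate-limit branching described above can be sketched as a small dispatcher. This is a hypothetical illustration: the field names (`status`, `error.code`) follow the example payloads on this page, but the branch names and the `route_ai_result` helper are made up and not part of ResultFly's API.

```python
# Hypothetical sketch: branching a flow on an ai node's structured output.
# Field names ("status", "error.code") follow the example payloads on this
# page; the branch names and this helper are illustrative assumptions.

def route_ai_result(output: dict) -> str:
    """Return the name of the flow branch to follow."""
    if output.get("status") == "ok":
        return "success"
    code = output.get("error", {}).get("code")
    if code == "rate_limit":
        return "retry_later"  # e.g. re-enter the node after a delay
    return "failure"          # any other provider or validation error

print(route_ai_result({"status": "ok"}))  # success
print(route_ai_result({"status": "error",
                       "error": {"code": "rate_limit"}}))  # retry_later
```

Keeping the rate-limit case separate from generic failures lets a flow schedule a retry instead of abandoning the user's session.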
Example input payload:
{
  "prompt": "Summarize the user's feedback in one sentence",
  "provider": "openai:gpt-4o-mini",
  "statePath": "analysis.summary",
  "variables": {
    "feedback": "{{ state.session.latest_feedback }}"
  },
  "settings": {
    "temperature": 0.3,
    "maxTokens": 150
  }
}
Success output:
{
  "status": "ok",
  "tokensUsed": 128,
  "result": {
    "text": "Users love the onboarding but want faster reward payouts."
  },
  "stateWrites": {
    "analysis.summary": "Users love the onboarding but want faster reward payouts."
  }
}
Error output:
{
  "status": "error",
  "error": {
    "code": "rate_limit",
    "message": "Provider rejected the request due to quota exhaustion."
  },
  "stateWrites": {}
}
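The `{{ state.session.latest_feedback }}` placeholder in the input payload suggests that variables are resolved against flow state before the provider call. A minimal sketch of that resolution, assuming a dotted-path lookup (the `resolve` and `render` helpers are illustrative, not ResultFly's actual templating engine):

```python
import re

# Hypothetical sketch of resolving {{ state.* }} variables from the input
# payload against flow state. The dotted-path lookup is an assumption,
# not ResultFly's documented behavior.

def resolve(path: str, state: dict):
    """Walk a dotted path like 'session.latest_feedback' through state."""
    value = state
    for part in path.split("."):
        value = value[part]
    return value

def render(template: str, state: dict) -> str:
    """Replace each {{ state.<path> }} placeholder with its state value."""
    return re.sub(
        r"\{\{\s*state\.([\w.]+)\s*\}\}",
        lambda m: str(resolve(m.group(1), state)),
        template,
    )

state = {"session": {"latest_feedback": "Love the app, payouts are slow"}}
print(render("{{ state.session.latest_feedback }}", state))
```

Under this reading, the rendered string is what gets bound to the `feedback` variable before the prompt is sent to the provider.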