Free · Windows App
Build visual pipelines where multiple models collaborate, critique, and refine each other's output. You design the structure — they do the work.
No account · Bring your own API key · Zero telemetry · No subscription
Why structure helps
When a task is complex, a single prompt asks too much of one model. A pipeline distributes the work across roles built for each step.
The workspace
① Left sidebar. Add Ollama, LM Studio, DigitalOcean, Novita AI — or any OpenAI-compatible endpoint. Paste an API key and models appear immediately.
② Center canvas. Your Flow as a visual graph. Each node is a Model (a direct LLM call) connected in layers. Draw edges, rearrange freely, or build the graph from a DNA string.
③ Right panel. Describe your task once. Press Run. Final output arrives with Markdown and SVG preview. Enable Examiner for automatic quality scoring — it scores 0–100 and tells you why.
How it works
Load a Flow, describe your task, collect refined output. The Flow handles coordination — you stay focused on the brief.
Start from a pre-built Flow, or open a blank canvas and compose your own layers — add nodes, connect them, set a system prompt for each.
Enter your brief in the right panel. Assign a model to each node. Press Run — the Flow handles prompting strategy, coordination, and iteration.
Review results with Markdown and SVG preview. Enable Examiner for quality scoring. Use Cycle mode to re-run automatically until the score meets your target.
Every result auto-saves to disk. Export the entire workspace as JSON. Share the Flow structure with a single DNA string — anyone can load it instantly.
What's included
Six ready-made Flows open on first launch — brand strategy, pitch decks, logo design, visual identity, deep research, generative prompts. Each is a full multi-model pipeline. Load one and run.
Build a pipeline by connecting nodes and layers. Model nodes make direct LLM calls. The whole graph is visible at a glance — draw edges, reorder layers, adjust prompts inline.
An independent model scores final output 0–100 with a verdict and critique. You set the passing score. Enable Cycle mode — it keeps re-running automatically until the target is hit.
Every Flow encodes into a compact text string. Paste a DNA string — the graph builds itself. Edit the string — the graph updates. Design complex topologies without touching the canvas.
Every run, every result, every Flow configuration auto-saves to local JSON. Nothing is lost between sessions. Open the app tomorrow — your work is exactly where you left it.
Export your entire workspace as a single JSON file. Import it on another machine in one click. Share a configured Flow with a colleague — they get the exact same setup.
Ollama, LM Studio, DigitalOcean, Novita AI — and any OpenAI-compatible endpoint. Mix providers within a single pipeline. Switch models per node.
Full Ollama integration. Run complete multi-model pipelines on local hardware with no internet. Everything stays on your machine — no keys, no cloud, no network latency.
Watch every model think in real time. Tokens appear live in each card as they arrive. Parallel execution across layers — nodes in the same layer run simultaneously.
No accounts. No analytics. No telemetry. Your prompts, API keys, and results never leave your machine. Verifiable with any network monitoring tool.
Supported providers
Connect cloud providers with an API key, or run fully offline with local runtimes. Mix them freely — each node in your pipeline can use a different model.
Run Llama, Mistral, Qwen, DeepSeek and dozens of other models entirely on your machine. No key, no internet required.
Desktop runtime for local models with a built-in OpenAI-compatible server. Great if you prefer a GUI for model management.
Serverless GPU inference from an established cloud provider. Good for teams already in the DigitalOcean ecosystem.
Fast, cost-efficient inference on open-source models. Competitive pricing for high-volume pipeline runs.
If it speaks the OpenAI chat completions protocol, it works. Point to any endpoint — self-hosted, corporate proxy, or new provider.
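The protocol itself is the familiar chat-completions shape. As a minimal illustration (not FlowGraph Pro's internal code), here is one call in Python against an arbitrary base URL — the URL, key, and model name below are placeholders, e.g. a local Ollama server:

```python
import json
import urllib.request

def build_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build one OpenAI-compatible chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # Local runtimes such as Ollama accept any key here
            "Authorization": f"Bearer {api_key}",
        },
    )

def chat(base_url: str, api_key: str, model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = build_request(base_url, api_key, model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any endpoint that answers this shape — self-hosted, corporate proxy, or a brand-new provider — can sit behind a node.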
Flows
Six pre-built Flows open on first launch. Each is a real working pipeline — layers of model nodes with distinct roles. Load one, describe your task, and run.
Brand · 4 layers · 10 models
End-to-end brand creation. Copywriter, visual director, marketing strategist, risk analyst, creative director, CFO, COO, and a customer simulator all weigh in before final assembly.
Pitch · 5 layers · 13 models
Turns a raw idea into a slide-by-slide investor script. Market sizing, unit economics, competitive analysis, visual direction, and an objection simulator — before the final deck.
Design · 4 layers · 8 models
Four model nodes generate parallel variants — minimalist, emblem, abstract, typographic. A critic and fact-checker refine before the final SVG code is assembled.
Visual · 4 layers · 9 models
For visual concepts and brand identity. Four parallel model nodes cover historical context, psychology, trends, and functionality — then consolidate and critique.
Research · 4 layers · 7 models
Breaks any complex question into sub-queries, runs three parallel model nodes, then consolidates, critiques, and fact-checks before delivering a synthesis.
Creative · 4 layers · 10 models
Turns a short image idea into a polished, model-ready prompt for generative art systems. Visual storyteller, art director, cinematographer, and prompt engineer in one pipeline.
Quick Start
01
Run FlowGraph Pro.exe. No installation, no admin rights. Fully portable — runs from any folder, any drive.
Windows 10+ and .NET 8 Runtime required.
02
Expand a provider in the left sidebar. Paste your API key. Available models load immediately — no restart, no config file.
For offline use, start Ollama locally — no key needed.
03
Choose a pre-built Flow from the list, or start fresh. Add nodes, connect layers, set system prompts for each. Or paste a Flow DNA string — the graph builds itself.
04
Enter your task in the right panel. Press ▶ Run. Enable Examiner for quality scoring, Cycle for automatic re-runs until your target score is hit.
Flow
A named pipeline tab — its graph, settings, and full configuration. Auto-saves to JSON. You can have multiple Flows open at once.
Model node
A direct call to a language model. Has its own system prompt and model selection. The simplest unit in a Flow.
Layer
A horizontal step. All nodes in a layer execute in parallel and pass output to the next layer.
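How the app schedules this internally isn't documented here, but conceptually a layer is a fan-out: every node receives the same input and runs concurrently. A Python sketch with placeholder node functions:

```python
from concurrent.futures import ThreadPoolExecutor

def run_layer(nodes, shared_input):
    """Run all nodes of one layer concurrently, preserving node order.

    `nodes` is a list of callables standing in for model calls;
    each receives the same input from the previous layer.
    """
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return list(pool.map(lambda node: node(shared_input), nodes))
```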
Examiner
An independent model that scores final output 0–100 with a verdict and critique. Powers automatic quality gates in Cycle mode.
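Cycle mode is conceptually a score-gated retry loop around the Examiner. A Python sketch with placeholder `run` and `score` functions (illustrative names, not the app's API):

```python
from typing import Callable, Tuple

def cycle(run: Callable[[], str],
          score: Callable[[str], int],
          target: int,
          max_runs: int = 5) -> Tuple[str, int]:
    """Re-run the pipeline until the examiner score meets the target.

    Keeps the best-scoring output seen so far, so a capped run
    still returns the strongest attempt.
    """
    best_output, best_score = "", -1
    for _ in range(max_runs):
        output = run()
        s = score(output)
        if s > best_score:
            best_output, best_score = output, s
        if s >= target:
            break  # quality gate passed
    return best_output, best_score
```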
Flow DNA
A compact text encoding of a Flow's graph. Paste it to build the graph instantly. Edit the string — the graph updates. No canvas required.
Workspace
All your Flows together. Auto-saved to local JSON after every change. Export as a single file to back up or move to another machine.
Download
Free. No account. Fully portable. Pre-built Flows open on first launch — connect a provider or run Ollama locally and you're ready.
Download Alpha v0.1 — Windows
FlowGraph Pro makes no network requests of its own. The only traffic it generates is the calls you explicitly trigger to whichever API provider you've configured. No usage analytics, no crash reporting, no telemetry — nothing is sent anywhere without your action. You can verify this with any network monitoring tool (Wireshark, Fiddler, or Windows Firewall logs).