Examples
Real products built on ALDO AI.
Not demos. Not screenshots of a future state. Each entry below is a live, useful product built by ALDO’s own in-house agency — usually in hours, not weeks — and is running on the public internet right now. Click through and use them.
Want to ship something like this? Sign up and brief the agency. Or read the docs first.
picenhancer
Live · built with ALDO
A local-only image upscaler. Drop, paste, or click an image — get back a ×4, ×8, or ×16 enhanced version in seconds. No signup, $0 cost, and your image never leaves the box.
- Time to ship: brief → live, working product in a single afternoon
- Cost per use: $0 per enhancement (no cloud egress)
What it is
- One screen. Drop, paste, or click an image. Action picker: Enhance / Enhance + bg / Upscale ×4 / Upscale ×8. Strength slider for GFPGAN weight (0–100%). Live progress bar with heartbeat ticks during the long blocking inference call so the SSE stream survives the upstream HTTP/2 idle timeout.
- Diffusion-style processing animation in the AFTER pane while inference runs — the source image with progressive deblur and an SVG turbulence noise overlay that scrambles every 150 ms and clears as progress climbs.
- Privacy as a product feature: every model runs on the same VPS, no cloud egress, no third-party API, no telemetry, $0 per request. The image never leaves the box.
- The same reference agency every ALDO customer gets. The strategist that picks the upscaling strategy is a normal ALDO prompt; the agents that scaffolded the page, Dockerfile, Python pipeline, and MCP wrapper are normal ALDO composite agents.
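The heartbeat trick mentioned above is plain SSE mechanics: events are `data:` lines terminated by a blank line, and lines beginning with `:` are spec-legal comments that parsers ignore — cheap keep-alive frames while a blocking inference call produces no real progress. A minimal sketch (function names are illustrative, not the picenhancer source):

```javascript
// Format one SSE event frame carrying a JSON payload.
function sseEvent(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// SSE comment frame: ignored by EventSource parsers, but it keeps
// idle HTTP/2 proxies from timing out the stream mid-inference.
function sseHeartbeat() {
  return ':heartbeat\n\n';
}

// During a long blocking call, a timer can interleave heartbeats:
//   const tick = setInterval(() => res.write(sseHeartbeat()), 15000);
//   ... await runInference() ...
//   clearInterval(tick);
//   res.write(sseEvent({ type: 'done', imageUrl }));
```
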
How ALDO built it
- Minute 0: Founder typed the brief into the prompts UI: "online site, simple UI, upload image, AI improves quality." The product-strategist agent expanded it into a one-page spec.
- Minute 8: The tech-lead composite agent ran. It fanned out to architect (chose Real-ESRGAN over diffusion for latency), ml-engineer (picked the x4plus model), ux-researcher (drafted the drop-zone-first layout), and security-auditor (locked the privacy tier).
- Minute 22: Hono server scaffolded by the agency, wrapping realesrgan-ncnn-vulkan as a child process. /enhance returns Server-Sent Events so the bar can move on real progress, not a fake spinner.
- Minute 38: Single-page front-end shipped: drop / paste / click upload, segmented ×4/×8/×16 picker, live progress bar, side-by-side before/after, download link.
- Minute 52: Smoke-tested 220×220 → 880×880 in 1.5s, then 480×360 → 3840×2880 (×8, two chained passes) in 30s. All on local hardware, $0.
- Same day: Mounted under ai.aldo.tech/live/picenhancer — no new DNS, no new TLS cert, no separate domain. The Next.js route proxies /enhance to the pixmend backend; the same docker-compose stack co-locates the Hono server with the web app on the VPS.
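The ×8 smoke test implies chained passes: the x4plus model upscales ×4 per pass, so one plausible pipeline runs ×4 passes until it meets or exceeds the target, then resizes down. The function name and strategy below are assumptions for illustration, not the real pipeline:

```javascript
// Chain ×4 passes until the target factor is reached, then note the
// final resize. Strategy is an assumption; only the dimensions from
// the smoke test above are from the source.
function chainedDims(w, h, target, perPass = 4) {
  let passes = 0;
  let factor = 1;
  while (factor < target) {
    factor *= perPass;
    passes++;
  }
  return {
    passes,                        // model invocations needed
    raw: [w * factor, h * factor], // dims straight out of the last pass
    out: [w * target, h * target], // dims after resizing to the target
  };
}

console.log(chainedDims(480, 360, 8));
// two ×4 passes give raw 7680×5760, resized to the reported 3840×2880
```
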
Agents involved
Use it as an API or MCP tool
The same pipeline that powers the live page is callable from any agent chain — HTTP for anything that speaks fetch, MCP for Claude Desktop / Cursor / ChatGPT GPTs / ALDO composite agents.
HTTP API · curl
POST a multipart form. The response is a text/event-stream of JSON events; the final `data: {"type":"done", ...}` event carries the imageUrl.
curl -N -X POST https://ai.aldo.tech/live/picenhancer/api/enhance \
  -F file=@portrait.jpg \
  -F scale=1 \
  -F bg=1 \
  -F weight=0.7
# Then GET https://ai.aldo.tech/live/picenhancer/api/out/<filename> for the PNG.

JS · fetch + SSE
Streams the SSE; resolves with the final result once the pipeline emits `done`.
async function enhance(file) {
  const fd = new FormData();
  fd.append('file', file);
  fd.append('scale', '1');    // 1 / 4 / 8 / 16
  fd.append('bg', '1');       // 0 = leave background alone
  fd.append('weight', '0.7'); // GFPGAN strength

  const res = await fetch('https://ai.aldo.tech/live/picenhancer/api/enhance', {
    method: 'POST',
    body: fd,
  });

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buf = '';
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    // SSE events are separated by a blank line; each carries one `data:` line.
    let i;
    while ((i = buf.indexOf('\n\n')) >= 0) {
      const line = buf.slice(0, i).split('\n').find(l => l.startsWith('data:'));
      buf = buf.slice(i + 2);
      if (!line) continue;
      const ev = JSON.parse(line.slice(5).trim());
      if (ev.type === 'done') return ev; // { imageUrl, faces, scale, ... }
      if (ev.type === 'progress') {
        // update UI here
      }
    }
  }
}

Python · requests + sseclient
Same wire shape; works identically from any HTTP client.
import json, requests
from sseclient import SSEClient

with open('portrait.jpg', 'rb') as f:
    res = requests.post(
        'https://ai.aldo.tech/live/picenhancer/api/enhance',
        files={'file': ('portrait.jpg', f, 'image/jpeg')},
        data={'scale': '1', 'bg': '1', 'weight': '0.7'},
        stream=True,
    )

done = None
for ev in SSEClient(res).events():
    payload = json.loads(ev.data)
    if payload['type'] == 'done':
        done = payload
        break  # server closes the stream after done; stop reading either way

print(done['imageUrl'], done['faces'], done['enhanceMs'])

MCP · Claude Desktop / Cursor / ChatGPT GPTs
Drop into your MCP client config. The server runs as a stdio child process and exposes one tool: `picenhancer.enhance`.
// claude_desktop_config.json (or any MCP client that speaks stdio)
{
  "mcpServers": {
    "picenhancer": {
      "command": "npx",
      "args": ["-y", "@aldo-ai/mcp-picenhancer"]
      // Optional override:
      // "env": { "PICENHANCER_BASE_URL": "https://ai.aldo.tech/live/picenhancer/api" }
    }
  }
}

MCP · tool call shape
Once the MCP server is registered, any chat / agent can call:
// tools/call request body
{
  "name": "picenhancer.enhance",
  "arguments": {
    "image": "data:image/jpeg;base64,/9j/4AAQ...", // or an https:// URL
    "mode": "enhance",  // enhance | enhance-bg | upscale-x4 | upscale-x8
    "strength": 0.7     // GFPGAN weight 0.0–1.0
  }
}

// Response (structuredContent):
// {
//   imageUrl, scale, bg, weight, faces,
//   origDims, enhancedDims, origBytes, enhancedBytes, enhanceMs
// }

ALDO agent spec
Wire the MCP tool into any composite agent in your tenant — same pattern as the bundled aldo-fs MCP server.
# agency/your-agent.yaml
name: my-image-agent
tools:
  permissions:
    mcp:
      - picenhancer.enhance:
          allow: ["*"]
prompt: |
  When the user gives you a portrait photo, call picenhancer.enhance
  with mode="enhance" and strength=0.7. Return the resulting imageUrl
  and the face count to the user.

The same pattern is yours: brief the agency, watch the composite run, ship the artefact. Read the docs or sign up.
What goes on this page
A project lands here when (1) a visitor can click the live link and use the thing, (2) most of the build was driven by ALDO agents (briefs, design, code, review), and (3) we can show the timeline honestly — including which agents ran, what they decided, and how long it actually took. If your team builds something that fits, email info@aldo.tech and we’ll feature it.