AI Micro-Service · Workflow Compression · Solo Build

Nano Upscaler

I did not design another image editor. I designed a faster decision system for visual asset prep.

A focused AI micro-service that turns low-quality source images into production-ready assets with explicit control over resolution and aspect ratio. From workflow bottleneck to live product in one sprint.

Role Product Designer + Design Engineer
Stack Google AI Studio + Gemini
Format Solo · Founder-mode
Type AI Micro-SaaS
Nano Upscaler application running on a Mac, showing source image upload with resolution and orientation controls on the left, and enhanced output preview on the right
Nano Upscaler — upload, configure, enhance, download
1 Screen Workflow
3 Output Resolutions
5 Aspect Presets
3 Input Methods

I Found the Same Asset-Rescue Loop Hiding in Every Creative Workflow

Low-quality source images quietly break a surprising amount of digital work. A presentation looks weak, a landing page feels cheap, a thumbnail underperforms—and the team loses time jumping between tools to make one asset usable.

The pain was not a lack of creative tools. The pain was a fragmented multi-tool recovery workflow that repeated across every project—screenshots, old files, compressed sources, client scraps, chat exports.

Core Scenario
A content manager needs a hero image for a campaign landing page. The only source is a 400px screenshot from a Slack thread. She opens Photoshop, upscales manually, switches to an online enhancer, re-exports, adjusts the crop—30 minutes of rescue work for one asset.
Inconsistent Sources
Teams started with images too small, blurry, or compressed for the actual usage context—from screenshots to old files to chat exports.
Fragmented Workflow
Upscaling, reframing, exporting, and quality-checking happened across multiple disconnected tools every single time.
Black-Box AI
Most AI tools generated “something better” but gave weak control over the exact output needed for production—wrong size, wrong ratio, wrong format.
Cost-to-Value Gap
For narrow recurring tasks, heavy creative suites created too much interface and too little speed. The subscription cost rarely matched the job.
Product Designer · UX Strategist · Design Engineer · AI Product Design · Solo · End-to-End

I Mapped the Asset-Recovery Pipeline Nobody Had Productized

I ran a workflow audit across my own creative production tasks: presentations, landing pages, social media assets, video thumbnails, ad campaigns. Every workflow followed the same pattern—and the bottleneck was always the image rescue step.

Discovery confirmed: users needed explicit output control (not magic), the task was short enough for a single screen, and combining upscale + reframe into one action would eliminate the multi-tool shuffle entirely.

Deliverables
  • Workflow audit across 5 creative production contexts
  • Jobs-to-be-done definition: “Turn weak source image into usable asset at the right size, right now”
  • Competitive review of online upscalers, AI enhancers, and heavyweight editors
  • Gap analysis: “powerful but heavy” vs. “fast but shallow”

I Defined the MVP by What I Deliberately Excluded

The product could easily become another overloaded editing tool. I rejected that path and narrowed the scope to three user decisions: source image, target resolution, target orientation.

Key Decision
I rejected a multi-step editor, batch mode, visual comparison tools, and configurable enhancement settings from v1. The value came from workflow compression, not feature breadth.
3 Resolutions
1K, 2K, 4K—directly tied to common production usage contexts from social media to print.
5 Orientations
Square, Landscape (16:9), Portrait (9:16), Wide (4:3), Tall (3:4)—covering every major media format.
3 Input Methods
Click-to-upload, drag-and-drop, clipboard paste—three zero-friction paths to get the source image in.
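The case study doesn't show implementation code, so here is a minimal sketch of how all three input paths can normalize to one handler. Both drop and paste events expose `DataTransferItem`-like objects in the browser; the function and names below (`firstImageFile`, `ItemLike`) are illustrative assumptions, not the product's actual code.

```typescript
// All three input methods (file picker, drag-and-drop, clipboard paste)
// can funnel into the same source-selection logic. This helper picks the
// first image item out of a DataTransferItem-like list.
interface ItemLike<T> {
  kind: string;            // "file" or "string" in the DOM API
  type: string;            // MIME type, e.g. "image/png"
  getAsFile(): T | null;   // File in the browser; generic here for testability
}

function firstImageFile<T>(items: ItemLike<T>[]): T | null {
  for (const item of items) {
    if (item.kind === "file" && item.type.startsWith("image/")) {
      return item.getAsFile();
    }
  }
  return null;
}

// Browser-only wiring sketch (hypothetical handler names):
// dropZone.addEventListener("drop", e => { e.preventDefault();
//   handleSource(firstImageFile([...e.dataTransfer!.items])); });
// document.addEventListener("paste", e =>
//   handleSource(firstImageFile([...e.clipboardData!.items])));
```

Routing all three paths through one function keeps the zero-friction promise honest: the user never has to care which entry point they used.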
Why Narrow Wins
A tool that does one job fast is more valuable than a platform that does five jobs slowly. The restraint is the strategy—it keeps cognitive load near zero and positions the product as a repeatable pattern for other workflow bottlenecks.
Deliverables
  • MVP scope definition with explicit exclusion rationale
  • Three-decision interaction model: source, resolution, orientation
  • JTBD: “When I have a low-quality image and need it usable now, help me turn it into a production-ready asset with control over the final format”

I Built the Interface Around Three Explicit Decisions, Not Hidden Settings

I designed a single-screen split workspace: control panel on the left, output preview on the right. Every important choice is visible, every stage has a clear state.

The dark interface keeps attention on the imagery, not the chrome. Control blocks are styled as distinct decision groups—numbered steps that guide without explaining.

Trade-off
I chose a compact utility-tool aesthetic over a spacious editorial layout. The product is a work tool, not a showcase—density signals professionalism for the target users.
Deliverables
  • Single-screen split layout with explicit 3-step flow
  • Dark UI with orange accent for active states and CTA
  • Loading state with progress indicator and model attribution
  • Full-screen output preview with one-click download
  • API key gate and switch path for cost transparency
Nano Upscaler empty state with upload area, resolution buttons, orientation options, and Ready to Upscale placeholder
Empty state — numbered steps guide the user through three decisions

I Designed the AI Layer So Users Control the Output, Not the Model

Most AI upscaling tools hide everything behind a single “enhance” button. I took the opposite approach—exposing production-relevant controls while hiding model complexity.

The user chooses resolution and aspect ratio. The system translates those choices into the right Gemini API parameters, handles the generation, and returns a result that fits the downstream use case exactly.
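A minimal sketch of what that translation layer could look like. The tier sizes, ratio table, and function name (`targetDimensions`) are assumptions for illustration; the actual Gemini parameter mapping is not shown in the case study.

```typescript
// Hypothetical mapping from the two user decisions (resolution tier,
// orientation preset) to concrete pixel dimensions for the API request.
type Resolution = "1K" | "2K" | "4K";
type Orientation = "square" | "landscape" | "portrait" | "wide" | "tall";

// Assumed long-edge sizes per tier.
const LONG_EDGE: Record<Resolution, number> = { "1K": 1024, "2K": 2048, "4K": 4096 };

// The five aspect presets from the MVP scope.
const RATIO: Record<Orientation, [number, number]> = {
  square: [1, 1],
  landscape: [16, 9],
  portrait: [9, 16],
  wide: [4, 3],
  tall: [3, 4],
};

function targetDimensions(res: Resolution, o: Orientation): { width: number; height: number } {
  const [w, h] = RATIO[o];
  // Scale so the longer side hits the chosen tier, rounding to even pixels.
  const scale = LONG_EDGE[res] / Math.max(w, h);
  const even = (n: number) => 2 * Math.round((n * scale) / 2);
  return { width: even(w), height: even(h) };
}
```

The point of the layer is exactly this shape: user-facing vocabulary on one side, model-facing parameters on the other, with no setting exposed that doesn't map to a production decision.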

Deliverables
  • Gemini API integration with parameter mapping from user selections
  • Visible processing state with model attribution (“Nano Banana 2 is processing”)
  • API key management UI with switch path for cost awareness
  • Error handling for model failures, rate limits, and invalid inputs
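The error-handling deliverable above can be sketched as a retry wrapper: transient failures such as rate limits are retried with exponential backoff, while permanent errors (invalid input, auth) surface immediately. The function name, attempt count, and delays are illustrative assumptions.

```typescript
// Illustrative retry wrapper for generation calls. The caller supplies a
// predicate deciding which errors are transient (e.g. HTTP 429 rate limits).
async function withRetry<T>(
  call: () => Promise<T>,
  isTransient: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (err) {
      // Give up on permanent errors or once attempts are exhausted.
      if (attempt >= maxAttempts || !isTransient(err)) throw err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Keeping retries out of the UI layer means the visible processing state stays honest: the user sees one "enhancing" state while transient hiccups are absorbed underneath.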
Nano Upscaler with a completed upscale showing the source image on the left and enhanced high-resolution output on the right with download button
Completed upscale — source thumbnail on left, production-ready output on right
Upscaling in progress with circular progress indicator and Enhancing Pixels message
Processing state — intentional loading with model name visible

From Workflow Pain to Shipped Product in One Sprint

01
Bottleneck Discovery
Identified the repeated image-rescue step across my own creative production workflows.
02
Competitive Audit
Reviewed editors, AI enhancers, and online upscalers to find the gap between heavy and shallow.
03
Scope Definition
Narrowed the MVP to three user decisions: source, resolution, orientation.
04
Interaction Design
Single-screen split layout with explicit numbered decision groups and large output preview.
05
AI Integration
Connected Gemini image model with parameter mapping from user controls to API requests.
06
Ship + Validate
Built in Google AI Studio, validated with real creative tasks across 5 production contexts.

I Compressed a Multi-Tool Rescue Workflow Into One Focused Action

1 Screen Workflow vs. 3-4 tools previously
3 Output Resolutions 1K · 2K · 4K
5 Aspect Presets Every major media format
3 Input Methods Upload · Drag · Paste

The product replaced a fragmented multi-tool workflow with a single focused action. Users go from weak source image to production-ready asset in seconds—with full control over the output format instead of trusting a black box.

What This Unlocked
Faster asset readiness for design, content, video, and advertising workflows. A reusable product pattern proving that focused AI micro-services can replace expensive software subscriptions for narrow recurring tasks.

What This Project Taught Me About AI Product Design

01
The product did not need more intelligence—it needed stronger boundaries
The more clearly I defined the job, the more valuable the AI became. Constraint was the design tool, not the model capability.
02
Trust in AI output increases when the user controls the input constraints
Exposing resolution and aspect ratio as explicit choices transformed the experience from “hope the AI guesses right” to “I told it exactly what I need.”
03
A good micro-SaaS wins by removing interface, not adding capability
Every feature I excluded made the product faster and more legible. The restraint reads as senior product judgment—not as a limitation.
What I'd Ship Next

Before/after comparison slider for quality validation. Batch mode for studio workflows processing multiple assets.

Preset memory by use case so teams can save their common output configurations.

Current Limitations

Single-image processing only—no batch mode yet. No output history or versioning.

Export limited to image format—YAML/JSON metadata export planned for asset management integration.

I Used AI as Workflow Infrastructure, Not as Visual Hype

This project is AI-native in two ways: Gemini powers the core product feature (image upscaling and reframing), and Google AI Studio accelerated the entire build from concept to working product.

Google AI Studio · Gemini Image Model · Gemini API · API Key Management
Gemini Image Generation API
Powered the core upscaling and reframing capability—taking low-quality source images and generating enhanced outputs at user-specified resolution and aspect ratio.
Made production-quality image enhancement accessible inside a lightweight product surface without a custom ML pipeline.
AI: image enhancement + aspect reframing · Human: product framing, UX strategy, parameter design, quality validation
Google AI Studio
Rapid product-building environment for prototyping the UI, testing model interactions, and iterating on the generation pipeline.
Shortened the distance from product idea to functional shipped tool—enabling a solo build sprint without a full engineering pipeline.
AI: implementation acceleration · Human: product definition, scope decisions, interaction design, commercial framing

If I Can Turn a Workflow Bottleneck Into a Shipped AI Micro-Service Solo, Imagine What I Can Build for Your Team

I design and ship focused AI tools end-to-end—from opportunity framing to live deployment.

Start a conversation