I'm Building an AI-First CAD Tool. Here's Why.

Legacy CAD software wasn't designed for AI. I think it's time for something new.

I’ve been thinking a lot about CAD software lately.

Not because I love it — actually, the opposite. Every time I try to design something for 3D printing, I hit the same wall: the tools are powerful, but they’re hard. SolidWorks, Fusion 360, even Blender — they all assume you’ve invested hundreds of hours learning their particular way of doing things.

Meanwhile, I use AI to write code every day. Claude, GPT — they’re genuinely useful for building things. So why can’t I just tell an AI what I want to make?

The Problem with “AI Features”

Every legacy CAD company is scrambling to add AI features. Autodesk has generative design. SolidWorks is adding copilot-style assistants. But here’s the thing: bolting AI onto software designed in the 90s doesn’t make it AI-native.

It’s like adding a ChatGPT widget to Microsoft Word and calling it revolutionary. The underlying paradigm hasn’t changed.

What “AI-First” Actually Means

When I say AI-first CAD, I mean:

  1. Natural language as a first-class input — “Make me a phone stand with a 45-degree angle” should just work
  2. Code as the model format — Instead of proprietary binary files, your design is readable, version-controllable code
  3. Iterative refinement through conversation — “Make it thicker” shouldn’t require finding the right menu

Think about how we write code with AI assistants now. You describe what you want, the AI generates it, you tweak it, iterate. That workflow is natural for software development. Why not for physical object design?

The Technical Approach

I’m starting with a code-based approach. Tools like OpenSCAD and CadQuery already let you define 3D models programmatically. The problem is that writing that code requires expertise.
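To make “parametric” concrete, here’s a minimal plain-Python sketch of the idea. This is not CadQuery or OpenSCAD syntax, and every name and dimension is made up for illustration: the design is a handful of parameters, and the geometry is derived from them, so changing one value updates everything downstream.

```python
import math
from dataclasses import dataclass

@dataclass
class PhoneStand:
    """A toy parametric model: all geometry derives from a few inputs."""
    phone_width: float = 80.0   # mm
    angle_deg: float = 45.0     # recline angle
    wall: float = 4.0           # material thickness, mm

    @property
    def back_height(self) -> float:
        # Height of the reclined back plate, assuming a 100 mm support length.
        return 100.0 * math.sin(math.radians(self.angle_deg))

    @property
    def base_depth(self) -> float:
        # Footprint needed under the reclined back, plus one wall thickness.
        return 100.0 * math.cos(math.radians(self.angle_deg)) + self.wall

stand = PhoneStand()
steeper = PhoneStand(angle_deg=60.0)  # "make it steeper" is one parameter
```

The point is the workflow, not the math: “make it thicker” becomes `wall=6.0`, and every derived dimension follows automatically.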

But generating code is exactly what LLMs are good at.

The basic loop:

  1. User describes what they want in natural language
  2. AI generates parametric CAD code (CadQuery/OpenSCAD)
  3. Code compiles to a 3D model for preview
  4. User refines through conversation
  5. Export to STL for printing/manufacturing
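The loop above can be sketched in code. Everything here is a stand-in: `call_llm` is a placeholder for whatever model API ends up behind it, and the “generated” CadQuery code is returned as plain text rather than executed, so nothing beyond the standard library is needed.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (Claude, GPT, etc.).
    Returns canned CadQuery-style code purely for illustration."""
    return (
        "import cadquery as cq\n"
        "result = cq.Workplane('XY').box(60, 90, 8)  # base plate\n"
    )

def generate_model(description: str, history: list[str]) -> str:
    """Step 2: turn natural language (plus prior turns) into CAD code."""
    prompt = "\n".join(history + [description])
    return call_llm(prompt)

def refine(code: str, feedback: str, history: list[str]) -> str:
    """Step 4: the user says 'make it thicker'; re-prompt with context."""
    history.append(feedback)
    return generate_model(feedback, history)

history: list[str] = []
code = generate_model("a phone stand with a 45-degree angle", history)
code = refine(code, "make the base thicker", history)
# Steps 3 and 5 (compile a preview, export STL) would run `code` in a
# CadQuery environment and call its STL exporter.
```

The interesting engineering is in the parts stubbed out here: validating that the generated code actually compiles, and feeding compile errors back into the conversation.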

Why Now?

Three things have converged:

  1. LLMs are actually good at code generation — Claude and GPT-4 can write complex parametric geometry
  2. 3D printing is mainstream — millions of people want to make custom objects but can’t use traditional CAD
  3. The incumbents are slow — their business model depends on expensive licenses and training programs

What I’m Building

I’m going to build this in public. Starting with a simple prototype:

  • Text input → CadQuery code generation → STL output
  • Basic preview rendering
  • Iteration through conversation
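On the output end of that pipeline, STL is forgiving: the ASCII variant is simple enough that a minimal writer fits in a few lines. This is a from-scratch sketch for a single triangle, not CadQuery’s actual exporter.

```python
def stl_facet(v1, v2, v3):
    """One ASCII STL facet; most slicers recompute normals, so write zeros."""
    lines = ["  facet normal 0 0 0", "    outer loop"]
    for x, y, z in (v1, v2, v3):
        lines.append(f"      vertex {x:.6f} {y:.6f} {z:.6f}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def write_stl(name, facets):
    """Wrap facets in the solid/endsolid envelope of an ASCII STL file."""
    body = "\n".join(stl_facet(*f) for f in facets)
    return f"solid {name}\n{body}\nendsolid {name}\n"

# A single right triangle in the XY plane:
stl_text = write_stl("demo", [((0, 0, 0), (10, 0, 0), (0, 10, 0))])
```

In practice CadQuery handles this export itself; the point is that there’s no proprietary binary blob anywhere in the chain, from prompt to printable file.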

If it works, there’s a real business here. If it doesn’t, at least I’ll learn something.

Follow along. I’ll share the wins and the failures.


This is post #1 of building in public. Next up: actually writing some code.