What I Learned After 12,000 Hours with AI
By Matt Martin, Founder of Medware Solutions
At the end of last year, I promised on LinkedIn that I'd run an AI training course. Life got in the way — client work, product launches, the usual chaos of running a health tech company in Sydney.
So here's the next best thing: everything I've learned after 12,000+ hours with AI, written down properly.
I was in my 50s with no formal developer training when I first opened ChatGPT in November 2022. Three years later, I've built and shipped multiple production software products — Medflow (clinical workflow management) and Medcast (medical education media). Real products, used by real businesses.
I'm not writing this to impress anyone. I'm writing it because most of what you read about AI is either breathless hype or academic jargon, and neither helps if you're trying to actually use the stuff.
This is what I've learned. Practically. From the trenches.
1. The Basics Nobody Explains Well
Let's start with what these things actually are, because most explanations are either dumbed down to uselessness or buried in technical language.
Large Language Models (LLMs) are pattern-completion engines. They've been trained on enormous amounts of text — books, code, websites, conversations — and they've learned to predict what comes next in a sequence. That's fundamentally it. They don't “think” the way you and I do. They're extraordinarily sophisticated pattern matchers.
But here's what that undersells: the patterns they've learned are so rich and so deep that the output often looks indistinguishable from genuine reasoning. When Claude writes a complex function or GPT drafts a legal summary, it's not copying something it memorised. It's generating new text based on patterns it learned during training.
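To make "pattern completion" concrete, here's a deliberately tiny sketch: a word-level predictor built from nothing but counts of which word follows which. Real LLMs use neural networks with billions of parameters and predict sub-word tokens, not whole words, but the core task is the same shape — and the split between the counting step ("training") and the lookup step ("inference") mirrors the distinction covered below. The corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training corpus" (invented for illustration).
corpus = "the patient saw the doctor and the doctor saw the chart".split()

# "Training": learn the patterns, i.e. count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """'Inference': complete the pattern with the most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "doctor" follows "the" most often in this corpus
print(predict_next("saw"))  # "the"
```

Note that calling `predict_next` never changes `following` — just as chatting with a model never changes its weights.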
The major players right now:
Claude (made by Anthropic) is my go-to for coding and complex reasoning. It understands software architecture, writes clean code, and handles nuanced instructions better than anything else I've used.
GPT (OpenAI) is the household name. Strong all-rounder, particularly good for writing tasks — emails, content, summaries. It has a natural flow that's hard to beat.
Gemini (Google) has a massive context window, meaning you can feed it huge documents — entire codebases, 500-page PDFs — and it handles them well. Great for research and analysis.
Llama (Meta) is open source, which means you can run it locally on your own hardware. Important for privacy-sensitive work and for the open-source community pushing things forward.
Grok (xAI, Elon Musk's company) is the newest serious contender. Strong reasoning, real-time web access, and it's improving fast. One to watch.
DeepSeek (Chinese lab) shocked the industry with models that rival the best at a fraction of the training cost. Open weights, strong at coding and maths. A reminder that this isn't just a Silicon Valley race.
Qwen (Alibaba) and Mistral (French startup) round out the global picture — both producing excellent open-weight models that push the whole ecosystem forward.
One distinction that trips people up: training versus inference. Training is when the model learns — it costs millions of dollars and takes months. Inference is when you use it — that's the chat, the API call, the prompt. When you're talking to ChatGPT, you're not teaching it anything. You're using what it already knows. Your conversations don't change the model.
I learned all of this in my 50s. Age is irrelevant. Curiosity isn't.
2. What Is an Agent and Why It Matters
For my first year with AI, I was a copy-paste merchant. Ask ChatGPT a question, copy the code, paste it into my editor, run it, find the error, go back to ChatGPT, paste the error, get a fix, copy, paste. Dozens of browser tabs. Constant context-switching.
Then I discovered agents, and everything changed.
An agent is AI that doesn't just answer questions — it takes actions. It can read your files, write code, execute commands, browse the web, manage tasks. It operates in your environment, not just in a chat window.
Claude Code lives in my terminal. I describe what I want, and it reads my existing codebase, understands the architecture, writes the new feature, runs the tests, and commits the code. It's not perfect — I review everything — but it's like having a tireless junior developer who knows every file in the project.
Cursor is an AI-powered code editor that understands your whole project. You can highlight code, ask it to refactor, and it does it in-place.
The shift from chatbot to agent is the biggest leap in practical AI usefulness I've experienced. If you're still just having conversations with AI, you're using maybe 10% of what's available.
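Under the hood, the chatbot-to-agent shift is mostly a loop: ask the model what to do, actually do it, feed the result back, repeat. Here's a minimal sketch of that loop. The `ask_model` function is a hypothetical stand-in (a real agent would call Claude's or OpenAI's API there), and this is nothing like Claude Code's actual implementation — it just shows the shape of the idea.

```python
import subprocess

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call. Here it just
    returns a canned shell command so the sketch is runnable."""
    return "echo 'tests passed'"

def agent_step(goal: str, max_iterations: int = 5) -> str:
    """Minimal agent loop: ask the model for a command, execute it,
    feed the output back as context, repeat until done."""
    context = goal
    for _ in range(max_iterations):
        command = ask_model(context)
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        output = result.stdout + result.stderr
        if "tests passed" in output:   # crude success check
            return output.strip()
        context += f"\nCommand output: {output}"
    return "gave up"
```

The chatbot workflow makes *you* the loop — copy, paste, run, paste the error back. An agent closes the loop itself.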
3. The Right AI for the Right Job
“Which AI should I use?” is the most common question I get, and the honest answer is: it depends entirely on what you're doing.
For coding: Claude, and it's not close.
For general writing: GPT still has the smoothest output for emails, marketing copy, blog posts, and content creation.
For research and large documents: Gemini. Google's context window is enormous.
For images: Midjourney for artistic, Flux for photorealism.
For video: Sora and Runway are leading the pack.
The real skill isn't knowing which model is “best.” It's knowing which model to reach for based on what you're doing right now. I switch between models multiple times a day.
4. What Nobody Tells You About Using AI Well
Here's the single most valuable lesson from 12,000 hours:
Don't chase a poor response. Just start again.
When an AI gives you a mediocre or wrong answer, your instinct is to correct it. And sometimes that works. But often the AI has committed to an approach, and it will keep trying to make that approach work rather than stepping back and rethinking.
Once a model commits, it doesn't stop. It'll refactor, adjust, add workarounds, all to salvage its initial direction. I've watched AI models spend 20 iterations trying to fix something that was wrong from the first line.
The fix? Start a fresh conversation. Give it a cleaner prompt with better context. You'll get a better answer in one shot than you'd have gotten in 30 rounds of correction.
Prompting is a real skill. The difference between a vague prompt and a precise one is the difference between useless and extraordinary output. Be specific. Provide context. Give examples. Tell it what role to adopt. Tell it what to avoid.
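Here's a crude way to see that advice as a checklist. The two prompts below are invented examples (the clinic details are hypothetical, not from any real system); the `score_prompt` function just counts how many of the ingredients above — role, context, task, constraints — are present.

```python
# Invented example prompts for illustration.
vague = "Write me some code for patient bookings."

precise = """
You are a senior Python developer on a clinic booking system.

Context: bookings have a patient ID, a clinician ID, and a
15-minute time slot.

Task: write a function that rejects double-booked slots.

Avoid: inventing new endpoints or adding dependencies.
""".strip()

def score_prompt(prompt: str) -> int:
    """Crude checklist: how many prompt ingredients are present?"""
    ingredients = ["You are", "Context:", "Task:", "Avoid:"]
    return sum(1 for item in ingredients if item in prompt)

print(score_prompt(vague), score_prompt(precise))  # 0 4
```

The vague prompt scores zero out of four; the precise one hits all four. The model can only work with what you give it.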
The gap between people who say “AI is overhyped” and people who say “AI changed my life” is almost always the gap in how they use it, not the technology itself.
5. Measure Twice, Cut Once
This lesson cost me hundreds of hours before I learned it.
Early on, my workflow was: have an idea, describe it to AI, start building immediately. Three hours later, I'd scrap it and start over with a better understanding of what I actually wanted.
Then I started planning first. Not big, formal planning documents. Just 10 minutes of structured thinking before touching any code.
The difference was dramatic. Projects that used to take 8-12 hours of iteration now take 2-3 hours of focused building.
This pattern was so consistently valuable that I built a tool around it. Framewright (framewright.site) is a free, open-source planning tool that helps you create structured specifications before you write code.
No sign-up required. No paywall. It's open source. The old carpenter's rule applies perfectly to AI: measure twice, cut once.
6. What's Possible Now and What's Coming
When I started in November 2022, ChatGPT could write basic code but struggled with anything complex. Image generation produced nightmare fuel. Video generation didn't exist. The idea of an AI agent that could autonomously navigate a codebase was science fiction.
Today — just over three years later:
- AI writes production-quality code across full-stack applications.
- Image generation is indistinguishable from photography in many cases.
- Video generation produces cinematic-quality footage from text descriptions.
- AI agents manage complex multi-step workflows with minimal oversight.
- A 60-year-old non-developer has multiple production software products in the market.
What this means for non-developers: If you have domain expertise — in medicine, law, education, finance, anything — you can now build the tools your industry actually needs. The people who understand the problems are no longer blocked by the inability to code the solutions.
That's not some future prediction. I'm living proof of it, right now.
Where This Leaves Us
Three years ago, building software required years of training. Now it requires clarity of thought and willingness to learn. The tools do the rest.
The people who'll thrive aren't the ones who know the most about AI. They're the ones who know the most about their own domain and learn enough about AI to bridge the gap.
I started in my 50s with no coding background, and now I have production software in the market, an open-source tool helping others build better, and more energy for this work than I've had for anything in decades.
The best time to start was three years ago. The second best time is today.
Matt Martin is the founder of Medware Solutions, building Medflow (clinical workflow management) and Medcast (medical education media). He's been using AI daily since November 2022. Find his free planning tool at framewright.site. For 1-on-1 AI training or help building with AI, reach out at matt@medware.com.au.