Coding With AI All The Time
I’ve been a programmer for a long time. I consider myself pretty good at it. Perhaps you’ve even used something I’ve done in the past, maybe a keyboard or a web browser.
Well, over the last year or so, I’ve made the biggest-ever change to the way I write software. I now code with AI assistance all the time. Here’s why. Here’s how.
Why
I’m more productive with AI assistance. Today, I write fewer lines of code by hand, the old-fashioned way, than ever before, yet I create more code than ever. What’s more, as far as I can tell, there is no detectable reduction in quality. I’m just faster at making changes, fixing bugs, and turning out more features.
I just completed a two-day project, and my git pull request showed: +5,643 −7,952 in 33 commits. It has never felt easier to change ~13.5k lines of code in such a short span of time. It’s not merely about lines of code, either. Nor about making the process easier in a superficial or simplistic way. It’s a more profound change. Coding with AI assistance frees up time so I can think.
I’ve always considered software to be a form of thinking: ideas, plans, and instructions distilled down into work I want the computer to do. In the past, and over the course of my career, making software has been about organizing my thinking at three different levels of abstraction.
- Sometimes musing about ideas for features the software could or should offer.
- Sometimes developing plans for realizing these features in the form of frameworks, libraries, modules, or classes that embody the feature ideas.
- Sometimes writing out the individual instructions as lines of code to make the data structures and algorithms to implement the plans for the frameworks, libraries, etc. that can deliver the feature ideas the software should offer.
Juggling these different abstraction layers is tricky. Doing it well is difficult. Switching between them has been a constant fact of my working life as a programmer.
This is what AI-assisted software development has changed. Now, in my daily work, I let the AI handle the instruction level of abstraction, at least the tougher part of it. I still read and review all the code I have the AI produce for me—but reading and reviewing code is so much easier than writing it.
I’ll show you what I mean. Confirm that all these animals are mammals:
cat, mouse, horse, turtle, cow, pig, donkey, rabbit, dog, kangaroo
See the error? Of course you do. Now, think of ten kinds of birds. Not as easy as checking the mammal list, eh?
This is the difference. This explains the productivity boost. I still think of the feature ideas. I still plan how I want the features to be implemented. I still read over all the code before I commit—and I still take the same responsibility for the code I merge—but I don’t write each and every if/then or function call anymore. No more typing out boilerplate code, either. I no longer have to. The AI does this grunt work for me.
My mind feels freed up. I remain at the higher levels of abstraction, with more time to think about ideas and plans. There’s less cognitive overhead in attempting things, so I attempt more things. I’m more willing to try out how my software might be better if I added a feature, changed an algorithm, or refactored a library, and when an experiment doesn’t work, I roll back with less worry about having to throw away a bunch of work I just concentrated hard to bring into being. It’s just easier to make changes. With AI assistance, I write more and better software than I ever have before.
So that’s why.
How
Mostly python
These days, I do most of my coding in python. I don’t love the language—maybe someday I’ll say why in more detail. However, since the models know python so well, it is possibly the most effective language to use for AI coding, more so than other languages.
I use Cursor with claude-4-sonnet MAX in Agent mode.
This Cursor combination has gotten good enough that I don’t feel a pressing need to check every day or week for the latest new thing. This isn’t a cheap option, but I spend the majority of my waking hours writing code. It’s my work and my hobby. It’s what I care about doing. So, the expense is worth it to me.
I have the AI write PRDs in the form of phases and checklists.
I describe my ideas in writing with as much detail as befits a statement of goals and a plan for how I want those goals achieved. I tell the AI what I want and what I don’t. For anything even moderately complex, I have the AI write a Product Requirements Document (PRD) before I set it off to write or change code.
The AI tends to get all agile-ish and lard up the document with useless bloat about “success factors”. After all, that’s what the models see when they scour the internet. I warn it away from that. For example, I’ll prompt the AI like so:
I’m planning to consolidate and modernize the logic responsible for transforming internal representations of structured inputs into OpenAPI-compatible schema components. The current codebase has this logic scattered across multiple modules, with redundant implementations, tight coupling, and difficult-to-maintain interdependencies. I want to unify this behavior into a single, coherent interface—ideally encapsulated in a purpose-built class that can handle schema generation, validation, formatting, and serialization of complex, nested input types. This should include support for type constraints, operator validation, enum handling with upper bounds, default value management, and structured formatting (especially for time-based and numeric fields). The solution should produce clean, reusable schema definitions that are ready for OpenAPI integration and tool generation. After the core functionality is in place, I’d like to migrate existing helper functions and legacy publishing logic into the new interface. Then, update all call sites throughout the system to use the unified API instead of the fragmented old logic. Design the solution for long-term maintainability: minimal redundancy, clear boundaries, strong typing, and compatibility with automated documentation and testing. The project is a FastAPI-based backend, using structured type models and requiring OpenAPI output compatibility. The final result should improve traceability, reduce technical debt, and centralize control over schema generation logic. So, let’s proceed methodically. Write a PRD to organize the proposed work using the `@asana.md` file I’ve provided. Break down this work into pieces we can apply in an incremental fashion, retaining the existing code paths as we develop new ones, and giving me the option to “cut over” when a refactored code path proves its worth over what exists now. Do not lard on agile-isms or “success factors”. Just a simple, action-oriented set of phases and checklist items.
The response from the AI will look something like this:
# Project Development Log

## Phase 1: Build Core Utility Class (Week 1)
**Status**: READY TO BEGIN

### 1.1 Setup and Planning
- [ ] Create base module file
- [ ] Define class structure with documentation
- [ ] Plan imports to avoid circular dependencies
- [ ] Determine type sharing strategy (additional module not needed)

### 1.2 Core Logic Implementation
- [ ] Implement primary method with full feature support:
  - [ ] Schema compatibility validation
  - [ ] Type constraint enforcement
  - [ ] Operator compatibility checks
  - [ ] List and nested type handling
  - [ ] Extended mappings for structured types
  - [ ] Size limits for enumerations
  - [ ] Canonical identifier serialization
  - [ ] Default value logic
  - [ ] Format annotations for structured data
  - [ ] Named schema component generation

## Phase 2: Migrate and Consolidate Legacy Functions

### 2.1 Function Migration and Refactoring
- [ ] Move legacy slot-to-schema function into new utility class
- [ ] Migrate schema builder for publishing flow
- [ ] Remove obsolete code from original location
- [ ] Ensure new implementation maintains backward-compatible behavior
- [ ] Validate functionality via existing test suite

## Phase 3: Update Downstream Consumers

### 3.1 Caller Refactor and Verification
- [ ] Replace old utility calls with new class methods
- [ ] Update internal references and imports
- [ ] Remove deprecated dependencies
- [ ] Run and verify endpoint behavior in integration tests
- [ ] All downstream systems migrated and stable
(Aside: Quite obviously, this isn’t actually Asana. Originally, calling the file `asana.md` was a little workplace humor, coupled with a sense of giving the AI some context about what we wanted out of the plan. This workflow has proven mightily effective for tackling projects spanning a few hours to several days—the longest `asana.md` I saved has over 300 completed checklist items! So I keep using the name with a wink and a touch of superstition. Hat tip to Felipe for this idea.)
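To give a flavor of the end state that prompt is driving toward, here is a minimal sketch of the kind of unified schema-builder class it describes. Everything here (`SchemaBuilder`, `build_component`, the type mapping) is hypothetical, for illustration only, not the actual code:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class SchemaBuilder:
    """Hypothetical unified interface for turning internal field definitions
    into OpenAPI-compatible schema components."""

    max_enum_size: int = 100  # assumed upper bound for enum handling

    def build_component(self, name: str, fields: dict[str, Any]) -> dict[str, Any]:
        # Produce a named, reusable component suitable for an OpenAPI components/schemas entry.
        properties = {key: self._field_schema(spec) for key, spec in fields.items()}
        return {name: {"type": "object", "properties": properties}}

    def _field_schema(self, spec: Any) -> dict[str, Any]:
        # Map internal type names to OpenAPI types; time-based fields get a format annotation.
        mapping = {
            "str": {"type": "string"},
            "int": {"type": "integer"},
            "float": {"type": "number"},
            "bool": {"type": "boolean"},
            "datetime": {"type": "string", "format": "date-time"},
        }
        if isinstance(spec, list):  # enum handling, with an upper bound on size
            if len(spec) > self.max_enum_size:
                raise ValueError("enum exceeds the configured size limit")
            return {"type": "string", "enum": spec}
        return mapping.get(spec, {"type": "string"})
```

Under those assumptions, a call like `SchemaBuilder().build_component("Event", {"title": "str", "start": "datetime", "status": ["draft", "published"]})` would return a named component ready to slot into an OpenAPI `components/schemas` section.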
I provide tools/scripts it can run to perform tests or confirm results.
See the `asana.md` example above. Another example:
The code responsible for this work is attached. I want to add a *comprehensive* set of test cases that exercises this code, including a wide variety of types and their edge cases. Use `@test_infer.py` to help you write good tests.
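For a sense of what comes back from a prompt like that, here is a minimal sketch of the sort of parametrized, edge-case-heavy tests the AI tends to produce. The module and function names (`myproject.infer`, `infer_type`) are assumptions for illustration, not the actual code behind `@test_infer.py`:

```python
import pytest

from myproject.infer import infer_type  # hypothetical module and function


@pytest.mark.parametrize(
    "value, expected",
    [
        (42, "integer"),
        (3.14, "number"),
        ("hello", "string"),
        (True, "boolean"),      # bool is a subclass of int, so it must be checked first
        ([1, 2, 3], "array"),
        ({"a": 1}, "object"),
        (None, "null"),
    ],
)
def test_infer_type_basic(value, expected):
    assert infer_type(value) == expected


def test_infer_type_rejects_unsupported():
    # An arbitrary object should fail loudly rather than silently mapping to a type.
    with pytest.raises(TypeError):
        infer_type(object())
```

Parametrizing over a spread of values keeps the file short while still covering the variety of types and edge cases the prompt asks for.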
I instruct the AI to proceed phase by phase
I refer to the specifics of the PRD, and I ask it to stop in between so I can review intermediate results.
OK. The Phase 1 code looks good. Proceed to Phase 2: Migrate and Consolidate Legacy Functions. Stop before going on to Phase 3 so I can review.
I liberally insert new PRD phases or skip/delete existing ones as needed.
I do this manually, or I ask the AI to edit the PRD. I keep the AI on track with reminders about priorities I have for the plan. Here’s an actual example from this past week, rather than the stylized one above.
OK. Here’s what I actually want for Phase 3f… simplification of service creation in `services.py`. We have a lot of complex code that is duplicated over and over. Each service initialization follows the same basic pattern. Here’s one example:

    @classmethod
    def object_storage(cls) -> 'ObjectStorage':
        if not _services_started:
            raise RuntimeError("ServicesAccessor not initialized. Call start_for_fastapi() or init_for_offline_use() first.")
        # In developer mode (both FastAPI and offline), use lazy initialization
        if _developer_mode and not _object_storage_initialized:
            cls._ensure_service_initialized_sync('object_storage')
        if _object_storage is None:
            raise RuntimeError("Object storage not available after initialization attempt.")
        return _object_storage

Now, ideally, what I want is a @classmethod on the `ObjectStorage` class itself: `get_service`, which will return a shared instance that’s started. That’s what I want for *all* services. There’s also this complex string-based initialization in `_ensure_service_initialized_sync`. `_ensure_embeddings_initialized` is junk. I don’t want that. I don’t want `_create_service_instance_only` either. Just call `get_service` (once it’s implemented). Reduce the complexity. Make the access and initialization patterns simpler, while preserving the three key tenets of this refactor:

1. Lazy initialization of individual services for developer mode in FastAPI/uvicorn
2. Full/eager init of all services for production in FastAPI/uvicorn
3. Lazy offline usage of individual services for standalone scripts outside of FastAPI/uvicorn
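To make the target concrete, here is a minimal sketch of the `get_service` pattern that prompt asks for. Only the shape (a classmethod returning a shared, started instance) comes from the prompt; the lock and the `start()` hook are assumptions for illustration:

```python
import threading


class ObjectStorage:
    """Sketch of the target pattern: the service class owns its own shared, started instance."""

    _instance: "ObjectStorage | None" = None
    _lock = threading.Lock()

    @classmethod
    def get_service(cls) -> "ObjectStorage":
        # Return the shared instance, creating and starting it lazily on first use.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:  # double-checked locking
                    instance = cls()
                    instance.start()       # assumed startup hook (connect clients, etc.)
                    cls._instance = instance
        return cls._instance

    def start(self) -> None:
        # Placeholder for whatever startup the real service needs.
        ...
```

With every service exposing the same classmethod, eager production startup can simply call `get_service()` on each service when FastAPI boots, while developer mode and standalone scripts get lazy initialization by calling it on first use, which keeps the three key tenets above intact.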
I often remind the AI of my goals as we go.
See the “… while preserving the three key tenets of this refactor…” note above and the numbered list of “key tenets” that follows. The AI’s attention can drift. I deal with it by reminding it about the key aspects of my plans.
I have the AI create and run ad-hoc new unit tests and integration tests as we go.
It’s never been easier to write tests. I have the AI do it. I don’t spend much time maintaining a library of existing tests. I just throw away most of the old ones and cons up new ones as the need arises. This matches well with my belief that the best way to ensure your code works is to use it.
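To show the kind of ad-hoc check I mean, here is a minimal integration-style sketch for a FastAPI backend. The application import path is an assumption, and the route is just FastAPI’s built-in schema endpoint:

```python
# Throwaway integration check: hit the app through FastAPI's TestClient and confirm the
# endpoint that depends on the new code path still responds sensibly.
from fastapi.testclient import TestClient

from myproject.main import app  # hypothetical application module


def test_openapi_schema_still_publishes_components():
    client = TestClient(app)
    response = client.get("/openapi.json")
    assert response.status_code == 200
    body = response.json()
    assert "paths" in body        # the schema document is well formed
    assert "components" in body   # the (hypothetical) schema builder still publishes components
```

When a check like this has served its purpose, it gets deleted, in keeping with the throwaway approach above.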
I review everything before I commit.
As I said above, this is essential. I still have bugs and regressions, since my ideas, plans, and powers of reading and reviewing remain imperfect. I continue to test and fix as I always have. I also use the software I write, which I think is the best way to find bugs and regressions.
That’s it. Thanks for reading.
— Ken