I have started writing this post a few times, and I decided this morning I need to just get it out there. I use AI at work. A lot.
This was not true six months ago. The tools are improving at a speed I find both impressive and kind of scary. This week was the first week where I started to worry a bit about how AI is affecting my output, my coworkers' output, and our company as a whole. The worry came from a stark realization of how much more code we're putting out compared to what we used to. Some of that is due to company growth (since January our engineering team has doubled in size), but a lot is also due to the tools just getting better.
I don't think I know yet how I feel about all of these changes, so I decided instead I would just talk about the tools that are having a positive impact on my day-to-day work, and how we're adjusting to them. I also want to write this as a snapshot. My good friend @pliable asked me "do you think there have been any significant advances in AI over the past few months?" I responded yes, but I also don't have a great snapshot of where I was three months ago. It's a crazy time: I used to be able to hold opinions about tech and tools for one to two years. Now if I hold them for more than three months, I find I am regularly wrong, based on how fast things are changing.
First off, context: I currently work at Laurel. I am an engineering manager for Infrastructure and Security. Coding is about 20% of my job. As a company, we are investing heavily in finding ways to make AI useful without it being dangerous or distracting. We have two people dedicated to doing that full time. One of them wrote about his work in "Force multiplying with AI at Laurel".
How I write code: I code in TypeScript, Go, Terraform, and occasionally Python. https://wakatime.com/@icco claims my time in these languages is... less than I'd like, but it also doesn't count time spent in Claude.
My Cursor workflow: z $project, then git co -b $issue, and then cursor .. I don't write long complicated prompts. Instead I usually just click "clear all chats" to get a clean workspace, then paste a log message or stack trace into the chat box and ask "why is this happening?" 95% of the time it finds where the issue is coming from, and then I read the code. If I see an easy fix, I make it. If I don't understand, I ask Cursor to suggest a change.

My Claude Code workflow: cat ~/Desktop/migrate.txt | pbcopy, then z $project, then git co -b $task, and then claude, where I paste the prompt. I press enter and let it run until it has questions, at which point my terminal beeps. I answer them and let it keep running.

z: This is zoxide. It jumps to my most used directories.
git co: This is an alias for git checkout.
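Put together, a typical session looks roughly like this (the project and branch names here are made-up placeholders; migrate.txt is whatever prompt file I wrote ahead of time):

# one-time setup for the git co alias
git config --global alias.co checkout

# Cursor flow: jump to the repo, make a branch, open the editor
z my-service
git co -b fix-login-timeout
cursor .

# Claude Code flow: copy the prepared prompt, then start claude in the repo
cat ~/Desktop/migrate.txt | pbcopy
z my-service
git co -b db-migration
claude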
Example Claude prompt:
Upgrade as many of the packages in yarn outdated as possible. After each package upgrade, build, test, lint, make any changes needed, commit and push.
Do not upgrade anything related to Jest. Also do not upgrade reflect-metadata or class-validator.
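For reference, that prompt is asking Claude to run roughly this loop once per package, which I would otherwise do by hand (this assumes Yarn classic and the usual build, test, and lint scripts in package.json; the package name is a placeholder):

yarn outdated
yarn upgrade some-package --latest
yarn build && yarn test && yarn lint
# fix anything the upgrade broke, then
git add -A
git commit -m "upgrade some-package"
git push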
My general policy is that if writing the prompt takes more than one minute, I should be doing the task myself.
We are still evaluating tools for AI code review. We currently get review comments from Cursor Bug Bot, GitHub Copilot, and Sentry AI. Cursor is by far the best of the three so far.
These are technical tools I use when I want to understand how a service is doing, understand a claim from another engineer, or investigate an incident.
This category is for producing written output, things like docs. It is the category I hate the most and use the least. Right now, I really dislike the tone of all the writing I've seen come out of LLMs and other AI tools, so instead I just use the tools for researching what I want to say, not for writing what I will say.
I have tried a ton of other tools that have failed me greatly. The ones above I keep coming back to, though, despite the issues I've had, which says something.
Two issues I do want to solve with AI but have not put time into are natural language local device search and an AI for suggesting CLI tools. Doing both from the command line (I mostly use zsh from iTerm2 on OSX) would be nice.
I also would love a good AI diagram tool, but I am coming to the realization that I am bad at describing diagrams, and probably should just stick to drawing things.
I hope this was interesting. I'm very curious to see how all of this will change in the next six months.
/Nat