I’m a dev. I’ve been one for a while. My boss does a lot of technology watching. He brings in a lot of cool ideas and information. He’s down to earth. Cool guy. I like him, but he’s now convinced that AI LLMs are about to swallow the world, and the pressure to inject this stuff everywhere in our org is driving me nuts.
I enjoy every part of making software, from discussing with the clients and the future users to coding to deployment. I am NOT excited at the prospect of transitioning from designing an architecture and coding it to prompting ChatGPT. This sort of black-box magic irks me to no end. Nobody understands it! I don’t want to read yet another article about how an AI enthusiast is baffled at how good an LLM is at coding. Why are they baffled? They have "AI" twelve times in their bio! If they don’t understand it, who does?!
I’ve based twenty years of my career on being attentive, inquisitive, creative, and thorough. By now, in-depth understanding of my tools and, more importantly, of my work is basically a compulsion.
Maybe I’m just feeling threatened, or turning into "old man yells at cloud". If you ask me, I’m mostly worried about my field becoming uninteresting. Anyways, that was the rant. TGIF, tomorrow I touch grass.
I pretty much agree with you and everyone else here. AI is not as useful as a lot of people are pushing it to be.
I used GitHub Copilot for several months, and some of the advanced autocompletion it can do during refactors is amazing. But I wish I could use just that as a dedicated feature (like an AI-powered, context-aware find/replace).
I found that most of the time Copilot was more distracting than helpful. I'll have 90% of a solution done in my head, and then, as I'm writing it out, Copilot will recommend something that's almost, but not quite, what I want and completely interrupt my train of thought.
When I code with AI, it seems to just move all my time from coding to debugging and reading the AI's code. I end up with a worse result than if I'd just written it all manually, and I don't internalize the structure as well.
I use ChatGPT a little differently now that I've realized this: much more like rubber-duck programming, where I set up the conversation so ChatGPT asks ME questions. For writing documentation, I've found this produces far better and more accurate results. You can even ask it for a summary at the end.
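For anyone curious, the "ask ME questions" setup can be sketched as a message template. The prompt wording below is my own guess at what works, not anything official, and the commented-out API call assumes the `openai` package and a valid key:

```python
# Minimal sketch of a "reverse rubber duck" chat setup: the model is told
# to interview you instead of answering. Prompt wording is illustrative.

def build_rubber_duck_messages(topic: str) -> list[dict]:
    """Build a chat history that tells the model to interview YOU."""
    system_prompt = (
        "You are a rubber-duck interviewer. Do not propose solutions. "
        "Ask me one short, probing question at a time about my design, "
        "and wait for my answer before asking the next. "
        "When I say 'summarize', produce an accurate summary of what "
        "I described, suitable for documentation."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Let's talk through: {topic}"},
    ]

messages = build_rubber_duck_messages("caching layer for our API")

# To actually run it (assumes the openai package and an API key are set up):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

The point of the "summarize" keyword is that the final summary doubles as a first draft of the docs, which is where I've found this the most useful.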