Discussion of AI is all around us, but in my experience, practical guidance rooted in specific use cases is surprisingly rare. After spending months deep in the weeds of a massive documentation migration with AI as my assistant, I learned some hard-won lessons that I think others could benefit from.
If you work in content engineering or technical documentation, or you're simply wondering how AI holds up in a complicated real-world project, here is my take on what worked and what didn't.
The project context
I'm a DITA information architect on the Information Experience team at Splunk. DITA, short for Darwin Information Typing Architecture, is an XML-based open standard for structuring and managing content.
We recently tackled the migration of three large documentation websites into a single portal backed by a DITA component content management system (CCMS). The timeline was tight, and almost all of the resourcing was internal. The migrations were complex and business-critical, and they required careful planning and execution.
I originally planned only to support the migration of the smaller, non-versioned site. Once that was going well, I was asked to lead the much larger second migration. (The third site was handled by another team.) Together, these two migrations meant wrangling about 30,000 HTML files and two very different site architectures, and adapting the existing Python migration scripts to handle each site while establishing processes for reviewing and cleaning the content.
I want to be clear that AI did not do this project for me. It allowed me to work faster and more efficiently, while I did the planning, architecture, and problem-solving. Used effectively, AI became a power tool that dramatically accelerated the work, but it never replaced the need for human judgment and oversight.
During this project, I used the then-current GPT-4 models through an internal, chat-based deployment at Cisco. These days I work more in editor-based tools such as GitHub Copilot. Still, the lessons I learned should apply to the current (mid-2025) state of the art, with a few caveats that I note where applicable.
How I used AI effectively
Prompting
One of the first lessons I learned was to treat prompts the way I approach technical documentation: with clarity, consistency, and comprehensibility. Before consulting the AI, I would sketch out what needed to happen, break it into granular steps, and write a prompt that left as little as possible to the imagination.
If I was unsure of a solution, I would first use the AI as a brainstorming partner, then follow up with a precise prompt for the implementation.
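For illustration, a prompt in that granular style might look something like the following. The function and details here are hypothetical, not taken from the actual project:

```
Write a Python function extract_breadcrumbs(html_path) that:
1. Opens the HTML file at html_path using UTF-8 encoding.
2. Finds the <nav> element with the class "breadcrumbs".
3. Returns the link texts as a list of strings, in document order.
4. Returns an empty list and prints a warning if no such element exists.
Do not modify any other part of the script.
```

Spelling out the behavior for the missing-element case, and saying what not to touch, is exactly the kind of detail that keeps the model from improvising.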
Iterative development
The migration automation was never a single script; it grew into a suite of Python tools that parsed navigation trees, loaded HTML, converted it to DITA XML, split topics into smaller units, mapped content, and diffed versions. Each script started small and grew as I layered in functionality.
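To give a flavor of what one such pipeline step might look like, here is a minimal sketch that loads an HTML page and emits a skeletal DITA topic. It assumes BeautifulSoup is available, and the element mapping is purely illustrative, not the actual migration logic:

```python
from bs4 import BeautifulSoup
from xml.sax.saxutils import escape

def html_to_dita(html: str, topic_id: str) -> str:
    """Convert one HTML page into a bare-bones DITA topic (illustrative only)."""
    soup = BeautifulSoup(html, "html.parser")
    heading = soup.find("h1")
    title = heading.get_text(strip=True) if heading else "Untitled"
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
    body = "\n".join(f"    <p>{escape(p)}</p>" for p in paragraphs)
    return (
        f'<topic id="{topic_id}">\n'
        f'  <title>{escape(title)}</title>\n'
        f'  <body>\n{body}\n  </body>\n'
        f'</topic>'
    )
```

In practice, a step like this starts out this small and then accretes handling for tables, images, cross-references, and edge cases over many iterations.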
I quickly learned that asking the AI for too much at once was a recipe for errors and confusion. Instead, I added functionality in small, well-defined increments. Each function or fix got its own prompt and its own GitLab commit. This made it easy to roll back when something went sideways and to trace exactly what each change accomplished.
Debugging
Even with good prompts, AI-generated code rarely worked on the first try, especially as the scripts grew. My most effective debugging tool was print statements. If the output wasn't what I expected, I would sprinkle print statements throughout the logic to watch what was happening. Sometimes I would also ask the AI to explain the code back to me line by line, which often surfaced subtle logic errors or edge cases I hadn't considered.
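As a concrete (and hypothetical) illustration of that print-first approach, tracing every iteration of a loop makes it obvious exactly where the logic diverges from your expectations:

```python
def split_topics(sections):
    """Merge deeply nested sections into their parents (illustrative logic)."""
    topics = []
    for i, section in enumerate(sections):
        # Trace each iteration so unexpected output is easy to localize.
        print(f"[{i}] title={section['title']!r} depth={section['depth']}")
        if section["depth"] > 2 and topics:
            print(f"    -> merging into parent {topics[-1]['title']!r}")
            topics[-1]["body"] += section["body"]
        else:
            topics.append(dict(section))
    print(f"Split {len(sections)} sections into {len(topics)} topics")
    return topics
```

Once the step behaves, the print statements come out, or get demoted to debug-level logging.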
Importantly, this wasn't just about fixing errors; it was also about learning. My Python skills grew immensely during this process because I forced myself to actually understand every line the AI generated. When I didn't, I inevitably paid the price later, when a small improvement broke something downstream.
These days I rely on an integrated development environment (IDE) with built-in AI to speed up debugging. The principle, however, is unchanged: don't let the tooling outrun your own understanding and verification. If the AI can't debug something for you, fall back on print statements and your own ability to trace the problem to its source. And always review any code the AI produces.
Implementer, not inventor
This project taught me that AI is fantastic at taking a well-defined idea and turning it into working code. But if you ask it to design an architecture or invent a migration strategy from scratch, it will probably disappoint you. My most productive workflow was to (1) design the process myself, (2) describe it in detail, (3) let the AI handle the implementation and boilerplate, and (4) review, test, and refine the AI's output.
Version control
I can't overstate the importance of version control, even for simple scripts. Every time I added a function or fixed a bug, I made a commit. When a bug surfaced later, I could walk back through my history and pinpoint where things broke. Sure, this is basic software engineering, but when you're working with AI it's even more critical: the pace of change goes up, and your own memory of each modification is inevitably less exhaustive.
The net effect of these practices was speed without chaos. We delivered much faster than we otherwise could have, and the quality of the output significantly reduced post-migration cleanup.
Where AI fell short
As valuable as AI was, it had its shortcomings. The cracks began to show as the scripts grew in size and complexity:
- Context limits: As scripts grew longer, the AI lost track of earlier sections of the code. It could add new standalone functions, but weaving new logic into existing, interconnected code often failed unless I explained exactly where and how to make the change. I should note that today's models, with their larger context windows, may reduce some of the problems I ran into with the migration scripts. I suspect, though, that it still pays to be as specific as possible about which functions and which logic you need updated.
- Inability to find a working implementation: I found that the AI sometimes simply could not solve the problem as prompted. If I asked for a change and it failed three or four times, that was usually a signal to step back and try something else, whether that meant prompting for an alternative approach or writing the code myself.
- System understanding: Some bugs and edge cases required a solid understanding of our systems, such as how the CCMS processes ID values or how our rules classified things across systems. This was a key area where the AI could not help me.
What I'd do next time
Here’s my advice if I had to do it again:
- Settle on core libraries and conventions early: Decide up front on your logging, naming schemes, and file structure, and include them in every prompt. Refactoring scripts midstream cost me real time. These days, working in an editor-based tool that is aware of your entire pipeline will help keep your libraries and conventions consistent from the start.
- Sanitize everything: File names, IDs, casing, and other seemingly small details can cause major downstream problems. Bake these rules into your prompt boilerplate (see the sketch after this list for the kind of helper I mean).
- Audit your content: Don't assume that all your documents follow the same patterns, and certainly don't assume that the AI understands the nuances of your content. Find out early where the outliers are. This upfront work will save you time in the long run.
- Document the complex parts: For any logic that takes more than a few minutes to understand, write a thorough explanation that you can come back to later. There were times when I had to analyze intricate parts of the scripts weeks later, and a detailed note would have gotten me back up to speed much faster.
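On the sanitization point above, here is the kind of small helper I mean. This is a minimal sketch; the specific rules shown (lowercase, ASCII, hyphens) are examples rather than the exact conventions we used:

```python
import re
import unicodedata

def sanitize_id(raw: str) -> str:
    """Normalize arbitrary text into a predictable file name or topic id."""
    # Strip accents, then collapse anything outside [a-z0-9] into hyphens.
    ascii_text = unicodedata.normalize("NFKD", raw).encode("ascii", "ignore").decode()
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_text.lower()).strip("-")
    # XML ids can't start with a digit, so prefix when necessary.
    return slug if slug and not slug[0].isdigit() else f"id-{slug}"

print(sanitize_id("Überblick: 2.0 Release Notes"))  # uberblick-2-0-release-notes
```

Running every generated file name and id through one function like this, and telling the AI about it in your boilerplate, prevents a whole class of downstream breakage.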
One tip beyond AI: Keep copies of your source and migrated content in repositories even after loading the converted content into your production tools. I promise you'll be glad you did.
A partner, not a replacement
Reflecting on the project, I can say with confidence that AI did not replace my critical thinking. Instead, it amplified my skills and helped me work at a speed and scale that would have been difficult to achieve alone, while streamlining post-migration cleanup. But whenever I leaned on it too much without careful planning, I lost time and had to backtrack.
The real value came from pairing my domain knowledge and critical thinking with AI's ability to iterate and implement quickly. Used thoughtfully, AI helped me deliver a project that became a career milestone.
If you're facing a daunting migration of your own, or just want to get more out of AI in your workflow, I hope these lessons save you some pain, and maybe even inspire you to take on a challenge you thought was too big to tackle.