6 Sep 2024 - Shahpur Khan

Dare to dream of even… automating it?

AI and machine learning are producing many game-changing tools for software development teams, but shockingly few address testing – the most time-consuming, taxing, tedious part of the engineering business. Anyone who’s spent time in this industry – whether an internship or a lifetime – can attest (heh) that testing takes up too much of engineers’ time while harnessing too little of their brilliance. My colleagues and I at Engine Room see wide-open opportunities in AI for testing, and we’re out to shake up this inefficient process with our latest initiative.

 

Dare to dream of automation

I’m thrilled to reveal that for a while now, Engine Room has been developing our own Test Case and Test Script Generator. This has been a great opportunity for me to apply the strong foundation and passion I’ve established in AI to something with the potential to utterly transform the workflow of development teams – just like yours! I’m very excited to dive into this, so let’s talk broadly about what this system will be, how it’s being put together, and why you should look forward to it. 
 
Picture this: you’ve just come back from a stakeholder meeting, where you received vague, ambitious requests for your latest product or project. Your team stares at your scrambled notes for a few minutes, frozen like deer in headlights at the idea of turning these into formal requirements and specs, then turning those into formal test cases, then coding up those test cases, and running that code on the finished product. All that before any changes or fixes are even made…

Does that ring a bell? I’m not surprised. But you might be when I tell you that we’re on the fast track to trimming that dreadful process down by about 50%. At its core, our system will let you input a set of formal software requirements – specifying information like the type of system, a plain-language requirement, and a REQ-ID – and receive ready-made test cases in return, complete with appropriate test steps, pre-conditions, and plain-language expected behavior. Ongoing experimentation and research are helping us decide between a couple of development strategies. Whether the approach ends up being to take a Google language model already pre-trained on a broad range of NLP tasks and fine-tune it further for test case generation – or to build a Custom GPT that delivers optimized test cases on the spot – we’ll get you up and running with strong, accurate guidelines to test your products with less time, effort, and cash.
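To make that input/output shape concrete, here’s a minimal Python sketch. The field names and the hard-coded result are purely illustrative – they are my assumptions for this post, not the Generator’s final schema, and the placeholder function stands in for the model call:

```python
# Hypothetical shapes for the Generator's input and output.
# Field names are illustrative, not the system's final schema.

requirement = {
    "req_id": "REQ-017",                  # unique requirement identifier
    "system_type": "web application",     # the kind of system under test
    "requirement": "Users must be able to reset their password via email.",
}

def generate_test_case(req):
    """Placeholder for the model call.

    In the real system this step would be performed by a fine-tuned
    language model (or a Custom GPT); here we hard-code one plausible
    output just to show the structure of the result."""
    return {
        "req_id": req["req_id"],
        "pre_conditions": ["A registered user account exists."],
        "test_steps": [
            "Navigate to the login page and click 'Forgot password'.",
            "Enter the registered email address and submit.",
            "Open the reset email and follow the link.",
            "Set a new password and confirm.",
        ],
        "expected_behavior": "The user can log in with the new password.",
    }

test_case = generate_test_case(requirement)
print(test_case["req_id"], "->", len(test_case["test_steps"]), "steps")
```

Notice that the output is structured, not free text – each test case carries its pre-conditions, ordered steps, and expected behavior, ready for a tester to follow.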

This is all Phase 1, by the way – our vision goes beyond it. Why take only written requirements as input? Why output only test cases? Later, we aim to let you input elaborate User Flows instead of formal requirements. You may even be able to forgo written requirements altogether by inputting your wireframes (UI blueprints) from a tool like Figma – and receive the same shiny test cases in return. The most daring stages of our plan involve auto-writing test scripts: having your magically generated test cases converted into real testing code, ready for execution in Microsoft’s Playwright or your favorite testing library.

Such a bold goal calls for the most reliable and innovative tools out there. As we weigh alternative project strategies, we’re also researching tirelessly to assemble the best tool suite for whichever plan we carry out. The magic started in a simple Jupyter Notebook (an .ipynb file), accessed in Jupyter itself or its cloud sibling Google Colab. Jupyter Notebooks let you use the same Python you’re comfortable with, but split your code into rearrangeable blocks called “cells” that can run in any order (or in isolation)! Next up are our trusty machine learning libraries – Scikit-learn for data processing and model training, PyTorch for deep learning and tensors, and more. Newer, stronger resources like OpenAI’s Custom GPTs and API also come into play with their model fine-tuning, integration, and experimentation services. And of course, we can’t forget Hugging Face – a community hub with the reputable Transformers library and tons of ML models… including the one we’ve been fine-tuning to build out the Test Case Generator!
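One small but essential step in the fine-tuning route is flattening each requirement and its human-written test case into the input/target text pair a sequence-to-sequence model learns from. Here’s a minimal sketch in plain Python – the prompt wording and separators are my assumptions for illustration, and the real pipeline would run these pairs through a Hugging Face tokenizer before training:

```python
def to_seq2seq_pair(requirement, test_case):
    """Flatten a requirement and its test case into the input/target text
    pair a seq2seq model is fine-tuned on.

    The prompt prefix and the '|' / ';' separators are illustrative
    choices, not the project's actual training format."""
    source = (
        f"generate test case: [{requirement['req_id']}] "
        f"({requirement['system_type']}) {requirement['requirement']}"
    )
    target = (
        "pre-conditions: " + "; ".join(test_case["pre_conditions"]) + " | "
        "steps: " + "; ".join(test_case["test_steps"]) + " | "
        "expected: " + test_case["expected_behavior"]
    )
    return {"input": source, "target": target}

pair = to_seq2seq_pair(
    {"req_id": "REQ-001", "system_type": "mobile app",
     "requirement": "The app must lock after 5 failed login attempts."},
    {"pre_conditions": ["A valid user account exists."],
     "test_steps": ["Enter a wrong password 5 times.", "Attempt a 6th login."],
     "expected_behavior": "The account is locked and an error is shown."},
)
print(pair["input"])
```

A task prefix like the one above mirrors the “many NLP exercises” style of pre-training: the model learns to treat test case generation as just one more text-to-text task.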

Another helpful ally in this project has been Gemini, Google’s new AI chatbot assistant. Spoiler alert: if you’re trying to learn AI development, AI itself might know a thing or two! By carefully describing my initiative, sub-goals, and even my opinions and preferences along the way, I’ve worked with Gemini to formally structure this project, find the right resources in seconds, and push through errors and standstills efficiently.

Engine Room’s upcoming Test Case and Test Script Generator is bound to change the way software testers and their entire teams go about business. I hope you’ll stay posted on our system and look forward to using it to cut your QA time, effort, and cost dramatically. After all, your team’s QA engineers are thorough, persevering, and organized. Why not free up their time to apply those talents to the more difficult and diverse challenges on your horizon, rather than to repetitive, tiresome testing?

This blog post serves both as the reveal of this exciting project and as the overview of a planned series of posts. In the coming entries, I’ll dive more deeply into each aspect of the Generator and its development. I’ll show off (and teach you about) the models used, the tools involved, the Generative AI technology aiding us, and the decisions I’ve made to overcome challenges along the way.

Stay (fine-)tuned!