Building a Local Test Generator
- Maksim Murin
- Sep 5
- 5 min read
In a modern development workflow, writing and maintaining test coverage often becomes a time-consuming, repetitive task 🔁— especially for fast-moving teams working on multiple platforms like Flutter or .NET 💻. While AI-assisted tools have shown strong potential in generating boilerplate or even meaningful unit tests, most available solutions require sending source code to external APIs or cloud-based services ☁️. For many teams, including ours, that’s a deal-breaker 🚫.

We needed something different:
- A secure, internal solution where code never leaves our network
- A way to automate test generation for both individual files and large projects
- A system that could evolve with current AI capabilities and integrate seamlessly into our development pipeline
Technologies don’t stand still — and neither do we. With advancements in local LLMs like Deepseek-Coder, and tools like Ollama making them easy to host, we saw an opportunity to build a lightweight but powerful tool tailored for internal use.
That idea became our internal AI-powered test generation service — designed to be private, extensible, and developer-friendly.
🔍 How It Works
At a high level, the test generation tool operates through a clean and straightforward process, carefully designed to prioritize both usability and data privacy.
1️⃣ Everything begins when a developer opens the internal Flutter-based web interface. From there, they can choose to upload either a single source file or a complete project packaged as a ZIP archive.
2️⃣ Once the file is uploaded, the system immediately places it into an internal job queue. This queue handles all incoming requests asynchronously, ensuring that the user interface remains responsive and that multiple test generation jobs can be managed in parallel without overwhelming the server. Developers don't need to sit and wait on the upload screen. Instead, they can check in on the progress of their job through REST API endpoints or, for more dynamic tracking, via a WebSocket connection that streams live updates.
3️⃣ When a job reaches the front of the queue, the real work begins. The server takes the uploaded code and passes it to a locally hosted instance of the AI model. The model, optimized for understanding and generating code, analyzes the structure of the input. In the case of larger ZIP archives, it scans through multiple files, resolves import or namespace dependencies, and maps out how different parts of the project connect. Based on this understanding, the model generates a corresponding set of test files, designed to mirror the organization of the original source code.
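Talking to a locally hosted model through Ollama is a plain HTTP call against its default `/api/generate` endpoint. The sketch below shows the shape of that call; the exact prompt wording and the `deepseek-coder:6.7b` model tag are assumptions for illustration.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"   # Ollama's default local endpoint

def build_request(source_code: str, model: str = "deepseek-coder:6.7b") -> dict:
    """Wrap the source code in an instruction prompt and build the Ollama payload."""
    prompt = (
        "Write clean, runnable unit tests for the following code. "
        "Return only the test file contents.\n\n" + source_code
    )
    return {"model": model, "prompt": prompt, "stream": False}

def generate_tests(source_code: str) -> str:
    """Send the code to the locally hosted model and return the generated tests."""
    payload = json.dumps(build_request(source_code)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:        # the request never leaves localhost
        return json.loads(resp.read())["response"]   # Ollama puts the completion here
```

Since the endpoint is `localhost`, no part of this round trip touches the public internet, which is the whole point of the on-premise setup.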

Because the system is hosted entirely on a local network and does not use any external APIs or cloud-based models, all of this processing happens in a secure and private environment. Code never leaves the internal infrastructure, making it ideal for proprietary or sensitive projects.
4️⃣ Once the tests are generated, the results are packaged and made available to download. For single-file uploads, the user receives a matching test file. For full project uploads, a new ZIP archive is created containing test files for each of the relevant source files in the project. The developer is then notified — either through the UI or programmatically — that the job has been completed and their download is ready.
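Packaging the results for a full-project upload is a matter of writing each generated test next to its source path inside a fresh archive. A minimal sketch, assuming a `*_test` naming convention (the real service's naming rules aren't stated in the post):

```python
import io
import zipfile
from pathlib import PurePosixPath

def package_results(generated: dict) -> bytes:
    """Bundle generated test files into a ZIP that mirrors the source layout.

    `generated` maps an original source path (e.g. "lib/src/parser.dart")
    to the generated test code for that file.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for src_path, test_code in generated.items():
            p = PurePosixPath(src_path)
            # "lib/src/parser.dart" -> "lib/src/parser_test.dart" (assumed convention)
            test_path = str(p.with_name(p.stem + "_test" + p.suffix))
            zf.writestr(test_path, test_code)
    return buf.getvalue()
```

Mirroring the directory layout means the developer can unzip the archive straight into their project and have every test land beside the code it covers.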

This entire flow transforms what is usually a manual, time-consuming task into a seamless experience. With just a few clicks, developers can quickly generate meaningful, structured test files for their applications — all without sacrificing control over their source code.
🏗️ Test Generator High-Level Architecture
While the interface is designed to feel simple and seamless, there’s quite a bit happening in the background to make everything run smoothly. Let’s take a look behind the scenes to explore how this system actually works — from the way jobs are scheduled to how AI-driven test generation is triggered and delivered.
1. System-Level Architecture
At its core, the project is made up of three main layers: a browser-based user interface, a FastAPI-powered server backend, and a local large language model runtime. The user interacts with the system entirely through a Flutter web app, which runs directly in the browser and communicates with the server via HTTP and WebSocket connections.
The backend — built with Python’s FastAPI — acts as the central coordinator. It receives uploaded files, manages job scheduling, and handles communication with the language model. Instead of sending code to external cloud services, everything runs on a local network using Ollama, which hosts the AI model. This setup was chosen deliberately to maintain maximum privacy and prevent any chance of source code leaving the internal environment.
This layered architecture keeps responsibilities clearly separated: the UI focuses on usability, the backend handles orchestration, and the AI model takes care of understanding code and generating useful tests.

2. Job Queue & Worker Pool
To ensure the system can handle multiple test generation requests without slowing down or crashing, an internal job queue manages everything behind the scenes. When a developer uploads a file or archive, that request is placed into a queue. Worker processes monitor the queue and pick up tasks as they become available.
This design makes the system highly scalable and responsive. Developers don’t have to wait for previous jobs to finish before submitting their own. At the same time, the server stays stable under load by only processing as many jobs at once as the system can safely handle. It also opens the door for future scaling — for example, adding more workers if needed without rewriting core logic.
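The queue-plus-workers pattern described above can be sketched with `asyncio`: a fixed number of workers drain a shared queue, so concurrency stays bounded no matter how many jobs arrive. The `process` stub stands in for the real generation step and is purely illustrative.

```python
import asyncio

async def process(payload: str) -> str:
    """Placeholder for the real generation step (calling the local model)."""
    await asyncio.sleep(0)
    return f"tests for {payload}"

async def worker(queue: asyncio.Queue, results: dict) -> None:
    """Pull jobs off the shared queue until cancelled."""
    while True:
        job_id, payload = await queue.get()
        try:
            results[job_id] = await process(payload)
        finally:
            queue.task_done()

async def run_jobs(jobs: list, n_workers: int = 2) -> dict:
    """Process all jobs with a fixed-size worker pool so server load stays bounded."""
    queue: asyncio.Queue = asyncio.Queue()
    results: dict = {}
    for job in jobs:
        queue.put_nowait(job)
    workers = [asyncio.create_task(worker(queue, results))
               for _ in range(n_workers)]
    await queue.join()      # block until every queued job is marked done
    for w in workers:
        w.cancel()          # idle workers would otherwise wait forever
    return results
```

Scaling up later is just a matter of raising `n_workers` (or running workers on more machines), with no change to the submission logic.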
3. LLM Integration & Prompt Strategy
The most powerful part of the tool is its ability to generate intelligent, context-aware tests using an AI model. Each job is routed to the Deepseek-Coder 6.7B model running locally through Ollama. Before sending the source code to the model, the server wraps it in a carefully crafted prompt. This prompt guides the model’s behaviour, encouraging it to focus specifically on writing clean, runnable unit tests that match the structure and purpose of the original code.
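A prompt of that kind might look like the sketch below. The internal prompt isn't published, so the wording, the framework choices (`package:test` for Dart, xUnit for C#), and the file-extension heuristic are all assumptions for illustration.

```python
# Illustrative prompt template; the production prompt is internal and not shown here.
PROMPT_TEMPLATE = """You are a senior {language} developer.
Write a complete, runnable unit test file for the code below.
Rules:
- Use {framework} conventions.
- Cover every public class, function, and method signature.
- Return only code, no explanations.

Source file ({filename}):
{source}
"""

def build_prompt(source: str, filename: str) -> str:
    """Pick language/framework from the file extension and fill the template."""
    lang, framework = (("Dart", "package:test") if filename.endswith(".dart")
                       else ("C#", "xUnit"))
    return PROMPT_TEMPLATE.format(language=lang, framework=framework,
                                  filename=filename, source=source)
```

Constraining the model to "return only code" keeps the output directly usable as a file, without prose that would have to be stripped afterwards.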
The model analyzes classes, functions, and method signatures, and then generates test cases that follow standard patterns in Dart or C#. The result is a test file (or set of files) that a developer can immediately use as a foundation — either to run as-is or further customize as needed.

4. Front-End Experience
From the developer's point of view, using the tool is fast and intuitive. The Flutter web interface runs entirely in the browser and requires no installation. Users simply drop files or ZIP archives into the upload area and hit "Generate".
After that, they can watch job progress live through a dynamic progress indicator, powered by WebSocket updates from the server. Once the test generation is complete, a download link appears, offering either a single test file or a full ZIP archive of results — depending on what was originally uploaded. Everything is handled in a clean, minimal UI designed to keep the focus on getting results quickly and securely.
Together, these parts form a cohesive system that balances usability, performance, and security. But more than just a sum of its components, this project reflects a mindset: one that embraces smart automation, thoughtful design, and real-world developer needs.
🚀 Final Thoughts
This project represents more than just a tool — it’s a glimpse into how AI can empower developers by taking care of repetitive, time-consuming tasks like test creation, all while respecting data privacy and fitting into real-world workflows. By combining a fast, intuitive front-end with a powerful on-premise backend and a capable large language model, we’ve built something practical, secure, and developer-friendly.

But this is only the beginning. Tools like this one are stepping stones toward smarter, more automated development pipelines — where AI becomes a reliable assistant, not just a novelty. With each iteration, we aim to refine the experience, expand language support, and improve test quality even further.
Let’s Build the Future Together
At Igniscor, we don’t just build apps — we craft complete mobile experiences. From embedded systems to sleek, high-performance mobile interfaces, we turn ideas into reality with creativity and care. Have a project in mind? Let’s make it happen — contact us today.