In my last posts, I talked a lot about UI tests. But the real meat (and the real pain) of automation often lies with the API.
API tests need to be fast, stable, and cover 100% of your endpoints. "Simple," you say. "Just take the Swagger schema and run requests against it."

Oh, if only it were that simple.

When I started adding API test automation to Debuggo, I realized the whole process is a series of traps. Here is how I'm solving them.
It all starts simply. I implemented a feature:
You upload a Swagger schema (only Swagger for now).

Debuggo parses it and automatically creates dozens of test cases, as sketched below:

* [Positive] For every endpoint.
* [Negative] For every required field.
* [Negative] For every data type (field validation).
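To give a feel for the generation step, here's a minimal sketch of deriving negative cases from a Swagger 2.0 body schema. Everything here (`generate_negative_cases`, the case/mutation layout) is my illustration of the idea, not Debuggo's actual internals:

```python
def generate_negative_cases(spec: dict) -> list[dict]:
    """Derive negative test cases from a Swagger 2.0 spec.

    A simplified sketch: it only looks at body parameters and emits one
    case per required field and one per typed field.
    """
    cases = []
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            if not isinstance(operation, dict):
                continue  # skip path-level keys like "parameters"
            for param in operation.get("parameters", []):
                schema = param.get("schema", {})
                required = schema.get("required", [])
                for field, props in schema.get("properties", {}).items():
                    if field in required:
                        cases.append({
                            "title": f"[Negative] {method.upper()} {path}: missing required '{field}'",
                            "mutation": ("omit_field", field),
                        })
                    if "type" in props:
                        cases.append({
                            "title": f"[Negative] {method.upper()} {path}: wrong type for '{field}'",
                            "mutation": ("wrong_type", field, props["type"]),
                        })
    return cases
```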
This already saves hours of manual work "dreaming up" negative scenarios. After this, you can pick any generated test case (e.g., [Negative] Create User with invalid email) and ask Debuggo: "Generate the steps for this."
And that's where the first real problem begins. How does an AI know what a "bad email" is?
The Bad Solution: Hardcoding examples of what a bad email looks like into the AI. This is brittle and stupid.
The Debuggo Solution: Smart Placeholders.
When Debuggo generates steps for a negative test, it doesn't insert a value. It inserts a placeholder.
For example, for a POST /users with an invalid email, it will generate a step with this body:

```json
{"name": "test-user", "email": "%invalid_email_format%"}
```
Then, at the moment of execution, Debuggo itself (not the AI) expands this placeholder into real, generated data that is 100% invalid. The same goes for dropdowns, selects, etc.: the AI doesn't guess the selector, it inserts a placeholder, and Debuggo handles it.
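As a rough illustration of the expansion step (the resolver below and its generator registry are my sketch, not Debuggo's actual code):

```python
import random
import re
import string

# Hypothetical registry: placeholder name -> generator of matching data.
GENERATORS = {
    # No "@" at all, so the value is invalid by construction.
    "invalid_email_format": lambda: "user-" + "".join(
        random.choices(string.ascii_lowercase, k=6)
    ) + ".example.com",
}

PLACEHOLDER = re.compile(r"%([a-z_]+)%")

def resolve_placeholders(value):
    """Recursively replace %name% tokens with freshly generated data."""
    if isinstance(value, str):
        return PLACEHOLDER.sub(lambda m: GENERATORS[m.group(1)](), value)
    if isinstance(value, dict):
        return {k: resolve_placeholders(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_placeholders(v) for v in value]
    return value

body = {"name": "test-user", "email": "%invalid_email_format%"}
print(resolve_placeholders(body))
# e.g. {'name': 'test-user', 'email': 'user-kqzpma.example.com'}
```

The key design point: the AI only ever emits the template; the framework owns the data.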
So, we have our steps with placeholders. We run the test. And it fails.
The Scenario: The schema says POST /users returns 200 OK. The application actually returns 201 Created.

A traditional auto-test: Will just fail, and keep failing until someone updates the schema or the test by hand.

The Debuggo Solution: A Dialogue with the User.

Debuggo sees the conflict: "Expected 200 from the schema, but got 201 from the app."

It doesn't just fail. It pauses the test and asks you:
"Hey, the schema and the real response don't match. Do you want to accept 201 as the correct response for this test?"
You, the user, confirm. Debuggo fixes the test case. You just fixed a brittle test without writing a single line of code.
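Reduced to pseudo-real terms, the decision point might look like this (`run_step` is my simplification; the actual pause/confirm flow in Debuggo is presumably interactive UI, not stdin):

```python
import requests

def run_step(method: str, url: str, expected_status: int, json_body=None) -> str:
    """Execute one API step and reconcile schema expectations with reality."""
    response = requests.request(method, url, json=json_body)
    if response.status_code == expected_status:
        return "pass"
    # Schema/app conflict: pause and ask instead of failing outright.
    answer = input(
        f"Schema expects {expected_status}, app returned {response.status_code}. "
        f"Accept {response.status_code} as correct for this test? [y/N] "
    )
    if answer.strip().lower() == "y":
        # Here Debuggo would persist the new expected status to the test case.
        return "updated"
    return "fail"
```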
This is the coolest feature I've implemented.
The Scenario: The app returns a 400 Bad Request with the response body: {"error": "name cannot contain spaces"}.

A traditional auto-test: Will fail, and you have to manually analyze the logs to find the hidden rule.

The Debuggo Solution: Adaptation on the Fly.

Debuggo doesn't just see the 400 error. It reads the response body and sees the rule: "name cannot contain spaces."

It automatically changes the placeholder for this field. It creates a new one, %stringwithoutspaces%, and re-runs the test by itself with the new, correct value.

The AI is learning the real business rules of your app, even if they aren't documented in Swagger.
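The feedback loop, reduced to its skeleton (the single hardcoded pattern here is deliberately naive; I assume the real rule extraction is much broader):

```python
import re

def adapt_placeholder(field: str, error_body: dict) -> str | None:
    """Derive a stricter placeholder from a validation error message."""
    message = error_body.get("error", "")
    # Recognize one known message shape; a real implementation would
    # understand many error formats (or let an LLM interpret them).
    if re.fullmatch(rf"{re.escape(field)} cannot contain spaces", message):
        return "%stringwithoutspaces%"
    return None

print(adapt_placeholder("name", {"error": "name cannot contain spaces"}))
# -> %stringwithoutspaces%  (the test body is rewritten and re-run)
```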
What's the takeaway? I'm not just building a "Swagger parser." I'm building an assistant that:

* Generates hundreds of positive/negative test cases.
* Uses "Smart Placeholders" instead of hardcoded values.
* Identifies conflicts between the schema and reality and helps you fix them.
* Learns from the application's errors to make tests smarter.
This is a hellishly complex thing to implement, and I'm sure it's still raw.

That's why I need your help. If you have a "dirty," "old," or "incomplete" Swagger schema, you are my perfect beta tester.


