I used to work at a large software company, and unsurprisingly we went through rigorous integration and unit testing for every change we made. We used standard tools for the languages we were writing in, nothing special.
However, there were a few things that were difficult or impossible to do in CI, especially anything involving hardware, so we also had “manual tests”: do X, plug in this USB device, and ensure that Y happens. Everyone does manual tests sometimes, but what was new to me was that my team had dedicated tools specifically for tracking checklists of manual tests. Sometimes these were done by us software people, other times by full-time QA engineers. The process was systematic and rigorous, so that a major release was not considered complete until it passed both the automated and manual test suites.
There’s a lot of advice out there on how to write automated tests for games, but I’m curious what people have to say about the manual side of testing. Up until now, my manual testing has been pretty haphazard: I just make sure the change I just made has the right effect, and rarely do anything systematic. I’m starting to experiment with rigorous checklists like the ones we had at work, giving me a list of actions to run through on a regular basis to make sure I haven’t broken anything. (I’m not talking about playtesting here, more watching for bugs and regressions.) Any thoughts or experiences with this?
I feel like you don’t really hit any sort of stable testing pattern, automated or otherwise, until you’re at an industry level, honestly. I think it’s just one of those things where, until you have a studio where it’s someone’s job to lead a QA team, there’s no way to meaningfully optimize it, or at least no major reason to (no manager breathing down your neck, etc.).
There are definitely people out there who build unit tests, but it’s so conditional on the type of game. I’ve really only seen it meaningfully amount to anything in puzzle games.
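For what it’s worth, I think the reason puzzle games fit is that the core rules tend to be pure functions of the board state, which is exactly what unit tests are good at. A minimal sketch of what I mean, in Python; the `find_matches` helper and the tile encoding are invented for illustration, not from any real project:

```python
import unittest

def find_matches(row):
    """Return index ranges of runs of 3+ identical tiles in a row.

    Tiles are single characters; None marks an empty cell.
    (Hypothetical helper, just to show the shape of a testable rule.)
    """
    matches = []
    i = 0
    while i < len(row):
        j = i
        while j < len(row) and row[j] is not None and row[j] == row[i]:
            j += 1
        if j - i >= 3:
            matches.append((i, j))
        i = max(j, i + 1)
    return matches

class TestMatching(unittest.TestCase):
    def test_triple_is_matched(self):
        self.assertEqual(find_matches(["R", "R", "R", "G"]), [(0, 3)])

    def test_pair_is_not_matched(self):
        self.assertEqual(find_matches(["R", "R", "G", "G"]), [])

if __name__ == "__main__":
    unittest.main()
```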
I think you may be onto something with treating checklists as a physical task, though. This has got me thinking about a system for games that are hard to automate, like platformers or multiplayer games: set up ‘hooks’ for events you know need to happen during gameplay, then when you’re testing the game yourself or asking others to, only count the runs that triggered all or most of those hooks in your testing/analysis.
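Roughly what I have in mind, sketched in Python; `TestHooks` and the hook names are made up, not from any engine or library:

```python
class TestHooks:
    """Records which named gameplay events fired during one test run."""

    def __init__(self, expected):
        self.expected = set(expected)
        self.fired = set()

    def fire(self, name):
        # Call this from gameplay code at the moments you care about,
        # e.g. hooks.fire("double_jump") inside the jump handler.
        self.fired.add(name)

    def coverage(self):
        """Fraction of expected hooks that actually fired this run."""
        return len(self.fired & self.expected) / len(self.expected)

    def is_valid_run(self, threshold=1.0):
        # Only count runs that exercised enough of the checklist.
        return self.coverage() >= threshold

# Usage: one instance per playtest run.
hooks = TestHooks(["double_jump", "wall_slide", "checkpoint", "death_respawn"])
hooks.fire("double_jump")
hooks.fire("checkpoint")
print(hooks.coverage())          # 0.5
print(hooks.is_valid_run(0.75))  # False: this run didn't hit enough hooks
```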
The endgame “industry” solution to something like that would be to tie those hooks to some sort of web API. Then you’d have a bit of telemetry on which users are completing which tasks, in what order, how fast, etc.
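On the client side that could be as little as this; the endpoint URL and the payload fields are placeholders, swap in whatever telemetry backend you actually run:

```python
import json
import time
import urllib.request

# Hypothetical endpoint -- point this at your own telemetry service.
TELEMETRY_URL = "https://example.com/api/telemetry"

def report_hook(player_id, hook_name):
    """POST one hook event so you can later analyze order/timing per player."""
    payload = json.dumps({
        "player": player_id,
        "hook": hook_name,
        "timestamp": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        TELEMETRY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass  # Never let telemetry failures break the game.
```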
Yeah, I haven’t really found much use for automated testing in games yet. I’ve maybe used unit tests once in a game project, when the mechanics were complex enough that it was more or less a custom engine. In general a lot of conventional software eng wisdom applies poorly to small game projects, and programmer-brain is something I’m trying to suppress as I work on my current game. Still, some things I learned in the industry have proven adaptable for game projects, particularly agile methods (I get a lot out of a basic kanban board, and it becomes especially important with multiple collaborators).
The approach I’m trying out right now for manual testing is: 1. every few commits, a short test suite to make sure I haven’t completely broken the thing (biased towards the critical-path features that tend to break the most); 2. a more thorough test suite that covers most aspects of the game, even if shallowly. If I try to keep a test suite in my head I run the risk of forgetting something, and having everything written down in a simple system takes a lot of the stress out as I progress into early playtesting.
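For me the “simple system” doesn’t need to be more than a script that walks through the checklist and logs the results. A sketch of that idea, assuming interactive prompts and a plain-text log; the file name and checklist items are just examples:

```python
import datetime
import json

# Tier 1: quick smoke checks every few commits. (Tier 2 would be a
# longer list in the same format, run less often.)
SMOKE = [
    "New game starts and loads the first level",
    "Player can move, jump, and take damage",
    "Save/load round-trips without losing progress",
]

def run_checklist(name, items, log_path="manual_test_log.jsonl"):
    """Prompt for pass/fail on each item and append the results to a log."""
    results = {}
    for item in items:
        answer = input(f"{item} [y/n] ").strip().lower()
        results[item] = (answer == "y")
    record = {
        "suite": name,
        "when": datetime.datetime.now().isoformat(),
        "results": results,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    failed = [item for item, ok in results.items() if not ok]
    print(f"{len(items) - len(failed)}/{len(items)} passed")
    return failed

# run_checklist("smoke", SMOKE)
```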