I've been in the QA space for a while now, and one thing that comes up repeatedly is people neglecting their QA data. Getting the application into the right state so that its functionality can be tested properly is always an afterthought.
So I'm curious, how do you all manage the seeded test data that you need for your QA tests?
Yes and no. You can't escape at least one "pet" environment, and unless your profit margins are so disgusting that you can rent dedicated environments for every team, "cattle" tends to be aspirational. Even when it is realized, the cattle setup inevitably gets abused (spinning up too many environments, leaving them running and chugging the meter, etc.). You're going to have a bad time.
It's not so much about the environment itself as about the data in it. If the data you use for your tests was all hand-crafted by someone who has since left the company, you have no way to scale it (e.g. if you want to run a bunch of tests in parallel and they might modify that data) or to change it to cover new functionality in your application, and your QA process will suffer immensely.
I wish people would think a bit more about the data they use for their tests, and about how to create it from scratch in a consistent, scalable way. That way they're always testing against a clean environment with a known setup, and they avoid a bunch of bad habits (like creating data on the fly as part of a test).
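To make "from scratch" concrete, here's a minimal sketch of the kind of thing I mean: a factory that stamps every record with a per-run namespace, so parallel runs each seed a clean slate without colliding. Everything here (make_customer, the seeds/ layout, RUN_ID) is invented for illustration, not any particular tool:

    import json
    import uuid
    from pathlib import Path

    # Unique namespace for this test run, so parallel runs never collide.
    RUN_ID = uuid.uuid4().hex[:8]

    def make_customer(name: str, **overrides) -> dict:
        """Build one customer record with sane defaults; tests override what they care about."""
        record = {
            "id": f"{RUN_ID}-{uuid.uuid4().hex[:12]}",
            "name": name,
            "email": f"{name.lower()}.{RUN_ID}@example.test",
            "status": "active",
        }
        record.update(overrides)
        return record

    def write_seed(records: list[dict], path: Path) -> None:
        """Write a seed file for the environment bootstrap to load before tests run."""
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(records, indent=2))

    if __name__ == "__main__":
        seed = [make_customer("Alice"), make_customer("Bob", status="suspended")]
        write_seed(seed, Path("seeds") / f"customers-{RUN_ID}.json")

The specifics don't matter; what matters is that the seed data is generated, versioned alongside the code, and unique per run, so no test ever trips over another run's leftovers.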
Been there, done that, hombre. Did the math once: at the last place I was at, if our test data had been transcribed line by line into composition notebooks (our seed files were basically JSON), we'd have been tossing 137 or so notebooks' worth through the system every test run.
Can you get devs to care about what valid data looks like? Nigh impossible. Hell, I had a hard enough time getting my own testers to author new test data in a reasonably spec-compliant way. A proper data lifecycle is the key, but it will almost always be the least popular part of your process, because most people just don't want to think about it.
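One way to make "spec compliant" machine-checkable rather than a matter of discipline: validate every seed file against a schema in CI, so bad data fails the build before it ever reaches a test. A minimal sketch, assuming JSON seed files like the ones above; the jsonschema package is real (pip install jsonschema), but the schema itself and the seeds/ layout are invented for illustration:

    import json
    import sys
    from pathlib import Path

    from jsonschema import ValidationError, validate

    # Hypothetical schema for the customer records sketched earlier; adapt to your spec.
    CUSTOMER_SCHEMA = {
        "type": "object",
        "required": ["id", "name", "email", "status"],
        "properties": {
            "id": {"type": "string"},
            "name": {"type": "string"},
            "email": {"type": "string"},
            "status": {"enum": ["active", "suspended", "closed"]},
        },
        "additionalProperties": False,
    }

    def check_seed_file(path: Path) -> list[str]:
        """Return human-readable errors for every bad record in one seed file."""
        errors = []
        for i, record in enumerate(json.loads(path.read_text())):
            try:
                validate(instance=record, schema=CUSTOMER_SCHEMA)
            except ValidationError as err:
                errors.append(f"{path} record {i}: {err.message}")
        return errors

    if __name__ == "__main__":
        problems = [e for p in sorted(Path("seeds").glob("*.json"))
                    for e in check_seed_file(p)]
        print("\n".join(problems) if problems else "all seed files pass")
        sys.exit(1 if problems else 0)

It won't make anyone care, but it does mean nobody has to.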
At some point in your process someone has to know what they are doing. There is no machine that knows correct data for you. It's part of what makes testing difficult. Everyone else can live in fantasy land, but you, as a tester, have to bring the hammer of reality crashing down. It won't make you many friends, but it is what it is. Your test data must reflect reality. Someone has to do the footwork and observe that reality. Only someone who has done so can then take the next step of authoring valid/representative test data.