I don’t think that this is specific to a language or framework, but I am using xUnit.net and C#.
I have a function that returns a random date in a certain range. I pass in a date, and the returned date is always between 1 and 40 years before the given date.
Now I wonder whether there is a good way to unit test this. The best approach I have found is to run the function in a loop, say 100 times, and assert that each of these 100 results is in the desired range; that is my current approach.
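Roughly, my current test looks like the sketch below (RandomDates.GetRandomDateBefore is just a placeholder name for my actual method):

```csharp
using System;
using Xunit;

public class RandomDateRangeTests
{
    [Fact]
    public void ReturnedDate_IsAlwaysWithinRange()
    {
        var reference = new DateTime(2020, 1, 1);

        for (int i = 0; i < 100; i++)
        {
            // Placeholder for the method under test.
            DateTime result = RandomDates.GetRandomDateBefore(reference);

            // xUnit's Assert.InRange works with any IComparable, including DateTime.
            Assert.InRange(result, reference.AddYears(-40), reference.AddYears(-1));
        }
    }
}
```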
I also realize that unless I am able to control my Random generator, there will not be a perfect solution (after all, the result IS random), but I wonder what approaches you take when you have to test functionality that returns a random result in a certain range?
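One option I am considering is letting the caller supply the Random instance (or a seed), so that a test can at least be reproducible. A rough sketch of what I mean (all names are made up):

```csharp
using System;

public class RandomDateGenerator
{
    private readonly Random _random;

    // Letting the caller pass in the Random instance means a test can use
    // a fixed seed and get a reproducible sequence of "random" dates.
    public RandomDateGenerator(Random random)
    {
        _random = random;
    }

    public DateTime GetRandomDateBefore(DateTime reference)
    {
        // Pick an offset of roughly 1 to 40 years (expressed in days) before the reference date.
        int daysBack = _random.Next(365, 365 * 40 + 1);
        return reference.AddDays(-daysBack);
    }
}

// In a test: var generator = new RandomDateGenerator(new Random(12345));
```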
In addition to testing that the function returns a date in the desired range, you want to ensure that the results are well distributed. The test you describe would also pass a function that always returned the same date, as long as that one date happened to fall inside the range!
So in addition to calling the function multiple times and checking that every result stays in the desired range, I would also try to assess the distribution, perhaps by putting the results into buckets and checking that each bucket ends up with roughly the same number of results (see the sketch below). You may need more than 100 calls to get stable counts, but this doesn't sound like an expensive function in terms of run time, so you can easily run it for a few thousand iterations.
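Here is a rough sketch of that bucket check, again treating RandomDates.GetRandomDateBefore as a placeholder for your method, with a deliberately generous tolerance so the test doesn't become flaky:

```csharp
using System;
using Xunit;

public class RandomDateDistributionTests
{
    [Fact]
    public void ReturnedDates_AreRoughlyUniformAcrossTheRange()
    {
        var reference = new DateTime(2020, 1, 1);
        var newest = reference.AddYears(-1);   // newest allowed result
        var oldest = reference.AddYears(-40);  // oldest allowed result
        double rangeDays = (newest - oldest).TotalDays;

        const int iterations = 10_000;
        const int bucketCount = 10;
        var buckets = new int[bucketCount];

        for (int i = 0; i < iterations; i++)
        {
            DateTime result = RandomDates.GetRandomDateBefore(reference);

            // Map the result onto one of the buckets covering the allowed span.
            double offsetDays = (result - oldest).TotalDays;
            int bucket = Math.Clamp((int)(offsetDays / rangeDays * bucketCount), 0, bucketCount - 1);
            buckets[bucket]++;
        }

        // With a uniform distribution each bucket should hold about iterations / bucketCount
        // results; the wide tolerance keeps the test from failing on ordinary random noise.
        int expectedPerBucket = iterations / bucketCount;
        foreach (int count in buckets)
        {
            Assert.InRange(count, expectedPerBucket / 2, expectedPerBucket * 2);
        }
    }
}
```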
I’ve run into problems with non-uniform ‘random’ functions before; they can be a real pain, and it’s worth testing for this early.