We have an internal framework that handles our logging, data access, crypto... you name it. I'd like to start comparing the performance of, say, the logging functionality against other mainstream systems, i.e. NLog, log4net, Serilog. Obviously I'd start with functionality that all the systems have, like logging to a file or the console.
Would BenchmarkDotNet be applicable in a situation like this? Most of the examples and papers I've read describe usage in very tight loops that mostly exercise memory and CPU, not disk I/O.
As an exercise, I wrote an xUnit test to benchmark writing to the console using the Baseline functionality, but the tests never completed and I ended up killing the process, which led me to this post on SO.
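For reference, the kind of benchmark being described might look like the sketch below (the class and message are my assumptions; the original test isn't shown). One thing worth noting: BenchmarkDotNet is designed to be run from a console app built in Release mode via BenchmarkRunner, not from inside an xUnit test runner, which may be why the tests never completed:

```csharp
// Sketch only: assumes the BenchmarkDotNet NuGet package is installed.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ConsoleLoggingBenchmarks
{
    // Baseline = true: other benchmarks are reported relative to this one.
    [Benchmark(Baseline = true)]
    public void DirectWriteLine() => System.Console.WriteLine("log message");

    [Benchmark]
    public void WriteViaOut() => System.Console.Out.WriteLine("log message");
}

public static class Program
{
    // Run from a console project in Release mode, not a test runner.
    public static void Main() => BenchmarkRunner.Run<ConsoleLoggingBenchmarks>();
}
```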
If I'm using BenchmarkDotNet the wrong way, is there another testing suite that's more in line with what I'm trying to accomplish?
Thank you, Stephen
BenchmarkDotNet is most appropriate for microbenchmarking of CPU-bound code. There are so many factors that can affect IO-bound code that I don't view microbenchmarking as a great approach.
Instead, I would suggest if possible that you integrate each framework into your app and measure the performance under as realistic conditions as possible... including a "catastrophic failure" condition or something where the logging is likely to take a pummeling. Also test with a "null logger" which does nothing (and does nothing as early as possible) so that you can determine a sort of baseline.
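A minimal sketch of the "null logger" idea (the interface name and shape here are assumptions, not from any particular framework): route the app's logging through a small abstraction, plug each framework in behind it, and include a do-nothing implementation as the baseline.

```csharp
using System;
using System.Diagnostics;

// Hypothetical minimal abstraction; adapt to whatever your framework exposes.
public interface IAppLogger
{
    void Log(string message);
}

// Baseline: discards the message as early as possible.
public sealed class NullLogger : IAppLogger
{
    public void Log(string message) { }
}

public static class LoadTest
{
    // Measure wall-clock time for a burst of messages, e.g. simulating the
    // "catastrophic failure" flood, with whichever logger is plugged in.
    public static TimeSpan TimeBurst(IAppLogger logger, int messages)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < messages; i++)
        {
            logger.Log($"event {i}: something went wrong");
        }
        sw.Stop();
        return sw.Elapsed;
    }
}
```

Running TimeBurst with the NullLogger gives you a floor; the gap between a real framework's time and that floor is roughly the logging cost under that particular load.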
This will only tell you how these logging frameworks behave for your specific application - but that's the most important thing for you to find out, I'd suggest.
See more on this question at Stack Overflow