
The Archive Base
Editorial Team
Asked: May 15, 2026


Until now, I’ve used an improvised unit testing procedure – basically a whole load of unit test programs run automatically by a batch file. Although a lot of these explicitly check their results, a lot more cheat – they dump out results to text files which are versioned. Any change in the test results gets flagged by subversion and I can easily identify what the change was. Many of the tests output dot files or some other form that allows me to get a visual representation of the output.

The trouble is that I’m switching to using cmake. Going with the cmake flow means using out-of-source builds, which means that convenience of dumping results out in a shared source/build folder and versioning them along with the source doesn’t really work.

As a replacement, what I’d like to do is to tell the unit test tool where to find files of expected results (in the source tree) and get it to do the comparison. On failure, it should provide the actual results and diff listings.

Is this possible, or should I take a completely different approach?

Obviously, I could ignore ctest and just adapt what I’ve always done to out-of-source builds. I could version my folder-where-all-the-builds-live, for instance (with liberal use of ‘ignore’ of course). Is that sane? Probably not, as each build would end up with a separate copy of the expected results.

Also, any advice on the recommended way to do unit testing with cmake/ctest would be gratefully received. I wasted a fair bit of time with cmake, not because it’s bad, but because I didn’t understand how best to work with it.

EDIT

In the end, I decided to keep the cmake/ctest side of the unit testing as simple as possible. To test actual against expected results, I found a home for the following function in my library…

#include <ostream>
#include <sstream>
#include <string>

bool Check_Results (std::ostream              &p_Stream  ,
                    const char                *p_Title   ,
                    const char               **p_Expected,
                    const std::ostringstream  &p_Actual   )
{
  std::ostringstream l_Expected_Stream;

  while (*p_Expected != 0)
  {
    l_Expected_Stream << (*p_Expected) << std::endl;
    p_Expected++;
  }

  std::string l_Expected (l_Expected_Stream.str ());
  std::string l_Actual   (p_Actual.str ());

  bool l_Pass = (l_Actual == l_Expected);

  p_Stream << "Test: " << p_Title << " : ";

  if (l_Pass)
  {
    p_Stream << "Pass" << std::endl;
  }
  else
  {
    p_Stream << "*** FAIL ***" << std::endl;
    p_Stream << "===============================================================================" << std::endl;
    p_Stream << "Expected Results For: " << p_Title << std::endl;
    p_Stream << "-------------------------------------------------------------------------------" << std::endl;
    p_Stream << l_Expected;
    p_Stream << "===============================================================================" << std::endl;
    p_Stream << "Actual Results For: " << p_Title << std::endl;
    p_Stream << "-------------------------------------------------------------------------------" << std::endl;
    p_Stream << l_Actual;
    p_Stream << "===============================================================================" << std::endl;
  }

  return l_Pass;
}

A typical unit test now looks something like…

bool Test0001 ()
{
  std::ostringstream l_Actual;

  const char* l_Expected [] =
  {
    "Some",
    "Expected",
    "Results",
    0
  };

  l_Actual << "Some" << std::endl
           << "Actual" << std::endl
           << "Results" << std::endl;

  return Check_Results (std::cout, "0001 - not a sane test", l_Expected, l_Actual);
}

Where I need a re-usable data-dumping function, it takes a parameter of type std::ostream&, so it can dump to an actual-results stream.
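As an illustration (the names here are hypothetical, not from the library above), such a dumper writes to any std::ostream&, so a test can point it at a std::ostringstream to capture the actual results:

```cpp
#include <ostream>
#include <sstream>
#include <string>

// Hypothetical reusable dumper: writes one record per line to whatever
// stream it is given (a file, std::cout, or a capture buffer in a test).
void Dump_Point (std::ostream &p_Stream, int p_X, int p_Y)
{
  p_Stream << "(" << p_X << ", " << p_Y << ")" << std::endl;
}

// A test captures the dumper's output in a string stream and compares it
// against the expected text, in the same style as Check_Results above.
bool Test_Dump_Point ()
{
  std::ostringstream l_Actual;

  Dump_Point (l_Actual, 1, 2);
  Dump_Point (l_Actual, 3, 4);

  return l_Actual.str () == "(1, 2)\n(3, 4)\n";
}
```

The same function then serves both production dumping and test capture, with no test-specific code in the dumper itself.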


1 Answer

Editorial Team
Answered: May 15, 2026 at 11:28 pm

    I’d use CMake’s standalone scripting mode to run the tests and compare the outputs. Normally for a unit test program, you would write add_test(testname testexecutable), but you may run any command as a test.

    If you write a script “runtest.cmake” and execute your unit test program via this, then the runtest.cmake script can do anything it likes – including using the cmake -E compare_files utility. You want something like the following in your CMakeLists.txt file:

    enable_testing()
    add_executable(testprog main.c)
    add_test(NAME runtestprog
        COMMAND ${CMAKE_COMMAND}
        -DTEST_PROG=$<TARGET_FILE:testprog>
        -DSOURCEDIR=${CMAKE_CURRENT_SOURCE_DIR}
        -P ${CMAKE_CURRENT_SOURCE_DIR}/runtest.cmake)
    

    This runs a script (cmake -P runtest.cmake) and defines two variables: TEST_PROG, set to the path of the test executable, and SOURCEDIR, set to the current source directory. You need the first to know which program to run, and the second to know where to find the expected test result files. The contents of runtest.cmake would be:

    execute_process(COMMAND ${TEST_PROG}
                    RESULT_VARIABLE HAD_ERROR)
    if(HAD_ERROR)
        message(FATAL_ERROR "Test failed")
    endif()
    
    execute_process(COMMAND ${CMAKE_COMMAND} -E compare_files
        output.txt ${SOURCEDIR}/expected.txt
        RESULT_VARIABLE DIFFERENT)
    if(DIFFERENT)
        message(FATAL_ERROR "Test failed - files differ")
    endif()
    

    The first execute_process runs the test program, which will write out “output.txt”. If that works, then the next execute_process effectively runs cmake -E compare_files output.txt expected.txt. The file “expected.txt” is the known good result in your source tree. If there are differences, it errors out so you can see the failed test.

    What this doesn’t do is print out the differences; CMake doesn’t have a full “diff” implementation hidden away within it. At the moment you use Subversion to see what lines have changed, so an obvious solution is to change the last part to:

    if(DIFFERENT)
        configure_file(output.txt ${SOURCEDIR}/expected.txt COPYONLY)
        execute_process(COMMAND svn diff ${SOURCEDIR}/expected.txt)
        message(FATAL_ERROR "Test failed - files differ")
    endif()
    

    This overwrites the source tree with the build output on failure then runs svn diff on it. The problem is that you shouldn’t really go changing the source tree in this way. When you run the test a second time, it passes! A better way is to install some visual diff tool and run that on your output and expected file.
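A sketch of that last idea, assuming a "diff" executable is available on the PATH (GNU diff here; this is not part of the original runtest.cmake): leave the source tree alone and diff the two files directly:

```cmake
# Sketch: report differences without modifying the source tree.
# Assumes a "diff" tool is on the PATH; falls back to a plain failure if not.
find_program(DIFF_TOOL diff)

if(DIFFERENT)
    if(DIFF_TOOL)
        # Unified diff of expected (source tree) against actual (build tree).
        execute_process(COMMAND ${DIFF_TOOL} -u
            ${SOURCEDIR}/expected.txt output.txt)
    endif()
    message(FATAL_ERROR "Test failed - files differ")
endif()
```

Because the expected file is never overwritten, rerunning the test still fails until the code (or the expected file) is deliberately updated.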



© 2021 The Archive Base. All Rights Reserved