Teague
C Question

Is there a difference in source code for release and debug compiled program? [C/C++]

I've been getting more into C++ programming as of late and keep running into the whole 'debug vs release' distinction between compiled versions. Now I feel like I've got a pretty decent understanding of some of the differences between release and debug versions of compiled code. For the debug version, the compiler doesn't attempt to optimize the code, so that you can run a debugger and step through your program line by line. Essentially, the compiled code closely resembles your source code in how it is executed. When compiling in release mode, the compiler attempts to optimize the program so that it has the same functionality but is more efficient.

However, I'm curious whether there are instances where the source code between release and debug versions can differ. That is, when we refer to debug vs release, are we always just talking about the compiled code, or can there also be differences in the source code?

This question arises because I work in a proprietary programming language in which a formal, step-by-step debugger doesn't exist, yet serial monitors do. Thus a lot of our 'debug' vs 'release' code is implemented via #defines that look something like this:

    #ifdef _DEBUG
        check that error didn't occur...
        SerialPrint("Error occurred")
    #endif


So to summarize my question: depending on your IDE, are there often settings for implementing what I've illustrated? That is, when you compile a debug version, can that be tied to changes in the source code? Or does release vs debug typically just refer to the compiled binaries?

Thank you!

jww
Answer

Is there a difference in source code for release and debug compiled program?

It depends on the source code, and the options used to compile the library or program. Below are a few differences I am aware of.

Asserts

The simplest "debugging and diagnostics" aid is the assert. Asserts are in effect when NDEBUG is not defined. They create self-debugging code, and they snap when an unexpected condition is encountered. The trick is you have to assert everything: everywhere you validate parameters and state, you should see an assert; everywhere there's an assert, you should see an if to validate parameters and state.
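A minimal sketch of that assert-plus-validate pairing; the function buffer_fill is purely illustrative:

    #include <assert.h>
    #include <stddef.h>

    size_t buffer_fill(unsigned char* buf, size_t len)
    {
        assert(buf != NULL);   /* snaps under the debugger in debug builds */
        assert(len != 0);

        if (buf == NULL || len == 0)   /* still validated in release builds */
            return 0;

        /* ... fill the buffer ... */
        return len;
    }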

I laugh when I see a code base without asserts. I kind of say to myself, the devs have too much time on their hands if they are wasting it under the debugger. I often ask why they don't use asserts, and they usually answer with the following...

Posix assert sucks because it calls abort. If you are debugging a program, then you usually want to step through the code to see how it handles the negative condition that caused the assert to fire. Terminating the program runs afoul of the "debugging and diagnostics" purpose. It has got to be one of the dumbest decisions in the history of C/C++. No one seems to recall the reasoning for the abort (a few years ago I tried to track down the pedigree on various C/C++ standards lists).

Usually you replace the useless Posix assert with something more useful, like an assert that raises a SIGTRAP on Linux or calls DebugBreak on Windows. See, for example, a sample trap.h. You replace the Posix assert with your assert to ensure the libraries you are using get the updated behavior (if they have already been compiled, then it's too late).
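A sketch of such a replacement; the macro name ASSERT and the exact platform guards are assumptions modeled on the trap.h idea:

    /* Break into the debugger instead of aborting the program. */
    #if defined(NDEBUG)
    #  define ASSERT(expr) ((void)0)
    #elif defined(_WIN32)
    #  include <windows.h>
    #  define ASSERT(expr) do { if (!(expr)) DebugBreak(); } while (0)
    #elif defined(__linux__)
    #  include <signal.h>
    #  define ASSERT(expr) do { if (!(expr)) raise(SIGTRAP); } while (0)
    #else
    #  include <assert.h>
    #  define ASSERT(expr) assert(expr)
    #endif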

I also laugh when projects like ISC's BIND (the DNS server that powers the Internet) DoS themselves with their asserts (they have their own assert; they don't use Posix assert). There are a number of CVEs against BIND for its self-inflicted DoS. DoS'ing yourself is right up there with "let's abort a program being debugged".

For completeness, the Microsoft Foundation Classes (MFC) used to have something like 16,000 or 20,000 asserts to help catch mistakes early. That was back in the late 1990s or mid 2000s. I don't know what the state is today.

APIs

Some APIs exist that are purposefully built for "debugging and diagnostics". Other APIs can be used for it even though they are not necessarily safe to use in production.

An example of the former (purposefully built) is a Logging or DebugPrint API. Apple successfully used one to egress users' FileVault passwords and keys. Also see a search for "os x filevault debug print".

An example of the latter (not safe for production) is Windows' IsBadReadPtr and IsBadWritePtr. They're not safe for production because they suffer from race conditions. But they're usually fine for development because you want the extra scrutiny.
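A sketch of that development-only use; the function inspect is hypothetical, and the race is that another thread can change page protections between the check and the use:

    #include <windows.h>

    void inspect(const void* p, UINT_PTR n)
    {
    #if defined(_DEBUG)
        /* extra scrutiny during development only; racy, so never ship it */
        if (IsBadReadPtr(p, n))
            DebugBreak();
    #endif
        /* ... use p ... */
    }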

When we perform security reviews and audits, we often ask/recommend removing all non-essential logging, and ensuring the logging level cannot be changed at runtime. When an app goes to production, the time for debugging is over. There's no reason to log everything.

Libraries

Sometimes there are special libraries to use to help with debugging and diagnostics. Linux's Electric Fence and Microsoft's debug CRT library come to mind. Both are memory checkers with APIs. In this case, your link command will be different, too (for example, Electric Fence links with -lefence).
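A minimal sketch of the Microsoft debug CRT's leak checking, a Debug-build feature (the deliberate leak is just for illustration):

    #define _CRTDBG_MAP_ALLOC   /* report file/line for each allocation */
    #include <stdlib.h>
    #include <crtdbg.h>

    int main(void)
    {
        void* leak = malloc(16);   /* deliberately leaked */
        (void)leak;
        _CrtDumpMemoryLeaks();     /* reports the leak in Debug builds */
        return 0;
    }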

Options

Sometimes you need additional options or defines to help with debugging and diagnostics. GCC's libstdc++ and -D_GLIBCXX_DEBUG come to mind. Another one is concept checking, which used to be enabled by the define -D_GLIBCXX_CONCEPT_CHECKS. It's Boost code and it's broken, so you should not use it. In these cases, your compile flags will be different.
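A sketch of the kind of bug the libstdc++ debug mode catches at runtime (compile with g++ -D_GLIBCXX_DEBUG):

    #include <vector>

    int main()
    {
        std::vector<int> v(3);
        return v[3];   /* out of range: debug mode aborts with a diagnostic;
                          a normal build is silent undefined behavior */
    }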

Another one I often laugh at is a Release build that lacks the NDEBUG define. That includes Debian and Ubuntu as a matter of policy. The NSA, GCHQ and other 3-letter agencies thank them for taking the sensitive information (like server keys), stripping the encryption (writing it to a file unprotected), and then egressing the sensitive information (sending it to Windows Error Reporting, Apport Error Reporting, etc).
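A sketch of what NDEBUG controls; LOG_SECRET is a hypothetical debug-only logger of the sort that keeps leaking when NDEBUG is missing:

    #include <assert.h>   /* assert() compiles away only when NDEBUG is set */
    #ifndef NDEBUG
    /* Without -DNDEBUG, a "Release" build still takes this branch,
       keeping its asserts and any debug-only logging live. */
    #  include <stdio.h>
    #  define LOG_SECRET(s) fprintf(stderr, "key = %s\n", (s))
    #else
    #  define LOG_SECRET(s) ((void)0)
    #endif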

Initialization

Some development environments perform initialization with special bit patterns when a value is not explicitly initialized. It's really just a feature of the tools, like the compiler or linker. Microsoft's tools come to mind; see When and why will an OS initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete? GCC had a feature request for it, but I don't think anything was ever done with it.
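A sketch of observing one of those fill patterns under the Microsoft debug heap (reading uninitialized memory is undefined behavior; this is for illustration in a Debug build only):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char* p = malloc(4);
        if (p != NULL)
        {
            /* the debug CRT fills fresh heap memory with 0xCD */
            printf("%02X\n", p[0]);   /* prints CD in a Debug build */
            free(p);
        }
        return 0;
    }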

I often laugh when I disassemble a production DLL and see the Microsoft debug bit patterns, because I know they are shipping a Debug DLL. I laugh because it often indicates the Release DLL has a memory error that the dev team was not able to clear. Adobe is notorious for doing this (not surprisingly, Adobe supplies some of the most insecure software on the planet, even though they don't supply an operating system like Apple or Microsoft).


    #ifdef _DEBUG
        check that error didn't occur...
        SerialPrint("Error occurred")
    #endif

It makes me want to cry, but you still have to do this in 2016. GDB is (was?) broken under Aarch64, X32 and S/390, so you have to use printfs to debug your code.
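A portable sketch of that pattern in C; the macro name DEBUG_PRINT is an assumption:

    #include <stdio.h>

    #ifdef _DEBUG
    #  define DEBUG_PRINT(...) fprintf(stderr, __VA_ARGS__)
    #else
    #  define DEBUG_PRINT(...) ((void)0)
    #endif

    /* usage: DEBUG_PRINT("state = %d\n", state); compiles away in release */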