I was running the following code, compiled together as:

gcc A.c B.c -o combined

with the declaration:

extern int a, b;
So, I am answering my own question after a long time. Although the statement that

int b; is a declaration and
int b = 2; is the definition

is correct, the reason everyone gives for it is not clear. Had there not been an int b = 2;, then int b; would itself have served as a definition, so what is the difference?
The difference lies in the way the linker handles multiple symbol definitions. There is a concept of weak and strong symbols.
The assembler encodes this information implicitly in the symbol table of the relocatable object file. Functions and initialized global variables get strong symbols. Uninitialized global variables get weak symbols.
int a = 1 is a strong symbol while int b; is a weak symbol; similarly, in the other file, int b = 2 is a strong symbol while int a is weak.
Given this notion of strong and weak symbols, Unix linkers use the following rules for dealing with multiply defined symbols:

Rule 1: Multiple strong symbols with the same name are not allowed.
Rule 2: Given a strong symbol and multiple weak symbols with the same name, choose the strong symbol.
Rule 3: Given multiple weak symbols with the same name, choose any of them.
So, now we can argue about what is happening in the above case. Between int b = 2 and int b, the former is a strong symbol while the latter is weak, so by rule 2, b is defined with the value 2. Between int a = 1 and int a, a is defined as 1 by the same reasoning. Hence, the output.