I am reading the book CS-APPe2. C has unsigned and signed integer types, and most architectures use two's-complement arithmetic to implement signed values; but after learning some assembly code, I found that very few instructions distinguish between unsigned and signed. So my questions are:
int s = a + b
unsigned s = a + b
ADD s d
It's quite easy. Operations like addition and subtraction don't need any adjustment for signed types in two's-complement arithmetic. Just perform a thought experiment and imagine an algorithm using only the following mathematical operations:
Addition is just taking items one by one from one heap and putting them into the other heap until the first one is empty. Subtraction is taking from both of them at once, until the subtracted one is empty. In modular arithmetic, you just treat the smallest value as the largest value plus one and it works. Two's complement is just a modular arithmetic where the smallest value is negative.
If you want to see a difference, I recommend you try operations that aren't safe with respect to overflow. One example is comparison (a < b).
Is it the compiler's responsibility to differentiate signed and unsigned? If yes, how does it do that?
By generating different assembly whenever needed.
Who implements the two's-complement arithmetic - the CPU or the compiler?
It's a difficult question. Two's complement is probably the most natural way to work with negative integers in a computer. Most operations for two's complement with overflow are the same as for unsigned integers with overflow. The sign can be extracted from a single bit. Comparison can be conceptually done by subtraction (which is signedness-agnostic), sign bit extraction and comparison to zero.
It's the CPU's arithmetic features that allow the compiler to produce computations in two's complement, though.
unsigned s = a + b
Note that the way the plus is computed here doesn't depend on the result's type. Instead it depends on the types of the operands on the right of the equals sign.
So when executing ADD s d, should the CPU treat s and d as unsigned or signed?
CPU instructions don't know about the types; those are only used by the compiler. Also, there's no difference between adding two unsigned numbers and adding two signed numbers, so it would be stupid to have two instructions for the same operation.