I have a large vector F with a few million entries that gives this inconsistent behaviour when taking norms.
As described in the comments, your issue is that `float16` is too small to represent the intermediate results: its maximum representable value is 65504, and computing a norm squares every entry before summing. A much simpler test case is:
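A minimal sketch of the kind of overflow involved (the specific value here is an illustrative assumption, not taken from the original post):

```python
import numpy as np

# Squaring even one moderate entry exceeds the float16 maximum (65504),
# so the intermediate sum of squares inside the norm overflows.
x = np.float16([1000.0])
print(x * x)              # [inf] -- 1000**2 = 1e6 > 65504
print(np.linalg.norm(x))  # inf on builds that accumulate in float16
```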
To avoid overflow, you can divide by your largest value, and then remultiply:
```python
def safe_norm(x):
    xmax = np.max(np.abs(x))  # largest magnitude, so negative entries are handled too
    return np.linalg.norm(x / xmax) * xmax
```
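A quick check of the rescaling trick on a random vector (the array size and value range here are illustrative assumptions, smaller than the few-million-entry vector in the question):

```python
import numpy as np

def safe_norm(x):
    xmax = np.max(np.abs(x))  # scale into [-1, 1] before squaring
    return np.linalg.norm(x / xmax) * xmax

x = np.float16(np.random.rand(10_000) * 100)
print(np.linalg.norm(x))  # may overflow to inf, depending on how NumPy accumulates
print(safe_norm(x))       # finite: the scaled sum of squares stays below 65504
```

After dividing by the largest magnitude, every entry lies in [-1, 1], so the sum of squares is bounded by the number of entries rather than by the squared values themselves.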
There's perhaps an argument that `np.linalg.norm` should do this by default for `float16` inputs.