```python
>>> 1.__hash__()
  File "<stdin>", line 1
SyntaxError: invalid syntax
```
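As an aside (not part of the traceback above): the usual workarounds are to separate the literal from the dot with whitespace or parentheses, so the tokenizer cannot read `1.` as the start of a float literal. A quick sketch:

```python
# Both forms keep the tokenizer from consuming the dot as part of a
# float literal, so the attribute access parses normally.
print((1).__hash__())  # parentheses around the literal; prints 1 in CPython
print(1 .__hash__())   # whitespace before the dot; prints 1 in CPython
```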
Read carefully; it says:

> Whitespace is needed between two tokens only if their concatenation could otherwise be interpreted as a different token (e.g., `ab` is one token, but `a b` is two tokens).
`1.__hash__()` is tokenized as:

```python
import io
import tokenize

for token in tokenize.tokenize(io.BytesIO(b"1.__hash__()").readline):
    print(token.string)
#>>> utf-8
#>>> 1.
#>>> __hash__
#>>> (
#>>> )
#>>>
```
Python chooses the longest valid token at each point (sometimes called "maximal munch"); after tokenizing, no two adjacent tokens should be combinable into a single valid token. The logic is very similar to that in your other question.
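That greedy behaviour is easy to demonstrate with the `tokenize` module (`token_strings` here is just a throwaway helper of mine, not a stdlib function):

```python
import io
import tokenize

def token_strings(source):
    """Tokenize a snippet and return the interesting token strings
    (dropping the encoding marker and end-of-input tokens)."""
    skip = {tokenize.ENCODING, tokenize.NEWLINE, tokenize.ENDMARKER}
    toks = tokenize.tokenize(io.BytesIO(source.encode()).readline)
    return [t.string for t in toks if t.type not in skip]

print(token_strings("ab"))   # ['ab']      -- one NAME token
print(token_strings("a b"))  # ['a', 'b']  -- whitespace forces two tokens
print(token_strings("1.5"))  # ['1.5']     -- greedily one NUMBER, not '1' '.' '5'
```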
The confusion seems to come from not recognizing the tokenizing step as a completely distinct step. If the grammar allowed splitting up tokens solely to make the parser happy, then surely you'd expect `_or1.`

to tokenize as

```
_ or 1.
```

but there is no such rule, so it tokenizes as

```
_or1 .
```
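This can be checked directly with the `tokenize` module:

```python
import io
import tokenize

# The NAME token greedily swallows "_or1"; the dot is left as its own OP.
for tok in tokenize.tokenize(io.BytesIO(b"_or1.").readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
#>>> ENCODING 'utf-8'
#>>> NAME '_or1'
#>>> OP '.'
#>>> ... (plus NEWLINE/ENDMARKER bookkeeping tokens)
```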