Will an update query using joins take more time if the joined columns are declared as nvarchar(255), compared to declaring them as nvarchar(50)?
If the answer to the above question is yes, then why is this happening? The data inside the field is at most 5 characters long, so SQL should be allocating space for only those 5 characters.
TL;DR: The simple answer is the performance should be comparable.
That being said, some databases (such as SQLite) do not handle this gracefully, as they use tree-based text indexes with larger memory requirements for deeper trees.
This question has similar answers: nvarchar column size impact on performance
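As a quick sanity check on the "only the actual characters are stored" point, here is a minimal sketch using SQLite (mentioned above) via Python's built-in sqlite3 module. Note that SQLite in particular ignores the declared length entirely: both columns end up as plain TEXT, and only the inserted characters are stored. The table and column names are made up for the demo.

```python
import sqlite3

# SQLite treats NVARCHAR(50) and NVARCHAR(255) identically: both get
# TEXT affinity, and the declared maximum is not enforced or stored.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE short_col (name NVARCHAR(50))")
con.execute("CREATE TABLE long_col  (name NVARCHAR(255))")
con.execute("INSERT INTO short_col VALUES ('abcde')")
con.execute("INSERT INTO long_col  VALUES ('abcde')")

# Both report the same stored length and type: (5, 'text').
stored_short = con.execute(
    "SELECT length(name), typeof(name) FROM short_col").fetchone()
stored_long = con.execute(
    "SELECT length(name), typeof(name) FROM long_col").fetchone()
```

SQL Server's nvarchar is likewise a variable-length type, so a 5-character value occupies the same space whether the column is declared nvarchar(50) or nvarchar(255).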
There are two basic types of search.
If there is an index on the table being scanned, then the database will search the index first. This is essentially like using a street directory: you go to the country, then the suburb, then alphabetically to the letter the street starts with, then scan those entries to find the street. This saves you reading every street in the directory. As I am saying this, I realise that depending on your age you may never have seen a street directory...
Table scanning is what happens if there is no index. The database engine will go through each row, one after another (usually in order of insertion) and compare each row to the expected result. When it finds the correct one, it will return it.
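You can see both search types in the query planner's own output. This is a sketch using SQLite's EXPLAIN QUERY PLAN through Python's sqlite3 module; the table and index names are invented for the demo.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE streets (name NVARCHAR(255))")
con.executemany("INSERT INTO streets VALUES (?)",
                [("Acacia Ave",), ("Baker St",), ("Cherry Ln",)])

# No index yet: the planner must visit every row (a table scan).
scan_plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM streets WHERE name = 'Baker St'"
).fetchone()[-1]

# With an index, the planner seeks directly, like the street directory.
con.execute("CREATE INDEX idx_streets_name ON streets (name)")
seek_plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM streets WHERE name = 'Baker St'"
).fetchone()[-1]

# scan_plan mentions SCAN; seek_plan mentions SEARCH ... INDEX.
```

The plan text switches from a SCAN of the table to a SEARCH using the index as soon as the index exists; the declared column length plays no part in that decision.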
The way it does the comparison is left to right, letter by letter, so with the same data it will take the same time. If the nvarchar(255) field had the same row count but not the same length (i.e. there are the same number of rows, but the content is longer), then the search would take longer, as it would need to scan more letters.
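The left-to-right comparison can be sketched in a few lines. This hypothetical helper counts how many characters must be examined before the comparison can be decided; note that the cost depends only on the actual content (and where the strings first differ), never on any declared maximum length.

```python
def chars_examined(a: str, b: str) -> int:
    """Count the character positions a left-to-right comparison
    must look at before it can decide the result."""
    count = 0
    for ch_a, ch_b in zip(a, b):
        count += 1
        if ch_a != ch_b:
            return count  # first mismatch decides the comparison
    return count  # all shared positions matched; length decides

# Short content is decided quickly...
short_diff = chars_examined("apple", "apric")        # differs at position 3
# ...while longer content with a long common prefix costs more.
long_diff = chars_examined("Main Street North",
                           "Main Street South")      # differs at position 13
```

Running this, `short_diff` is 3 and `long_diff` is 13: the extra work comes from the longer shared prefix, which is exactly why longer content (not a larger declared size) slows a scan down.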