
Difference between String.length() and String.getBytes().length

I am a beginner, self-learning Java programming.
So, I want to know the difference between String.length() and String.getBytes().length in Java.

Which is more suitable for checking the length of a string?


String.length() is the number of UTF-16 code units needed to represent the string. That is, it is the number of char values used to represent the string (and thus equal to toCharArray().length). For Western languages this is typically the same as the number of Unicode characters (code points) in the string. The two values differ whenever UTF-16 surrogate pairs are used; such pairs are needed only to encode characters outside the BMP and are rare in most writing.
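For example, here is a minimal sketch (the sample string is my own choice, not from the question) showing how a character outside the BMP inflates length():

    public class LengthDemo {
        public static void main(String[] args) {
            // "a" followed by 😀 (U+1F600), which needs a surrogate pair
            String s = "a\uD83D\uDE00";
            System.out.println(s.length());                      // 3 char values
            System.out.println(s.toCharArray().length);          // 3, same as length()
            System.out.println(s.codePointCount(0, s.length())); // 2 Unicode code points
        }
    }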

String.getBytes().length is the number of bytes needed to represent your string in the platform's default encoding. For example, if the default encoding were UTF-16 (rare), it would be exactly 2x the value returned by String.length(). More commonly, your platform's default encoding will be a multi-byte encoding like UTF-8.
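A small sketch of that point, passing explicit charsets rather than relying on the platform default so the output is reproducible (the string "héllo" is just an illustrative choice):

    import java.nio.charset.StandardCharsets;

    public class BytesDemo {
        public static void main(String[] args) {
            String s = "héllo";
            System.out.println(s.length());                                   // 5 code units
            // UTF_16LE has no byte-order mark, so this is exactly 2x length()
            System.out.println(s.getBytes(StandardCharsets.UTF_16LE).length); // 10
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 6, 'é' needs 2 bytes
        }
    }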

This means the relationship between those two lengths is more complex. For ASCII strings, the two calls will almost always produce the same result (barring unusual default encodings that don't encode the ASCII subset in one byte each). For non-ASCII strings, String.getBytes().length is likely to be larger, as it counts the bytes needed to encode the string, while length() counts 2-byte code units.
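To make that concrete (assuming UTF-8 here, with sample strings of my own, so the numbers are deterministic):

    import java.nio.charset.StandardCharsets;

    public class AsciiVsNonAscii {
        public static void main(String[] args) {
            String ascii = "hello";
            System.out.println(ascii.length());                                // 5
            System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length); // 5, one byte each

            String cjk = "日本語";
            System.out.println(cjk.length());                                // 3 code units
            System.out.println(cjk.getBytes(StandardCharsets.UTF_8).length); // 9, three bytes each
        }
    }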

Which is more suitable?

Usually you'll use String.length() in concert with other string methods that take offsets into the string. E.g., to get the last character, you'd use str.charAt(str.length()-1). You'd only use getBytes().length if for some reason you were dealing with the byte-array encoding returned by getBytes().
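A quick sketch of both uses side by side (the variable names are mine, for illustration only):

    import java.nio.charset.StandardCharsets;

    public class UsageDemo {
        public static void main(String[] args) {
            String str = "Java";
            // length() pairs with index-based methods such as charAt()
            System.out.println(str.charAt(str.length() - 1)); // 'a'

            // getBytes().length only matters once you work with the encoded
            // bytes, e.g. to size a buffer before writing them out
            byte[] encoded = str.getBytes(StandardCharsets.UTF_8);
            System.out.println(encoded.length); // 4
        }
    }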