Python: UnicodeDecodeError: 'utf8' Codec Can't Decode Byte 0xc0 In Position 0: Invalid Start Byte
Solution 1:
This is, indeed, invalid UTF-8. In UTF-8, only code points in the range U+0080 to U+07FF, inclusive, can be encoded using two bytes; read the Wikipedia article more closely and you will see the same thing. A lead byte of 0xc0 or 0xc1 could only start a two-byte sequence for a code point below U+0080, which must be encoded in a single byte, so any such sequence would be an overlong encoding. As a result, the byte 0xc0 may never appear in valid UTF-8, and the same is true of 0xc1.
Some UTF-8 decoders have erroneously accepted overlong sequences like C0 AF (an overlong encoding of "/") as valid UTF-8, which has led to security vulnerabilities in the past.
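To see the error in isolation, here is a minimal Python 3 sketch; the two-byte sample C0 AF is the overlong "/" sequence mentioned above, and the values are purely illustrative:

```python
# 0xC0 can never start a valid UTF-8 sequence, so strict decoding raises
# the exact error from the question.
data = b"\xc0\xaf"  # sample bytes: the classic overlong encoding of "/"

try:
    data.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0xc0 in position 0: invalid start byte

# If a best-effort decode is acceptable, the standard 'replace' or 'ignore'
# error handlers skip the bad bytes instead of raising.
print(data.decode("utf-8", errors="replace"))  # prints two U+FFFD replacement characters
```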
Solution 2:
Found one encoding that actually accepts 0xc0: encoding="ISO-8859-1" (from https://stackoverflow.com/a/27456542/4355695).
But this only works if you can be sure the rest of the file contains no non-ASCII/multi-byte text, so it is not an exact answer to the question. It may still help if, like me, you have no such characters in your file and just want Python to load the thing when both the utf-8 and ascii encodings error out.
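A minimal sketch of this fallback, assuming a hypothetical filename "data.csv": ISO-8859-1 (latin-1) maps every possible byte value to a character, so the decode step can never fail, but non-Latin-1 text will come out as mojibake rather than raise an error.

```python
def read_text_leniently(path):
    """Try strict UTF-8 first, then fall back to ISO-8859-1."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except UnicodeDecodeError:
        # ISO-8859-1 accepts every byte, including 0xc0, so this cannot raise.
        with open(path, encoding="ISO-8859-1") as f:
            return f.read()

text = read_text_leniently("data.csv")  # "data.csv" is an illustrative path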
More on ISO-8859-1: What is the difference between UTF-8 and ISO-8859-1?