gh-96268: Fix loading invalid UTF-8 #96270
Conversation
This makes tokenizer.c:valid_utf8 match stringlib/codecs.h:decode_utf8. This also fixes the related test so it will always detect the expected failure and error message.
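For context, here is a minimal Python sketch (illustration only, not the C code being changed) of the kinds of byte sequences CPython's UTF-8 decoder rejects and that a simple length-based check would not catch:

# Illustration only: the real check lives in C. CPython's UTF-8 decoder
# rejects stray continuation bytes, overlong encodings such as b'\xc0\x80',
# and lead bytes like 0xC0 that can never start a valid sequence, so merely
# counting continuation bytes is not an equivalent check.
samples = [
    b"abc",        # plain ASCII: valid
    b"\xd0\x9f",   # two-byte sequence for 'П': valid
    b"\xc0\x80",   # overlong encoding of NUL: invalid
    b"\xc0",       # 0xC0 is never a valid UTF-8 lead byte
    b"\x80",       # stray continuation byte: invalid
]
for raw in samples:
    try:
        raw.decode("utf-8")
        print(raw, "-> valid")
    except UnicodeDecodeError as exc:
        print(raw, "-> invalid:", exc.reason)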
Lib/test/test_source_encoding.py (outdated)

@@ -14,11 +14,11 @@ class MiscSourceEncodingTest(unittest.TestCase):

    def test_pep263(self):
        self.assertEqual(
            "�����".encode("utf-8"),
            "ðÉÔÏÎ".encode("utf-8"),
Sorry, these were changed unintentionally by my editor. Going to revert...
Ah, actually these were added by GitHub's generated commit 6d43cc, which accepted @ezio-melotti's suggestion. Seems like a bug in GitHub, which isn't surprising given this file is not valid UTF-8. I'll clean this up by hand.
Got me nerd-sniped. :-)
Lib/test/test_source_encoding.py (outdated)

        # not via a signal.
        self.assertGreaterEqual(rc, 1)
        self.assertIn(b"Non-UTF-8 code starting with", stderr)
        self.assertIn(b"on line 5", stderr)
Am I miscounting here? The string in the template appears to me to be on the 4th line.
Good catch. Indeed you are correct.
The generation of the error message adds 1 to tok->lineno. I don't know if that's correct or not, but it seems like other error messages that report tok->lineno don't do that.
Hm. There's a comment in tokenizer.c right above the PyErr_Format() call explaining why 1 has to be added. But I wonder if your change disturbed this logic? I don't understand how, though. It could also be that the comment was wrong. Maybe @pablogsal understands this logic?
IIRC this is because the parser (or at least some parts of it) emits line numbers that start at 0, while the rest of the VM needs line numbers starting at 1 to display exceptions. But it has been some time since I had to deal with this, so some details could be missing.
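As a small illustration of that 0- vs. 1-based distinction (hypothetical numbers, not actual tokenizer state): if the counter is already 1-based when the error is formatted, an extra + 1 points one line past the offending byte.

# Hypothetical sketch of the off-by-one being discussed, not CPython code.
lines = ["# 1", "# 2", "# 3", "bad byte here"]   # offending content on line 4
zero_based = lines.index("bad byte here")        # 3 in a 0-based world
one_based = zero_based + 1                       # 4: what the message should say
reported = one_based + 1                         # 5: what an extra "+ 1" reports
print(one_based, reported)                       # 4 5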
The mystery is that in the updated test, an error in a string on line 4 is reported at line 5. Unless I misread the test.
Hummmm, that may be pointing to something breaking. I bet that this is pointing past the file. Without looking in detail I don't know exactly what could be going on with this specific test. Unfortunately it may be that there was some implicit contract on the reporting that these changes are breaking.
Ah, I think there is some kind of bug here. These are the errors in different versions:
❯ python3.8 lel.py
File "lel.py", line 4
SyntaxError: Non-UTF-8 code starting with '\xc0' in file lel.py on line 4, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
❯ python3.9 lel.py
SyntaxError: Non-UTF-8 code starting with '\xc0' in file /Users/pgalindo3/lel.py on line 4, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
❯ python3.10 lel.py
SyntaxError: Non-UTF-8 code starting with '\xc0' in file /Users/pgalindo3/lel.py on line 5, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
So something changed in 3.10 around this, it seems.
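For anyone wanting to reproduce that comparison, here is a rough sketch (the contents of lel.py below are an assumption; the real template lives in test_source_encoding.py): write a file whose fourth line contains a byte that can never appear in UTF-8, with no encoding declaration, and run it.

# Sketch of a reproducer under the assumed file layout; not the actual test.
import subprocess, sys

source = (
    b"# line 1\n"
    b"# line 2\n"
    b"# line 3\n"
    b'x = "\xc0"\n'   # the invalid byte sits on line 4
)
with open("lel.py", "wb") as fh:
    fh.write(source)

result = subprocess.run([sys.executable, "lel.py"], capture_output=True)
print(result.stderr.decode())  # 3.9 says "on line 4"; 3.10 said "on line 5" before the fix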
I think that line is just wrong: the line number it generates is already correct for the exception. I made this change:
diff --git a/Parser/tokenizer.c b/Parser/tokenizer.c
index f2606f17d1..924c97ba8a 100644
--- a/Parser/tokenizer.c
+++ b/Parser/tokenizer.c
@@ -535,7 +535,7 @@ ensure_utf8(char *line, struct tok_state *tok)
"in file %U on line %i, "
"but no encoding declared; "
"see https://peps.python.org/pep-0263/ for details",
- badchar, tok->filename, tok->lineno + 1);
+ badchar, tok->filename, tok->lineno);
return 0;
}
return 1;
And the full (current) test suite passes without errors:
== Tests result: SUCCESS ==
407 tests OK.
29 tests skipped:
test_curses test_dbm_gnu test_devpoll test_epoll test_gdb
test_idle test_ioctl test_launcher test_msilib
test_multiprocessing_fork test_ossaudiodev test_perf_profiler
test_smtpnet test_socketserver test_spwd test_startfile test_tcl
test_tix test_tkinter test_ttk test_ttk_textonly test_turtle
test_urllib2net test_urllibnet test_winconsoleio test_winreg
test_winsound test_xmlrpc_net test_zipfile64
Total duration: 6 min 1 sec
@mdboom do you want to include the fix in this PR?
@pablogsal: Yes, it makes sense to just fix this in this PR.
@pablogsal: I leave it to you to decide whether this is backported to 3.11. If we don't backport, I'll file a separate PR for 3.11 to make the tests pass on buildbots.
I'll let @pablogsal decide about the 3.11 and 3.10 backports. (It would be less risky to backport just the lineno fix perhaps?)
🤖 New build scheduled with the buildbot fleet by @gvanrossum for commit f8e9e6e 🤖 If you want to schedule another build, you need to add the ":hammer: test-with-buildbots" label again.
Thanks. I think it's time to merge this.
Thanks @mdboom for the PR, and @gvanrossum for merging it 🌮🎉.. I'm working now to backport this PR to: 3.11.
GH-96668 is a backport of this pull request to the 3.11 branch.
This makes tokenizer.c:valid_utf8 match stringlib/codecs.h:decode_utf8. It also fixes an off-by-one error introduced in 3.10 for the line number when the tokenizer reports bad UTF8. (cherry picked from commit 8bc356a) Co-authored-by: Michael Droettboom <mdboom@gmail.com>