Message316177
| Author | serhiy.storchaka |
|---|---|
| Recipients | adelfino, eric.smith, serhiy.storchaka |
| Date | 2018-05-04.15:25:19 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1525447519.77.0.682650639539.issue33422@psf.upfronthosting.co.za> |
| In-reply-to | |

Content:
I don't think we need to support prefixes without quotes or with triple quotes. `'ur'` is not a valid prefix. Using simplified code from tokenize:
```python
import itertools

_strprefixes = [''.join(u) + q
                for t in ('b', 'r', 'u', 'f', 'br', 'rb', 'fr', 'rf')
                for u in itertools.product(*[(c, c.upper()) for c in t])
                for q in ("'", '"')]
```
Or you can use `tokenize._all_string_prefixes()` directly:
```python
import tokenize

_strprefixes = [p + q
                for p in tokenize._all_string_prefixes()
                for q in ("'", '"')]
```
But it may be simpler to just convert the string to lower case before looking it up in the symbols dict. Then:
```python
_strprefixes = [p + q
                for p in ('b', 'r', 'u', 'f', 'br', 'rb', 'fr', 'rf')
                for q in ("'", '"')]
```
History:

| Date | User | Action | Args |
|---|---|---|---|
| 2018-05-04 15:25:19 | serhiy.storchaka | set | recipients: + serhiy.storchaka, eric.smith, adelfino |
| 2018-05-04 15:25:19 | serhiy.storchaka | set | messageid: <1525447519.77.0.682650639539.issue33422@psf.upfronthosting.co.za> |
| 2018-05-04 15:25:19 | serhiy.storchaka | link | issue33422 messages |
| 2018-05-04 15:25:19 | serhiy.storchaka | create | |