Message183427
| Author | ncoghlan |
|---|---|
| Recipients | Arfrever, ishimoto, loewis, methane, mrabarnett, ncoghlan, pitrou, rurpy2, serhiy.storchaka, vstinner |
| Date | 2013-03-04.09:18:55 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1362388736.02.0.958951117196.issue15216@psf.upfronthosting.co.za> |
| In-reply-to |
| Content | |
|---|---|
That's a fair point - I think it's acceptable to throw an error in the case of *already decoded* characters that haven't been read. There's also a discussion on python-ideas about an explicit API for clearing a stream's internal buffers and pushing data back into a stream. If that is added, then set_encoding() would be free to error out if there was any already buffered data - it would be up to the application to call clear_buffer() before calling set_encoding(), and to deal with any such data appropriately (such as calling push_data() with the results of the clear_buffer() call).
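The drain-then-switch dance described in the message can be illustrated with a toy stream. All three method names (set_encoding(), clear_buffer(), push_data()) are hypothetical, taken from the python-ideas proposal referenced above, and the class is a minimal stand-in for illustration, not anything resembling the real TextIOWrapper internals:

```python
import io

class ReencodableStream:
    """Sketch of the API floated on python-ideas. The names
    set_encoding(), clear_buffer() and push_data() come from that
    discussion; nothing here is an actual stdlib interface."""

    def __init__(self, raw, encoding="utf-8"):
        self._raw = raw          # underlying binary stream
        self._pending = b""      # read-ahead bytes not yet consumed
        self.encoding = encoding

    def clear_buffer(self):
        """Return and discard any internally buffered data."""
        data, self._pending = self._pending, b""
        return data

    def push_data(self, data):
        """Push bytes back so they are consumed before the raw stream."""
        self._pending = data + self._pending

    def set_encoding(self, encoding):
        """Switch encodings, erroring out while buffered data exists."""
        if self._pending:
            raise ValueError("buffered data present; call clear_buffer() first")
        self.encoding = encoding

    def read(self):
        """Decode buffered bytes plus the rest of the raw stream."""
        return (self.clear_buffer() + self._raw.read()).decode(self.encoding)

# The application-side sequence the message proposes:
raw = io.BytesIO("déjà".encode("latin-1"))
stream = ReencodableStream(raw, encoding="utf-8")
stream.push_data(raw.read(2))        # simulate read-ahead into the buffer

pending = stream.clear_buffer()      # drain the buffer first...
stream.set_encoding("latin-1")       # ...so set_encoding() succeeds...
stream.push_data(pending)            # ...then restore the drained bytes
print(stream.read())                 # déjà
```

With buffered data still present, set_encoding() raises instead of silently re-interpreting bytes under the new codec, which is exactly the error-out behaviour the message argues is acceptable.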
| History | |||
|---|---|---|---|
| Date | User | Action | Args |
| 2013-03-04 09:18:56 | ncoghlan | set | recipients: + ncoghlan, loewis, ishimoto, pitrou, vstinner, mrabarnett, Arfrever, methane, rurpy2, serhiy.storchaka |
| 2013-03-04 09:18:56 | ncoghlan | set | messageid: <1362388736.02.0.958951117196.issue15216@psf.upfronthosting.co.za> |
| 2013-03-04 09:18:56 | ncoghlan | link | issue15216 messages |
| 2013-03-04 09:18:55 | ncoghlan | create | |