feat: speed up incoming packet reader by bdraco · Pull Request #1314 · python-zeroconf/python-zeroconf

Conversation

@bdraco

This is a ~14% speedup:

before: Parsing 100000 incoming messages took 2.3616396670695394 seconds
after: Parsing 100000 incoming messages took 2.012439667014405 seconds

bdraco

@bdraco bdraco changed the title from "feat: speed up outgoing packet writer" to "feat: speed up incoming packet reader"

Nov 13, 2023

@codecov

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (55cf4cc) 99.77% compared to head (872591a) 99.77%.
Report is 2 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1314   +/-   ##
=======================================
  Coverage   99.77%   99.77%           
=======================================
  Files          29       29           
  Lines        3053     3079   +26     
  Branches      513      516    +3     
=======================================
+ Hits         3046     3072   +26     
  Misses          5        5           
  Partials        2        2           


@bdraco

The problem is it's a 25% slowdown for the pure Python case.

@bdraco

I think this still has potential if we use an LRU cache, since 99% of the data is the same.

Type and class can be one key.

@bdraco

Maybe similar to how we do it in aioesphomeapi: use a cache wrapper that Cython can convert from memoryview to bytes, and then Python int to uint.

This should work because the view is really bytes in pure Python anyway.

But is it faster...?

@bdraco

At least in pure Python, we probably reduce GC cycles by 99%.

@bdraco

Maybe it's better to keep the specialized unpacks but use the views instead.

@bdraco bdraco marked this pull request as ready for review

November 15, 2023 05:41

@bdraco bdraco deleted the refactor_packets branch

November 15, 2023 05:41
