Concurrency models (was: Timer)
Alan Kennedy
alanmk at hotmail.com
Tue Oct 28 15:38:54 EST 2003
More information about the Python-list mailing list
[Alan Kennedy]
>> I think threads are really a useful abstraction for programmers who
>> are learning to deal with developing servers for the first time. And
>> threads fulfil their function very well, to a point.
>>
>> But they have fundamental scalability problems: the C10K problem
>> illustrates this well: surely it should be possible for a modern
>> computer, with giga-everything (!), to serve 10,000 clients
>> simultaneously?

[Ian Bicking]
> Fundamental scalability problems hit very few people. And in fact
> there's nothing fundamental about them -- the particulars of the
> interpreter and your concurrency model are only a very small part of
> the performance of a complex application. Using threads or async and
> heavy caching, maybe you could outperform a multi-process model by a
> factor of two, making non-SMP scalability unimportant. Or using
> multiple processes, you might be able to migrate applications to
> other computers more simply. Real programs have a wide variety of
> constraints.

In general I agree. However, with the advent of XML web services, I
have an intuition that processing all that XML and mapping it to local
functionality (using reflection/introspection, etc.) will place a much
larger burden on the CPU than we generally find today. Throw in XSLT,
XQuery, WebDAV et al., and CPU could potentially become the limiting
factor in server throughput. But I could be completely wrong, of
course :-)

While reading about the C10K problem, I came across a great academic
paper on architecting high-performance network servers using a
combination of event-driven and threaded architectures, with a
specially defined IPC mechanism tying it all together. It involved the
nice concept of "back pressure" (my words), whereby a component that
became saturated could notify the event scheduler and cut off the
sources of requests. I'd love to find that paper again.
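The "back pressure" idea can be sketched in Python with a bounded queue
between stages: when a stage's inbox fills up, new submissions are
refused, and the caller (an event scheduler, say) knows to stop feeding
that stage. The Stage class and its names below are my own illustration,
not taken from the paper:

```python
import queue
import threading

class Stage:
    """One processing stage with a bounded inbox.

    Illustrative sketch only -- the class and method names are
    hypothetical, not from any real framework or from the paper.
    """

    def __init__(self, process, maxsize=8):
        # A bounded queue is the pressure valve: it can hold at most
        # `maxsize` pending requests before refusing more.
        self.inbox = queue.Queue(maxsize=maxsize)
        self.process = process
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, request):
        """Non-blocking enqueue. A False return tells the caller this
        stage is saturated: cut off the request source for a while."""
        try:
            self.inbox.put_nowait(request)
            return True
        except queue.Full:
            return False

    def _run(self):
        # Worker loop: drain the inbox one request at a time.
        while True:
            request = self.inbox.get()
            self.process(request)
            self.inbox.task_done()

if __name__ == "__main__":
    import time
    # A deliberately slow stage with a tiny inbox: early submissions
    # succeed, then the stage saturates and pushes back.
    slow = Stage(lambda r: time.sleep(0.05), maxsize=2)
    results = [slow.submit(i) for i in range(10)]
    print(results)
```

The first submission always succeeds (the inbox starts empty); once the
worker is busy and the inbox is full, `submit` starts returning False,
which is the back-pressure signal.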
--
alan kennedy
-----------------------------------------------------
check http headers here: http://xhaus.com/headers
email alan: http://xhaus.com/mailto/alan