As we all know, unpickling unauthenticated data received over TCP is very risky. And sending an unencrypted authentication key over TCP (as part of a pickle) would make the authentication useless.
When a proxy is pickled, the authkey is deliberately dropped. When the proxy is unpickled, the authkey used for the reconstructed proxy is current_process().authkey. So you can "fix" the example by setting current_process().authkey to match the one used by the manager:
    import multiprocessing
    from multiprocessing import managers
    import pickle
    import time

    class MyManager(managers.SyncManager):
        pass

    def client():
        # authkey must be bytes in Python 3
        mgr = MyManager(address=("localhost", 2288), authkey=b"12345")
        mgr.connect()
        l = mgr.list()
        multiprocessing.current_process().authkey = b"12345"  # <--- HERE
        l = pickle.loads(pickle.dumps(l))

    def server():
        mgr = MyManager(address=("", 2288), authkey=b"12345")
        mgr.get_server().serve_forever()

    if __name__ == "__main__":
        server_proc = multiprocessing.Process(target=server)
        client_proc = multiprocessing.Process(target=client)
        server_proc.start()
        time.sleep(0.5)  # give the server a moment to start listening
        client_proc.start()
        client_proc.join()
        server_proc.terminate()
        server_proc.join()
In practice, all processes using the manager should have current_process().authkey set to the same value.
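Note that you usually only need to set the key once, in the parent, before spawning: a child Process inherits the parent's current_process().authkey by default, with both the fork and spawn start methods. A minimal sketch demonstrating this (the key value is of course arbitrary):

    import multiprocessing

    SHARED_KEY = b"12345"  # arbitrary example key; all cooperating processes must agree

    def show_authkey(q):
        # Report the authkey this process would use when unpickling a proxy.
        q.put(bytes(multiprocessing.current_process().authkey))

    if __name__ == "__main__":
        # Set the key in the parent before creating the child; the child's
        # authkey defaults to the parent's at Process creation time.
        multiprocessing.current_process().authkey = SHARED_KEY
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=show_authkey, args=(q,))
        p.start()
        p.join()
        print(q.get() == SHARED_KEY)  # -> True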
I don't claim that multiprocessing supports distributed computing very well, but as far as I can see, things are working as intended.