LoggingHandler hangs the thread

Our application runs on Google Kubernetes Engine using the gcr.io/google-appengine/jetty image and uses com.google.cloud.logging.LoggingHandler to publish logs to Stackdriver. We noticed some worker threads becoming unresponsive over time. When the pod shuts down, we see the following exception for each stuck thread:

java.lang.RuntimeException: java.lang.InterruptedException
	at com.google.cloud.logging.LoggingImpl.flush(LoggingImpl.java:545)
	at com.google.cloud.logging.LoggingImpl.write(LoggingImpl.java:525)
	at com.google.cloud.logging.LoggingHandler.publish(LoggingHandler.java:273)
	at java.util.logging.Logger.log(Logger.java:738)
	at org.slf4j.impl.JDK14LoggerAdapter.log(JDK14LoggerAdapter.java:582)
	at org.slf4j.impl.JDK14LoggerAdapter.error(JDK14LoggerAdapter.java:500)
	...
Caused by: java.lang.InterruptedException
	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:449)
	at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
	at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:63)
	at com.google.cloud.logging.LoggingImpl.flush(LoggingImpl.java:543)
	... 30 more

We'll try to capture a thread dump to see why the future never completes, but the issue seems dangerous in itself: LoggingImpl.java:543 calls the no-timeout overload of Future.get(), so any logger call can block the calling thread forever unless it is interrupted. Would it be possible to use the timeout overload with a reasonably large timeout, e.g. 60 seconds?
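For illustration, a minimal sketch of the behavior we'd expect from a bounded wait. The class and helper names here are hypothetical, not part of the library; the point is only that Future.get(timeout, unit) throws TimeoutException after the given bound instead of parking the caller indefinitely, as the no-timeout overload does:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedFlush {
    // Hypothetical helper: wait on a write future with an upper bound,
    // so a stalled RPC cannot hang the logging thread forever.
    static String awaitWithTimeout(Future<String> future, long timeoutSeconds) {
        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            // Give up on this write rather than block indefinitely.
            future.cancel(true);
            return null;
        } catch (InterruptedException e) {
            // Preserve the interrupt status for callers further up the stack.
            Thread.currentThread().interrupt();
            return null;
        } catch (ExecutionException e) {
            throw new RuntimeException(e.getCause());
        }
    }

    public static void main(String[] args) {
        // A future that never completes stands in for a stalled log write.
        Future<String> stuck = new CompletableFuture<>();
        String result = awaitWithTimeout(stuck, 1);
        // With the no-timeout Future.get() this call would never return;
        // with the bounded overload it comes back after ~1 second.
        System.out.println(result == null ? "timed out" : result);
    }
}
```

A 60-second bound would still surface genuinely stuck writes (via the TimeoutException path) while keeping application threads recoverable.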