We are developing a program that needs to flush a GZIPOutputStream, i.e. force it to compress and send whatever data it has buffered. The problem is that GZIPOutputStream's flush method does not work as expected (force compression and sending of the data); instead, the stream waits for more data so it can compress more efficiently.
When you call finish(), the data is compressed and sent through the output stream, but the GZIPOutputStream (not the underlying stream) is closed, so we cannot write more data until we create a new GZIPOutputStream, which costs time and performance.
I hope someone can help with this.
Regards.
I haven't tried it yet, and this advice won't be useful until you have Java 7 in hand, but the documentation for GZIPOutputStream's flush() method, which it inherits from DeflaterOutputStream, says that it relies on the flush mode specified at construction time: the syncFlush argument (related to Deflater#SYNC_FLUSH) determines whether pending data to be compressed gets flushed. This syncFlush argument is also accepted by GZIPOutputStream at construction time. It sounds like you want Deflater#SYNC_FLUSH or maybe even Deflater#FULL_FLUSH, but before digging down that far, first try the two-argument or four-argument GZIPOutputStream constructor and pass true for the syncFlush argument. That will activate the flush behavior you want.
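A minimal sketch of that suggestion (Java 7+; the payload and the size check are just for demonstration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class SyncFlushDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // The second argument is syncFlush: flush() will then emit a
        // SYNC_FLUSH block instead of waiting for more input.
        GZIPOutputStream gzip = new GZIPOutputStream(sink, true);
        gzip.write("hello".getBytes("UTF-8"));
        gzip.flush();
        // More than the 10-byte gzip header has reached the sink already,
        // i.e. the compressed data was pushed out before close().
        System.out.println(sink.size() > 10);
        gzip.close();
    }
}
```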
None of the other answers worked for me. The stream still refused to flush, because the native code that GZIPOutputStream uses holds on to the data.
Thankfully, I discovered that someone has implemented a FlushableGZIPOutputStream as part of the Apache Tomcat project. Here is the magic part:
@Override
public synchronized void flush() throws IOException {
    if (hasLastByte) {
        // - do not allow the gzip header to be flushed on its own
        // - do not do anything if there is no data to send

        // trick the deflater to flush
        /**
         * Now this is tricky: We force the Deflater to flush its data by
         * switching compression level. As yet, a perplexingly simple workaround
         * for
         * http://developer.java.sun.com/developer/bugParade/bugs/4255743.html
         */
        if (!def.finished()) {
            def.setLevel(Deflater.NO_COMPRESSION);
            flushLastByte();
            flagReenableCompression = true;
        }
    }
    out.flush();
}
You can find the whole class in this jar (if you use Maven):
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-coyote</artifactId>
    <version>7.0.8</version>
</dependency>
Or go straight to the source code of FlushableGZIPOutputStream.java. It is released under the Apache-2.0 license.
This code worked very well for me in my application.
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.Deflater;
import java.util.zip.GZIPOutputStream;

public class StreamingGZIPOutputStream extends GZIPOutputStream {

    public StreamingGZIPOutputStream(OutputStream out) throws IOException {
        super(out);
    }

    @Override
    protected void deflate() throws IOException {
        // SYNC_FLUSH is the key here, because it causes writing to the output
        // stream in a streaming manner instead of waiting until the entire
        // contents of the response are known. For a large 1 MB json example
        // this took the size from around 48k to around 50k, so the benefits
        // of sending data to the client sooner seem to far outweigh the
        // added data sent due to less efficient compression
        int len = def.deflate(buf, 0, buf.length, Deflater.SYNC_FLUSH);
        if (len > 0) {
            out.write(buf, 0, len);
        }
    }

}
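To make the effect visible, here is a minimal, self-contained sketch that inlines a trimmed copy of the class above and checks that compressed bytes reach the sink before close() (the payload and the size check are just for demonstration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.Deflater;
import java.util.zip.GZIPOutputStream;

public class StreamingDemo {
    // Trimmed copy of StreamingGZIPOutputStream, so this file runs on its own.
    static class StreamingGZIPOutputStream extends GZIPOutputStream {
        StreamingGZIPOutputStream(OutputStream out) throws IOException {
            super(out);
        }

        @Override
        protected void deflate() throws IOException {
            // SYNC_FLUSH pushes each deflated chunk downstream immediately.
            int len = def.deflate(buf, 0, buf.length, Deflater.SYNC_FLUSH);
            if (len > 0) {
                out.write(buf, 0, len);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        StreamingGZIPOutputStream gz = new StreamingGZIPOutputStream(sink);
        gz.write("chunk one".getBytes("UTF-8"));
        gz.flush();
        // The sink already holds more than the 10-byte gzip header,
        // i.e. the compressed chunk was streamed out before close().
        System.out.println(sink.size() > 10);
        gz.close();
    }
}
```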
I had the same problem on Android. The accepted answer doesn't work there, because def.setLevel(Deflater.NO_COMPRESSION); throws an exception. According to the flush method, it changes the compression level of the Deflater. So I suppose the compression change should be called before writing any data, but I'm not sure.
There are 2 other options:
Bug ID 4813885 handles this issue. A comment from "DamonHD", submitted on 9 Sep 2006 (about halfway down the bug report), contains an example FlushableGZIPOutputStream that he built on top of Jazzlib's net.sf.jazzlib.DeflaterOutputStream.
For reference, here's a (reformatted) extract:
/**
 * Substitute for GZIPOutputStream that maximises compression and has a usable
 * flush(). This is also more careful about its output writes for efficiency,
 * and indeed buffers them to minimise the number of write()s downstream which
 * is especially useful where each write() has a cost such as an OS call, a disc
 * write, or a network packet.
 */
public class FlushableGZIPOutputStream extends net.sf.jazzlib.DeflaterOutputStream {
    private final CRC32 crc = new CRC32();
    private final static int GZIP_MAGIC = 0x8b1f;
    private final OutputStream os;

    /** Set when input has arrived and not yet been compressed and flushed downstream. */
    private boolean somethingWritten;

    public FlushableGZIPOutputStream(final OutputStream os) throws IOException {
        this(os, 8192);
    }

    public FlushableGZIPOutputStream(final OutputStream os, final int bufsize) throws IOException {
        super(new FilterOutputStream(new BufferedOutputStream(os, bufsize)) {
            /** Suppress inappropriate/inefficient flush()es by DeflaterOutputStream. */
            @Override
            public void flush() {
            }
        }, new net.sf.jazzlib.Deflater(net.sf.jazzlib.Deflater.BEST_COMPRESSION, true));
        this.os = os;
        writeHeader();
        crc.reset();
    }

    public synchronized void write(byte[] buf, int off, int len) throws IOException {
        somethingWritten = true;
        super.write(buf, off, len);
        crc.update(buf, off, len);
    }

    /**
     * Flush any accumulated input downstream in compressed form. We overcome
     * some bugs/misfeatures here so that:
     * <ul>
     * <li>We won't allow the GZIP header to be flushed on its own without real compressed
     * data in the same write downstream.
     * <li>We ensure that any accumulated uncompressed data really is forced through the
     * compressor.
     * <li>We prevent spurious empty compressed blocks being produced from successive
     * flush()es with no intervening new data.
     * </ul>
     */
    @Override
    public synchronized void flush() throws IOException {
        if (!somethingWritten) { return; }

        // We call this to get def.flush() called,
        // but suppress the (usually premature) out.flush() called internally.
        super.flush();

        // Since super.flush() seems to fail to reliably force output,
        // possibly due to an over-cautious def.needsInput() guard following def.flush(),
        // we try to force the issue here by bypassing the guard.
        int len;
        while ((len = def.deflate(buf, 0, buf.length)) > 0) {
            out.write(buf, 0, len);
        }

        // Really flush the stream below us...
        os.flush();

        // Further flush()es ignored until more input data written.
        somethingWritten = false;
    }

    public synchronized void close() throws IOException {
        if (!def.finished()) {
            def.finish();
            do {
                int len = def.deflate(buf, 0, buf.length);
                if (len <= 0) {
                    break;
                }
                out.write(buf, 0, len);
            } while (!def.finished());
        }

        // Write trailer
        out.write(generateTrailer());

        out.close();
    }

    // ...
}
You may find it useful.
As @seh said, this works great:
ByteArrayOutputStream stream = new ByteArrayOutputStream();

// the second parameter needs to be true
GZIPOutputStream gzip = new GZIPOutputStream(stream, true);

gzip.write( .. );
gzip.flush();
...
gzip.close();