My program is fast enough, but I'd gladly trade some of that speed for lower memory use, because a single user's peak memory usage goes up to 300 MB, which means it only takes a few of them to keep crashing the application. Most of the answers I've found deal with speed optimization, and the rest are only general advice ("if you write straight from the database, memory usage shouldn't be high"). Well, it seems it is :) I considered not posting the code so I wouldn't "lock" anyone into my way of thinking, but on the other hand, if you can't see what I've already done I might be wasting your time, so here it is:
// First I get the data from the database in a way I don't think can be optimized
// much further: from my testing the problem doesn't seem to be in the ResultSet,
// and setting the fetch size and/or fetch direction doesn't help.
public static void generateAndWriteXML(String query, String oznaka, BufferedOutputStream bos, Connection conn)
        throws Exception
{
    ResultSet rs = null;
    Statement stmt = null;
    try
    {
        stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
        rs = stmt.executeQuery(query);
        writeToZip(rs, oznaka, bos);
    } finally
    {
        ConnectionManager.close(rs, stmt, conn);
    }
}
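To be concrete, the fetch-size/direction experiment I mention in the comment above looked roughly like this (the fetch size of 100 is just an example value, not something I settled on); none of it made a noticeable difference:

    // Sketch of the fetch settings I experimented with (100 is an arbitrary example)
    stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
    stmt.setFetchSize(100);                          // rows fetched per round trip
    stmt.setFetchDirection(ResultSet.FETCH_FORWARD); // only a hint to the driver
    rs = stmt.executeQuery(query);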
// Then I open my streams. In the next method I generate XML from the ResultSet
// and want it written out, but since it grows to around 300 MB I want it stored
// inside a ZIP. Maybe writing to a file first and only then zipping it would
// give a slower but more memory-efficient program.
private static void writeToZip(ResultSet rs, String oznaka, BufferedOutputStream bos)
        throws SAXException, SQLException, IOException
{
    ZipEntry ze = new ZipEntry(oznaka + ".xml");
    ZipOutputStream zos = new ZipOutputStream(bos);
    zos.putNextEntry(ze);
    OutputStreamWriter writer = new OutputStreamWriter(zos, "UTF8");
    writeXMLToWriter(rs, writer);
    // Flush the XML into the ZIP entry, close the entry, then close the writer,
    // which also closes zos and the underlying bos.
    writer.flush();
    zos.closeEntry();
    writer.close();
}
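If it helps the discussion, the "write to a file first, then zip it" variant I'm considering would look roughly like the sketch below (untested; writeToZipViaTempFile is just a name I made up for it, and it needs the usual java.io imports):

    // Untested sketch of the file-first variant mentioned in the comment above.
    private static void writeToZipViaTempFile(ResultSet rs, String oznaka, BufferedOutputStream bos)
            throws SAXException, SQLException, IOException
    {
        // 1) Stream the XML to a temporary file so it never has to sit in memory.
        File tmp = File.createTempFile(oznaka, ".xml");
        Writer fileWriter = new OutputStreamWriter(new BufferedOutputStream(new FileOutputStream(tmp)), "UTF8");
        writeXMLToWriter(rs, fileWriter);
        fileWriter.close();
        // 2) Copy the finished file into the ZIP entry in small chunks.
        ZipOutputStream zos = new ZipOutputStream(bos);
        zos.putNextEntry(new ZipEntry(oznaka + ".xml"));
        InputStream in = new BufferedInputStream(new FileInputStream(tmp));
        byte[] buf = new byte[8192];
        int len;
        while ((len = in.read(buf)) != -1)
        {
            zos.write(buf, 0, len);
        }
        in.close();
        zos.closeEntry();
        zos.close();
        tmp.delete();
    }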
// And finally, the method that does the actual generating and writing.
// This is the second place where I think I could optimize memory, since DataWriter
// is a custom class that extends a custom XMLWriter, which in turn extends the
// standard org.xml.sax.helpers.XMLFilterImpl. I've tried flushing at various points
// in the program, but the occupied memory stays the same; it only takes longer.
public static void writeXMLToWriter(ResultSet rs, Writer writer) throws SAXException, SQLException, IOException
{
    // Set up the XML writer
    DataWriter w = new DataWriter(writer);
    w.startDocument();
    w.setIndentStep(2);
    w.startElement(startingXMLElement);
    // Get the metadata
    ResultSetMetaData meta = rs.getMetaData();
    int count = meta.getColumnCount();
    // Iterate over the result set
    while (rs.next())
    {
        w.startElement(rowElement);
        for (int i = 0; i < count; i++)
        {
            Object ob = rs.getObject(i + 1);
            if (rs.wasNull())
            {
                ob = null;
            }
            // Column names repeat on every row, so they could benefit from caching
            String colName = meta.getColumnLabel(i + 1).intern();
            if (ob != null)
            {
                if (ob instanceof Timestamp)
                {
                    w.dataElement(colName, Util.formatDate((Timestamp) ob, dateFormat));
                }
                else if (ob instanceof BigDecimal)
                {
                    // Possible benefit from writing ints as strings and interning them
                    w.dataElement(colName, Util.transformToHTML(Integer.valueOf(((BigDecimal) ob).intValue())));
                }
                else
                {
                    // There's enough repeated data to justify interning
                    w.dataElement(colName, ob.toString().intern());
                }
            }
            else
            {
                w.emptyElement(colName);
            }
        }
        w.endElement(rowElement);
    }
    w.endElement(startingXMLElement);
    w.endDocument();
}
EDIT: Here is an example of the memory usage (from VisualVM):
EDIT 2: The database is Oracle 10.2.0.4. I set ResultSet.TYPE_FORWARD_ONLY and got a maximum of 50 MB of usage! As I said in the comments, I'll keep an eye on it, but this really looks promising.
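Concretely, the change is this one line in generateAndWriteXML:

    // Forward-only, read-only cursor instead of the scroll-insensitive one
    stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);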
EDIT 3: It seems there is another possible optimization available. As I said, I'm generating XML, which means a lot of data gets repeated (if nothing else, the tags), so String.intern() could help me here. I'll report back once I've tested it.
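What I have in mind in writeXMLToWriter is roughly this (untested sketch): intern the column labels once before the row loop and reuse them, instead of calling getColumnLabel() on every row:

    // Untested sketch: build the interned column-name array once, before the
    // while (rs.next()) loop.
    String[] colNames = new String[count];
    for (int i = 0; i < count; i++)
    {
        colNames[i] = meta.getColumnLabel(i + 1).intern();
    }
    // ...then inside the row loop use colNames[i] in place of the per-row lookup.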