I have a database containing a table loaded with base64-encoded images, and this proved to be a problem once the database grew too large and we tried to export its data.
To reproduce the limit case, I reduced the Java heap space to 96 MB while working against a 700 MB database, so that if the export runs fine under those conditions I can guarantee it won't run into this problem again (or at least I hope so).
Furthermore, according to the H2 advanced documentation:
Storing and Reading Large Objects

If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory, by using streams. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. When using the client/server mode, large BLOB and CLOB data is stored in a temporary file on the client side.
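For reference, the pattern the documentation describes looks roughly like this (a minimal sketch: the DATA column and the image.b64 file are placeholders for illustration, only the IMAGECACHE table actually exists in my schema):

import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

static void clobRoundTrip(Connection conn) throws Exception {
    // Store a CLOB through a stream, so the value is never fully in memory.
    PreparedStatement insert = conn.prepareStatement(
            "INSERT INTO IMAGECACHE(ID, DATA) VALUES (?, ?)");
    insert.setInt(1, 1);
    insert.setCharacterStream(2, new FileReader("image.b64"));
    insert.executeUpdate();
    insert.close();

    // Read the CLOB back as a stream and consume it in chunks.
    PreparedStatement select = conn.prepareStatement(
            "SELECT DATA FROM IMAGECACHE WHERE ID = ?");
    select.setInt(1, 1);
    ResultSet rs = select.executeQuery();
    if (rs.next()) {
        Reader reader = rs.getCharacterStream(1);
        char[] buf = new char[8 * 1024];
        while (reader.read(buf) != -1) {
            // ... process the chunk without keeping the whole value ...
        }
        reader.close();
    }
    rs.close();
    select.close();
}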
So the images should be accessed as character streams. With that approach in place, I analyzed the memory report that Java creates when the JVM runs out of memory, and the memory leak sits in a class called org.h2.result.ResultDiskBuffer$ResultDiskTape.
If I use JDBC's getString method instead of the stream approach, the leak shows up in a class called PageResult instead.
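(In case anyone wants to reproduce the analysis: the reports come from letting the JVM write a heap dump on OutOfMemoryError. On a HotSpot JVM that is something like the following; the main class and dump path are only placeholders.)

java -Xmx96m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./dumps com.example.Exporter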
In case anyone is interested in the code:
import java.io.BufferedReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

/**
 * Exports the data, as implemented by us.
 * @param conn connection to the database being exported
 * @return true if the export finished successfully, false otherwise
 */
private static boolean exportData(Connection conn) {
    PreparedStatement st = null;
    ResultSet rs = null;
    try {
        // Get the table names.
        List<String> tables = getTableNames(conn);
        // For each table...
        for (String table : tables) {
            int batch = 0;
            List<String> columns = new ArrayList<String>();
            System.out.println("---------------------------------------------------------");
            System.out.println(" >>>>>>>>>>>>>>> Exporting table " + table);
            // Close the statement of the previous table.
            if (st != null) {
                st.close();
            }
            // Create the prepared statement that pages through the table.
            st = conn.prepareStatement("SELECT * FROM " + table + " LIMIT ? OFFSET ?");
            // Get the column labels from the statement metadata.
            ResultSetMetaData metaData = st.getMetaData();
            for (int col = 0; col < metaData.getColumnCount(); col++) {
                columns.add(metaData.getColumnLabel(col + 1));
            }
            // Loop while there is data in the table.
            do {
                if (batch % 100 == 0) { // Every 100 dots, start a new line.
                    System.out.println();
                }
                System.out.print(".");
                // Close the result set of the previous batch.
                if (rs != null) {
                    rs.close();
                }
                // Reset the parameters and fetch the next batch of BATCH_SIZE rows.
                st.clearParameters();
                st.setInt(1, BATCH_SIZE);
                st.setInt(2, batch * BATCH_SIZE);
                rs = st.executeQuery();
                // count holds the number of rows found in the batch. If it stays 0,
                // there is no more data in the table and we can leave the do-while.
                int count = 0;
                // While data is in the result set...
                while (rs.next()) {
                    count++;
                    startXMLElement(table);
                    // Write the columns of the row to the XML.
                    for (String name : columns) {
                        Reader characterStream = rs.getCharacterStream(name);
                        if (characterStream == null) {
                            continue;
                        }
                        BufferedReader br = new BufferedReader(characterStream);
                        StringBuilder res = new StringBuilder();
                        String line;
                        while ((line = br.readLine()) != null) {
                            res.append(line);
                        }
                        try {
                            br.close();
                            characterStream.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                        addXMLAttribute(name, res.toString());
                    }
                    // Close the row element.
                    endXMLElement(table, columns);
                }
                // No more data in the table: break the do-while.
                if (count == 0) {
                    break;
                }
                // Move on to the next batch.
                batch++;
            } while (true);
            System.out.println();
            System.out.println(" >>>>>>>>>>>>>>> Done exporting " + table);
            System.out.println("---------------------------------------------------------");
        }
        return true;
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    } finally {
        if (rs != null) {
            try {
                rs.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
        if (st != null) {
            try {
                st.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
/**
 * @param conn connection to the database
 * @return the names of the tables created in the database
 * @throws SQLException
 */
private static List<String> getTableNames(Connection conn) throws SQLException {
    List<String> tables = new ArrayList<String>();
    tables.add("IMAGECACHE");
    DatabaseMetaData md = conn.getMetaData();
    ResultSet rs = md.getTables(null, null, "%", null);
    while (rs.next()) {
        String name = rs.getString(3);
        String type = rs.getString(4);
        if ("TABLE".equals(type) && !tables.contains(name)) {
            tables.add(name);
        }
    }
    rs.close();
    return tables;
}
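One detail that may matter here: even though the values are fetched with getCharacterStream, the inner loop still concatenates every line into a single String before calling addXMLAttribute, so each base64 image ends up fully materialized in the heap anyway. A variant that copies the Reader straight into the export output keeps the per-cell memory constant. This is only a sketch: it assumes the XML is written through a java.io.Writer, and escapeAndWrite is a hypothetical helper that XML-escapes a chunk before writing it.

import java.io.IOException;
import java.io.Reader;
import java.io.Writer;

// Sketch: stream a CLOB column into the export writer chunk by chunk,
// so the full value is never held in memory at once.
private static void copyColumn(Reader characterStream, Writer out) throws IOException {
    char[] buffer = new char[8 * 1024];
    int read;
    while ((read = characterStream.read(buffer)) != -1) {
        escapeAndWrite(out, buffer, read); // hypothetical helper: escape + write the chunk
    }
    characterStream.close();
}

As a small aside, DatabaseMetaData.getTables also accepts a types array, so the manual "TABLE" check in getTableNames could be expressed as md.getTables(null, null, "%", new String[] { "TABLE" }).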