
I am currently working with RDF models: I query data from a database, build models with Apache Jena, and then work with them. However, I don't want to re-run the queries every time I need the models, so I considered storing them locally. The models are large, so I'd like to compress them with Apache Commons Compress. This works so far (try-catch blocks omitted):

public static void write(Map<String, Model> models, String file){
   logger.info("Writing models to file " + file); 
   TarArchiveOutputStream tarOutput = null;
   TarArchiveEntry entry = null;

   tarOutput = new TarArchiveOutputStream(new GzipCompressorOutputStream(new FileOutputStream(new File(file))));
   for(Map.Entry<String, Model> e : models.entrySet()) {
      logger.info("Packing model " + e.getKey());

      // Convert Model
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      RDFDataMgr.write(baos, e.getValue(), RDFFormat.RDFXML_PRETTY);

      // Prepare Entry
      entry = new TarArchiveEntry(e.getKey());
      entry.setSize(baos.size());
      tarOutput.putArchiveEntry(entry);

      // write into file and close
      tarOutput.write(baos.toByteArray());
      tarOutput.closeArchiveEntry();

   }
   tarOutput.close();
}

But when I try the other direction, I get a strange NullPointerException. Is this a bug in the GZip implementation, or is my understanding of streams wrong?

public static Map<String, Model> read(String file){
   logger.info("Reading models from file " + file);
   Map<String, Model> models = new HashMap<>();


   TarArchiveInputStream tarInput = new TarArchiveInputStream(new GzipCompressorInputStream(new FileInputStream(file)));

   for (TarArchiveEntry currentEntry = tarInput.getNextTarEntry(); currentEntry != null; currentEntry = tarInput.getNextTarEntry()) {
      logger.info("Processing model " + currentEntry.getName());

      // Read the current model
      Model m = ModelFactory.createDefaultModel();
      m.read(tarInput, null);

      // And add it to the output
      models.put(currentEntry.getName(), m);

      tarInput.close();
   }
   return models;
}

Here is the stack trace:

Exception in thread "main" java.lang.NullPointerException
    at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.read(GzipCompressorInputStream.java:271)
    at java.io.InputStream.skip(InputStream.java:224)
    at org.apache.commons.compress.utils.IOUtils.skip(IOUtils.java:106)
    at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.skipRecordPadding(TarArchiveInputStream.java:345)
    at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:272)
    at de.mem89.masterthesis.rdfHydra.StorageHelper.read(StorageHelper.java:88)
    at de.mem89.masterthesis.rdfHydra.StorageHelper.main(StorageHelper.java:124)
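One thing I noticed while preparing this question: the tar stream gets closed before `getNextTarEntry()` is called again — both by the explicit `tarInput.close()` inside the loop and, possibly, by `Model.read` consuming or closing the stream it is handed. If that is the cause, the usual defense is a "close shield": a wrapper whose `close()` is a no-op, so the consumer cannot close the shared underlying stream (Commons IO ships such a wrapper as `CloseShieldInputStream`). A minimal, JDK-only sketch of the idea, independent of Jena and Commons Compress:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseShieldDemo {

    /** Wrapper that swallows close() so a consumer cannot close the shared stream. */
    static class CloseShieldInputStream extends FilterInputStream {
        CloseShieldInputStream(InputStream in) {
            super(in);
        }

        @Override
        public void close() {
            // intentionally do nothing: the caller owns the underlying stream
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream shared = new ByteArrayInputStream(new byte[] {1, 2, 3});

        // A consumer is handed the shielded view and closes it when done...
        InputStream shield = new CloseShieldInputStream(shared);
        shield.read();   // consumes the first byte (1)
        shield.close();  // swallowed by the shield

        // ...but the shared stream is still open and readable afterwards.
        System.out.println(shared.read()); // prints 2
    }
}
```

Applied to the code above, that would mean passing `new CloseShieldInputStream(tarInput)` to `m.read(...)` and moving `tarInput.close()` out of the loop — but I'm not sure whether that is the actual cause of the NPE here.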
