I'm running into a problem when batch indexing data. I want to index a list of Article entities, which contain some @IndexedEmbedded members I need to pull information from. Article gets additional data from two other beans: Page and Articlefulltext.
Thanks to the Hibernate Search annotations, the batch correctly updates the database and adds new Documents to my Lucene index. But the documents that get added have incomplete fields; it seems Hibernate Search does not see all of the annotations.
So when I inspect the resulting Lucene index with Luke, I see fields for the Article and Page objects, but none for ArticleFulltext. Yet my database contains the correct data, which means the persist() operations completed properly...
I really need some help here, because I can't see any difference between my Page and ArticleFulltext mappings...
The strange thing is that if I use a MassIndexer, it correctly adds the Article + Page + Articlefulltext data to the Lucene index. But I don't want to rebuild an index of several million documents every time I make a major update...
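For reference, this is roughly how I invoke the MassIndexer that does produce complete documents (a sketch from memory; the tuning values are arbitrary):

```java
FullTextEntityManager ftem = Search.getFullTextEntityManager(emf.createEntityManager());
// Rebuilds the whole Article index, including @IndexedEmbedded associations
ftem.createIndexer(Article.class)
    .batchSizeToLoadObjects(25)
    .threadsToLoadObjects(4)
    .startAndWait();
```

With this, FulltextSplitBridge is called for every Articlefulltext, as expected.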
I set the log4j logging level to DEBUG for Hibernate Search and Lucene, but they don't give me much information.
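Concretely, the category I raised (assuming the standard package name) was:

```properties
log4j.logger.org.hibernate.search=DEBUG
```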
Here are my beans and the batch code.
Thanks in advance for your help.
Article.java:
@Entity
@Table(name = "article", catalog = "test")
@Indexed(index = "articleText")
@Analyzer(impl = FrenchAnalyzer.class)
public class Article implements java.io.Serializable {

    @Id
    @GeneratedValue(strategy = IDENTITY)
    @Column(name = "id", unique = true, nullable = false)
    @DocumentId
    private Integer id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "firstpageid", nullable = false)
    @IndexedEmbedded
    private Page page;

    @Column(name = "heading", length = 300)
    @Field(name = "title", index = Index.YES, store = Store.YES)
    @Boost(2.5f)
    private String heading;

    @Column(name = "subheading", length = 300)
    private String subheading;

    @OneToOne(fetch = FetchType.LAZY, mappedBy = "article")
    @IndexedEmbedded
    private Articlefulltext articlefulltext;

    [... bean methods etc ...]
Page.java:
@Entity
@Table(name = "page", catalog = "test")
public class Page implements java.io.Serializable {

    private Integer id;

    @IndexedEmbedded
    private Issue issue;

    @ContainedIn
    private Set<Article> articles = new HashSet<Article>(0);

    [... bean methods ...]
Articlefulltext.java:
@Entity
@Table(name = "articlefulltext", catalog = "test")
@Analyzer(impl = FrenchAnalyzer.class)
public class Articlefulltext implements java.io.Serializable {

    @GenericGenerator(name = "generator", strategy = "foreign", parameters = @Parameter(name = "property", value = "article"))
    @Id
    @GeneratedValue(generator = "generator")
    @Column(name = "aid", unique = true, nullable = false)
    private int aid;

    @OneToOne(fetch = FetchType.LAZY)
    @PrimaryKeyJoinColumn
    @ContainedIn
    private Article article;

    @Column(name = "fulltextcontents", nullable = false)
    @Field(store = Store.YES, index = Index.YES, analyzer = @Analyzer(impl = FrenchAnalyzer.class), bridge = @FieldBridge(impl = FulltextSplitBridge.class))
    // This field is never added to the resulting Document! I put a log statement into
    // FulltextSplitBridge, and it is never called during the batch process. But if I use
    // a MassIndexer, I see that FulltextSplitBridge is called for each Articlefulltext...
    private String fulltextcontents;

    [... bean methods ...]
Here is the batch code that updates the database and the Lucene index:
FullTextEntityManager em = null;

@Override
protected void executeInternal(JobExecutionContext arg0) throws JobExecutionException {
    ApplicationContext ap = null;
    EntityManagerFactory emf = null;
    EntityTransaction tx = null;
    try {
        ap = (ApplicationContext) arg0.getScheduler().getContext().get("applicationContext");
        emf = ap.getBean("entityManagerFactory", EntityManagerFactory.class);
        em = Search.getFullTextEntityManager(emf.createEntityManager());
        tx = em.getTransaction();
        tx.begin();
        // [... em.persist() some things which aren't Lucene related, so I skip them ...]
        for (File xmlFile : xmlList) {
            Reel reel = new Reel(title, reelpath);
            em.persist(reel);
            Article article = new Article();
            // [... set Article fields, so I skip them ...]
            Articlefulltext ft = new Articlefulltext();
            // [... set Articlefulltext fields, so I skip them ...]
            ft.setArticle(article);
            ft.setFulltextcontents(bufferBlock.toString());
            em.persist(ft); // I persist ft before article because of FK issues
            em.persist(article); // here, the annotations update the Lucene index, but fulltextcontents is not indexed (see above)
            if (nbFileDone % 50 == 0) {
                // flush a batch of inserts and release memory:
                em.flush();
                em.clear();
            }
        }
        tx.commit();
    } catch (Exception e) {
        tx.rollback();
    }
    em.close();
}
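Side note: the Hibernate Search reference guide's batch example calls flushToIndexes() before clear(), rather than a plain flush(), so that pending index work is applied before entities are detached. I haven't confirmed whether that matters in my case, but for comparison the documented pattern looks like this (a sketch; `articles` and the batch size of 50 are placeholders):

```java
for (int i = 0; i < articles.size(); i++) {
    em.persist(articles.get(i));
    if (i % 50 == 0) {
        em.flushToIndexes(); // apply pending Lucene index changes
        em.clear();          // detach entities to free memory
    }
}
```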