
I'm looking into chunking my data source for optimal data import into Solr, and was wondering whether it's possible to use a master URL that splits the data into sections.

For example, File 1 might contain:

<chunks>
  <chunk url="http://localhost/chunker?start=0&amp;stop=100" />
  <chunk url="http://localhost/chunker?start=100&amp;stop=200" />
  <chunk url="http://localhost/chunker?start=200&amp;stop=300" />
  <chunk url="http://localhost/chunker?start=300&amp;stop=400" />
  <chunk url="http://localhost/chunker?start=400&amp;stop=500" />
  <chunk url="http://localhost/chunker?start=500&amp;stop=600" />
</chunks>

with each chunk URL leading to something like:

<items>
   <item data1="info1" />
   <item data1="info2" />
   <item data1="info3" />
   <item data1="info4" />
</items>

I'm working with 500+ million records, so I think the data will need to be chunked to avoid memory issues (I ran into those when using the SqlEntityProcessor). I would also like to avoid making 500+ million web requests, as that could get expensive.
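As a rough illustration of the master-file idea, a sketch like the following could generate the `<chunks>` listing above (the function name, chunk size, and base URL are assumptions for illustration, not part of any Solr API):

```python
def build_chunk_list(total, chunk_size, base="http://localhost/chunker"):
    """Emit a master XML file listing one chunk URL per range of records."""
    lines = ["<chunks>"]
    for start in range(0, total, chunk_size):
        stop = min(start + chunk_size, total)
        # &amp; is required for a well-formed XML attribute value
        lines.append(f'  <chunk url="{base}?start={start}&amp;stop={stop}" />')
    lines.append("</chunks>")
    return "\n".join(lines)

print(build_chunk_list(600, 100))
```

With 500+ million records and a large chunk size, this keeps the number of HTTP requests in the thousands rather than the millions.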


2 Answers


Since examples of this are scarce on the internet, I thought I would post what I ended up using.

<?xml version="1.0" encoding="utf-8"?>
<result>
  <dataCollection func="chunked">
    <data info="test" info2="test" />
    <data info="test" info2="test" />
    <data info="test" info2="test" />
    <data info="test" info2="test" />
    <data info="test" info2="test" />
    <data info="test" info2="test" />
    <data hasmore="true" nexturl="http://server.domain.com/handler?start=0&amp;end=1000000000&amp;page=1&amp;pagesize=10" />
  </dataCollection>
</result>

The important thing to note is that the last row uses `hasmore` to indicate that there is more content and `nexturl` to provide the URL of the next page. This matches the Solr documentation for DataImportHandler, which specifies that a paginated feed should tell the system that it has more data and where to get the next batch.
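A minimal sketch of the handler side of this contract might look like the following (the function, row shape, and base URL are illustrative assumptions; only the `hasmore`/`nexturl` attributes come from the feed format above):

```python
def render_page(rows, start, end, page, pagesize,
                base="http://server.domain.com/handler"):
    """Render one page of the paginated feed consumed by DataImportHandler.

    rows: list of (info, info2) pairs for this page (hypothetical columns).
    Appends a trailing <data hasmore=... nexturl=.../> row when the page is
    full, signalling DIH to fetch the next batch.
    """
    out = ['<?xml version="1.0" encoding="utf-8"?>',
           "<result>",
           '  <dataCollection func="chunked">']
    for info, info2 in rows:
        out.append(f'    <data info="{info}" info2="{info2}" />')
    if len(rows) == pagesize:  # a full page suggests more data remains
        nexturl = (f"{base}?start={start}&amp;end={end}"
                   f"&amp;page={page + 1}&amp;pagesize={pagesize}")
        out.append(f'    <data hasmore="true" nexturl="{nexturl}" />')
    out.append("  </dataCollection>")
    out.append("</result>")
    return "\n".join(out)
```

A short page (fewer rows than `pagesize`) emits no `hasmore` row, which is what tells DIH the import is finished.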

<dataConfig>
    <dataSource name="b" type="URLDataSource" baseUrl="http://server/" encoding="UTF-8" />
    <document>
        <entity name="continue"
                dataSource="b"
                url="handler?start=${dataimport.request.startrecord}&amp;end=${dataimport.request.stoprecord}&amp;pagesize=100000"
                stream="true"
                processor="XPathEntityProcessor"
                forEach="/result/dataCollection/data"
                transformer="DateFormatTransformer"
                connectionTimeout="120000"
                readTimeout="300000"
                >
            <field column="id"  xpath="/result/dataCollection/data/@info" />
            <field column="$hasMore" xpath="/result/dataCollection/data/@hasmore" />
            <field column="$nextUrl" xpath="/result/dataCollection/data/@nexturl" />
        </entity>
    </document>
</dataConfig>

Note the $hasMore and $nextUrl fields. You will probably want to set timeouts. I also recommend allowing the page size to be specified (it helps when tweaking settings for the best processing speed). I index roughly 12.5K records per second using a multi-core (3) Solr instance on a single server with a quad-core Xeon processor and 32 GB of RAM.

The application that pages the results runs on the same system as the SQL server that stores the data. I also pass the start and stop positions, to minimize configuration changes when we eventually load-balance the Solr servers...
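The `${dataimport.request.startrecord}` and `${dataimport.request.stoprecord}` placeholders are filled from query parameters on the import request itself. A sketch of building that request URL (the host, port, and core name are assumptions):

```python
from urllib.parse import urlencode

def dih_import_url(solr_base, startrecord, stoprecord):
    """Build the full-import request that supplies the
    ${dataimport.request.*} variables used in the data config."""
    params = urlencode({
        "command": "full-import",
        "startrecord": startrecord,
        "stoprecord": stoprecord,
    })
    return f"{solr_base}/dataimport?{params}"

print(dih_import_url("http://localhost:8983/solr/core0", 0, 1000000000))
```

Requesting that URL (e.g. with curl or a browser) kicks off the import for the given record range.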

Answered 2011-05-14T00:12:48.430

Entities can be nested to do what you originally wanted. The inner entity can refer to a field of the outer entity with url="${chunk.link}", where chunk is the outer entity's name and link is the field name.

<?xml version="1.0" encoding="windows-1250"?>
<dataConfig>
  <dataSource name="b" type="URLDataSource" baseUrl="http://server/" encoding="UTF-8" />
  <document>
    <entity name="chunk"
      dataSource="b"
      url="path/to/chunk.xml"
      stream="true"
      processor="XPathEntityProcessor"
      forEach="/chunks/chunk"
      transformer="DateFormatTransformer"
      connectionTimeout="120000"
      readTimeout="300000" >
      <field column="link" xpath="/chunks/chunk/@url" />
      <entity name="item"
        dataSource="b"
        url="${chunk.link}"
        stream="true"
        processor="XPathEntityProcessor"
        forEach="/items/item"
        transformer="DateFormatTransformer"
        connectionTimeout="120000"
        readTimeout="300000" >
        <field column="info"  xpath="/items/item/@info" />
      </entity>
    </entity>
  </document>
</dataConfig>
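Conceptually, the nested entities do a two-level traversal: the outer pass yields one URL per `/chunks/chunk`, and the inner pass fetches each URL and emits one row per `/items/item`. A standalone sketch of that logic (the function names are illustrative, not part of Solr):

```python
import xml.etree.ElementTree as ET

def chunk_links(master_xml):
    """Outer pass: extract the url attribute of each /chunks/chunk."""
    return [c.get("url") for c in ET.fromstring(master_xml).findall("chunk")]

def item_rows(items_xml):
    """Inner pass: one attribute dict per /items/item in a fetched chunk."""
    return [i.attrib for i in ET.fromstring(items_xml).findall("item")]
```

In the real config, XPathEntityProcessor performs both passes via the `forEach` expressions, with URLDataSource doing the fetching.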
Answered 2012-03-26T10:41:50.940