
I have been trying to write my own coprocessor that creates a secondary index using the prePut hook. To start, I have simply been trying to get a prePut coprocessor to work at all. So far, I can get the coprocessor to add to the Put object that is passed into it. What I have found is that I cannot get the coprocessor to write to a row separate from the one the incoming Put object is writing to. Obviously, to create a secondary index I need to figure this out.

Below is the code for my coprocessor, but it does not work.
Yes, all the tables exist, and the column family "colfam1" exists as well.
HBase version: HBase 0.92.1-cdh4.1.2 from Cloudera's CDH4

Does anyone know what the problem is?

    @Override
    public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e, final Put put, final WALEdit edit, final boolean writeToWAL) throws IOException {
        KeyValue kv = new KeyValue(Bytes.toBytes("COPROCESSORROW"), Bytes.toBytes("colfam1"),
                Bytes.toBytes("COPROCESSOR: " + System.currentTimeMillis()), Bytes.toBytes("IT WORKED"));
        put.add(kv);
    }

I get the following error:

    ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues:

Update:

I have modified the coprocessor to the following, but I am still getting an error. The postPut write (the secondary index) now succeeds, but there is still a timeout error. The whole table on that region server also crashes, and I have to restart the region server. Sometimes the restart does not help and the whole region server (all of its tables) is corrupted, so the server has to be rebuilt.

I have no idea why...!?

    @Override
    public void start(CoprocessorEnvironment env) throws IOException {
        LOG.info("(start)");
        pool = new HTablePool(env.getConfiguration(), 10);
    }

    @Override
    public void postPut(final ObserverContext<RegionCoprocessorEnvironment> observerContext,final Put put,final WALEdit edit,final boolean writeToWAL) throws IOException {
        byte[] tableName  = observerContext.getEnvironment().getRegion().getRegionInfo().getTableName();

        //not necessary though if you register the coprocessor for the specific table , SOURCE_TBL
        if (!Bytes.equals(tableName, Bytes.toBytes(SOURCE_TABLE))) 
            return;         

        try {           
            LOG.info("STARTING postPut");
            HTableInterface table = pool.getTable(Bytes.toBytes(INDEX_TABLE));
            LOG.info("TURN OFF AUTOFLUSH");
            table.setAutoFlush(false);
            //create row              
            LOG.info("Creating new row");            
            byte [] rowkey = Bytes.toBytes("COPROCESSOR ROW");
            Put indexput  = new Put(rowkey); 
            indexput.add(Bytes.toBytes ( "data"),  Bytes.toBytes("CP: "+System.currentTimeMillis()),  Bytes.toBytes("IT WORKED!"));
            LOG.info("Writing to table");
            table.put(indexput);
            LOG.info("flushing commits");            
            table.flushCommits();
            LOG.info("close table");
            table.close();

        } catch (IllegalArgumentException ex) {
            // handle exception.
        }

      }


      @Override
      public void stop(CoprocessorEnvironment env) throws IOException {
        LOG.info("(stop)");
        pool.close();
      }

Here is the region server log (note my log statements):

    2013-01-30 19:30:39,754 INFO my.package.MyCoprocessor: STARTING postPut
    2013-01-30 19:30:39,754 INFO my.package.MyCoprocessor: TURN OFF AUTOFLUSH
    2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: Creating new row
    2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: Writing to table
    2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: flushing commits
    2013-01-30 19:31:39,813 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Failed all from region=test_table,,1359573731255.d41b77b31fafa6502a8f09db9c56b9d8., hostname=node01, port=60020
    java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to node01/<private_ip>:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/<private_ip>:56390 remote=node01/<private_ip>:60020]
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1557)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:949)
    at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.flushCommits(HTablePool.java:449)
    at my.package.MyCoprocessor.postPut(MyCoprocessor.java:81)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postPut(RegionCoprocessorHost.java:682)
    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1901)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1742)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3102)
    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1345)

SOLVED: I was trying to write, from inside the coprocessor, to the same table the coprocessor was attached to. In short, when I wrote a cell, the CP wrote another cell, which caused the CP to fire again and write yet another, and so on. I stopped it by checking the row before writing the CP row, which prevents this loop.
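
For reference, here is a minimal sketch of the kind of guard described above, assuming the index row the coprocessor writes is the fixed row key "COPROCESSOR ROW" used earlier (adapt the check to whatever distinguishes the coprocessor's own writes):

    private static final byte[] INDEX_ROW = Bytes.toBytes("COPROCESSOR ROW");

    @Override
    public void postPut(final ObserverContext<RegionCoprocessorEnvironment> observerContext,
            final Put put, final WALEdit edit, final boolean writeToWAL) throws IOException {
        // If this Put targets the row the coprocessor itself writes, it was triggered
        // by the coprocessor's own write; return early to break the recursion.
        if (Bytes.equals(put.getRow(), INDEX_ROW)) {
            return;
        }
        // ... the index-writing logic from the update above goes here ...
    }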


2 Answers


Below is a code snippet showing how we use coprocessors in HBase to create a secondary index. It may be of some help to you.

    public class TestCoprocessor extends BaseRegionObserver {

        private HTablePool pool = null;

        private final static String INDEX_TABLE  = "INDEX_TBL";
        private final static String SOURCE_TABLE = "SOURCE_TBL";

        @Override
        public void start(CoprocessorEnvironment env) throws IOException {
            pool = new HTablePool(env.getConfiguration(), 10);
        }

        @Override
        public void postPut(
            final ObserverContext<RegionCoprocessorEnvironment> observerContext,
            final Put put,
            final WALEdit edit,
            final boolean writeToWAL)
            throws IOException {

            byte[] tableName = observerContext.getEnvironment(
                ).getRegion().getRegionInfo().getTableName();

            // Not necessary though if you register the coprocessor
            // for the specific table, SOURCE_TBL
            if (!Bytes.equals(tableName, Bytes.toBytes(SOURCE_TABLE))) {
                return;
            }

            try {
                // get the column value from the incoming Put
                final List<KeyValue> filteredList = put.get(
                    Bytes.toBytes("colfam1"), Bytes.toBytes("qual"));
                filteredList.get(0);

                HTableInterface table = pool.getTable(Bytes.toBytes(INDEX_TABLE));

                // create the index row key
                byte[] rowkey = mkRowKey(); // make the row key
                Put indexput = new Put(rowkey);
                indexput.add(
                    Bytes.toBytes("colfam1"),
                    Bytes.toBytes("qual"),
                    Bytes.toBytes("value.."));

                table.put(indexput);
                table.close();

            } catch (IllegalArgumentException ex) {
                // handle exception.
            }

        }


        @Override
        public void stop(CoprocessorEnvironment env) throws IOException {
            pool.close();
        }

    }

To register the above coprocessor on SOURCE_TBL, go to the hbase shell and follow the steps below (a quick way to verify the registration is shown after the list):

  1. disable 'SOURCE_TBL'
  2. alter 'SOURCE_TBL', METHOD => 'table_att', 'coprocessor' => 'file:///path/to/coprocessor.jar|TestCoprocessor|1001'
  3. enable 'SOURCE_TBL'
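
If it helps, you can check from the shell that the registration took effect; the table description should list the coprocessor attribute (this is a sketch, the exact output format varies by HBase version):

    hbase(main):001:0> describe 'SOURCE_TBL'
    # The table description should include an attribute like:
    #   coprocessor$1 => 'file:///path/to/coprocessor.jar|TestCoprocessor|1001'
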
Answered 2013-01-27T06:48:38.520

Secondary indexes are now built into HBase. Take a look at this blog entry on the same. There is no need to use coprocessors in HBase for this.

Answered 2013-01-27T02:30:10.000