No matter what I do, I can't get rid of this error. I understand that Snappy is a fast compression/decompression library, and therefore preferable to the alternatives, and I want to use it for my processing. As far as I know, Google uses it internally for BigTable, MapReduce, and basically all of their flagship applications. I have researched this myself: people suggest either avoiding it or using java-snappy instead, but I want to stick with Hadoop's Snappy support. The corresponding libraries are present in my setup (I mean under lib).
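For context, I enable Snappy for the intermediate map output roughly like this in the job driver (a minimal sketch, not my actual code: the property names are the Hadoop 1.x ones matching the log below, and SnappyJobDriver / the job name are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;

public class SnappyJobDriver {
    public static Job buildJob() throws Exception {
        Configuration conf = new Configuration();
        // Compress the intermediate map output with Snappy
        // (Hadoop 1.x property names; later releases renamed them).
        conf.setBoolean("mapred.compress.map.output", true);
        conf.setClass("mapred.map.output.compression.codec",
                      SnappyCodec.class, CompressionCodec.class);
        // "wordcount" is a placeholder job name.
        return new Job(conf, "wordcount");
    }
}
```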
Can someone help me fix this error? I can see that, despite the warning, the job still completes successfully.
hdfs://localhost:54310/user/hduser/gutenberg
12/06/01 18:18:54 INFO input.FileInputFormat: Total input paths to process : 3
12/06/01 18:18:54 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/06/01 18:18:54 WARN snappy.LoadSnappy: Snappy native library not loaded
12/06/01 18:18:54 INFO mapred.JobClient: Running job: job_201206011229_0008
12/06/01 18:18:55 INFO mapred.JobClient: map 0% reduce 0%
12/06/01 18:19:08 INFO mapred.JobClient: map 66% reduce 0%
12/06/01 18:19:14 INFO mapred.JobClient: map 100% reduce 0%
12/06/01 18:19:17 INFO mapred.JobClient: map 100% reduce 22%
12/06/01 18:19:23 INFO mapred.JobClient: map 100% reduce 100%
12/06/01 18:19:28 INFO mapred.JobClient: Job complete: job_201206011229_0008
12/06/01 18:19:28 INFO mapred.JobClient: Counters: 29
12/06/01 18:19:28 INFO mapred.JobClient: Job Counters
12/06/01 18:19:28 INFO mapred.JobClient: Launched reduce tasks=1
12/06/01 18:19:28 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=22810
12/06/01 18:19:28 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/01 18:19:28 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/01 18:19:28 INFO mapred.JobClient: Launched map tasks=3
12/06/01 18:19:28 INFO mapred.JobClient: Data-local map tasks=3
12/06/01 18:19:28 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=14345
12/06/01 18:19:28 INFO mapred.JobClient: File Output Format Counters
12/06/01 18:19:28 INFO mapred.JobClient: Bytes Written=880838
12/06/01 18:19:28 INFO mapred.JobClient: FileSystemCounters
12/06/01 18:19:28 INFO mapred.JobClient: FILE_BYTES_READ=2214849
12/06/01 18:19:28 INFO mapred.JobClient: HDFS_BYTES_READ=3671878
12/06/01 18:19:28 INFO mapred.JobClient: FILE_BYTES_WRITTEN=3775339
12/06/01 18:19:28 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=880838
12/06/01 18:19:28 INFO mapred.JobClient: File Input Format Counters
12/06/01 18:19:28 INFO mapred.JobClient: Bytes Read=3671517
12/06/01 18:19:28 INFO mapred.JobClient: Map-Reduce Framework
12/06/01 18:19:28 INFO mapred.JobClient: Map output materialized bytes=1474341
12/06/01 18:19:28 INFO mapred.JobClient: Map input records=77932
12/06/01 18:19:28 INFO mapred.JobClient: Reduce shuffle bytes=1207328
12/06/01 18:19:28 INFO mapred.JobClient: Spilled Records=255962
12/06/01 18:19:28 INFO mapred.JobClient: Map output bytes=6076095
12/06/01 18:19:28 INFO mapred.JobClient: CPU time spent (ms)=12100
12/06/01 18:19:28 INFO mapred.JobClient: Total committed heap usage (bytes)=516882432
12/06/01 18:19:28 INFO mapred.JobClient: Combine input records=629172
12/06/01 18:19:28 INFO mapred.JobClient: SPLIT_RAW_BYTES=361
12/06/01 18:19:28 INFO mapred.JobClient: Reduce input records=102322
12/06/01 18:19:28 INFO mapred.JobClient: Reduce input groups=82335
12/06/01 18:19:28 INFO mapred.JobClient: Combine output records=102322
12/06/01 18:19:28 INFO mapred.JobClient: Physical memory (bytes) snapshot=605229056
12/06/01 18:19:28 INFO mapred.JobClient: Reduce output records=82335
12/06/01 18:19:28 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2276663296
12/06/01 18:19:28 INFO mapred.JobClient: Map output records=629172
PS: For now I am working with a small dataset, where fast compression and decompression don't matter much. But once I have a working pipeline, I will run it against large datasets.
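In case it helps diagnose this, here is a small check to see whether the native libraries are actually being picked up (a sketch assuming the Hadoop 1.x helper classes NativeCodeLoader and LoadSnappy, which are what emit the two log lines above):

```java
import org.apache.hadoop.io.compress.snappy.LoadSnappy;
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeLibCheck {
    public static void main(String[] args) {
        // True when libhadoop.so was found on java.library.path;
        // corresponds to the "Loaded the native-hadoop library" INFO line.
        System.out.println("native-hadoop loaded: "
                + NativeCodeLoader.isNativeCodeLoaded());
        // True only when libsnappy could be loaded as well; false here
        // would reproduce the "Snappy native library not loaded" WARN.
        System.out.println("snappy loaded: " + LoadSnappy.isLoaded());
    }
}
```

My assumption is that if the second check prints false, the JVM is finding libhadoop but not libsnappy on java.library.path, which would explain why the job still runs fine without native Snappy.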