I have two tables with identical structure in a Cassandra database. They are both in the same keyspace. I need to move the data from one table to the other. I created a standard CSV file with COPY ... TO, and now I want to use COPY ... FROM to upload its contents into the other Cassandra table. However, I get the following error:
Failed to import 1926 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts
Exceeded maximum number of insert errors 1000
What am I using?
- cqlsh 5.0.1
- Cassandra 3.11.2
- CQL spec 3.4.4
- Native protocol v4
I create the CSV file on my local machine with a command like this:
COPY "keyspace_1"."table_1" (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9,column_10,column_11,column_12,column_13,column_14,column_15) TO 'test.csv' WITH delimiter=';' AND header=TRUE;
The command above creates the CSV file without any problems. I don't have much data; the first table holds only 1926 entries. A sample of the first 5 rows of the CSV file used for the import:
column_1;column_2;column_3;column_4;column_5;column_6;column_7;column_8;column_9;column_10;column_11;column_12;column_13;column_14;column_15
a83aaa26-2f0d-11eb-9330-af4bd388f154;a829040d-2f1d-11eb-9a4c-0b934b0a1818;791d6ed2-e5ec-4860-a165-e25b77dcb075;69f2f19a-3647-4719-abea-315fcba0c29b;2020-11-25 12:56:38.676+0000;;False;True;True;Hello!;2020-11-25 12:56:38.676+0000;;;;
a83aaa26-2f0d-11eb-9330-af4bd388f154;ea7d7c94-2f1c-11eb-a27a-0b934b0a1818;c0bc8368-644b-4238-b629-773f7f3163d8;69f2f19a-3647-4719-abea-315fcba0c29b;2020-11-25 12:51:20.466+0000;;False;False;True;dddd;2020-11-25 12:51:20.467+0000;;;;
a83aaa26-2f0d-11eb-9330-af4bd388f154;e702d2d4-2f1c-11eb-ae91-0b934b0a1818;791d6ed2-e5ec-4860-a165-e25b77dcb075;69f2f19a-3647-4719-abea-315fcba0c29b;2020-11-25 12:51:14.625+0000;;True;True;True;d;2020-11-25 12:51:14.625+0000;;;;
a83aaa26-2f0d-11eb-9330-af4bd388f154;e45d01eb-2f1c-11eb-b7a1-0b934b0a1818;791d6ed2-e5ec-4860-a165-e25b77dcb075;69f2f19a-3647-4719-abea-315fcba0c29b;2020-11-25 12:51:10.187+0000;;True;True;True;1;2020-11-25 12:51:10.187+0000;;;;
a83aaa26-2f0d-11eb-9330-af4bd388f154;7da3e5ae-2f0f-11eb-87a2-5120df6c4a8a;791d6ed2-e5ec-4860-a165-e25b77dcb075;69f2f19a-3647-4719-abea-315fcba0c29b;2020-11-25 11:15:14.385+0000;;True;True;True;123;2020-11-25 11:15:14.385+0000;;;;
After that, I run the second command, which is supposed to upload the contents into the second table:
COPY "keyspace_1"."table_2" (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9,column_10,column_11,column_12,column_13,column_14,column_15) FROM 'test.csv' WITH delimiter=';' AND header=TRUE;
What is the cause of this problem and how can I fix it?
I created the first table with the CQL query below. The second table has the same structure (a sketch of it follows the definition).
create table table_1 (
    column_1 uuid,
    column_2 timeuuid,
    column_3 uuid,
    column_4 uuid,
    column_11 text,
    column_14 uuid,
    column_12 uuid,
    column_13 uuid,
    column_15 text,
    column_10 boolean,
    column_8 boolean,
    column_9 boolean,
    column_5 timestamp,
    column_6 timestamp,
    column_7 timestamp,
    primary key (
        column_1,
        column_2
    )
) with clustering order by (
    column_2 desc
);
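For reference, the second table is created the same way; a minimal sketch of table_2, assuming it mirrors table_1 exactly (only the table name differs):

create table table_2 (
    column_1 uuid, column_2 timeuuid, column_3 uuid, column_4 uuid,
    column_5 timestamp, column_6 timestamp, column_7 timestamp,
    column_8 boolean, column_9 boolean, column_10 boolean,
    column_11 text, column_12 uuid, column_13 uuid, column_14 uuid,
    column_15 text,
    primary key (column_1, column_2)
) with clustering order by (column_2 desc);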
Edit 1:
I used a command like this in the terminal:
dsbulk load -url '/my_path/data.csv' -h '"my_host"' -port my_port -k 'keyspace_1' -t 'table_1' -header true -delim ';' -m '0=column_1,1=column_2,2=column_3'
Error message:
[driver] Error connecting to Node(endPoint=my_host/x.xxx.xx.xxx:xxxx, hostId=null, hashCode=7edbe679), trying next node (ConnectionInitException: [driver|control|id: 0x7bfdbb2f, L:/xxx.xxx.x.xx:xxxxx - R:my_host/x.xxx.xx.xxx:xxxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer))
Operation LOAD_20210211-073148-547063 failed: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=my_host/x.xxx.xx.xxx:xxxx, hostId=null, hashCode=7edbe679): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [driver|control|id: 0x7bfdbb2f, L:/xxx.xxx.x.xx:xxxxx - R:my_host/x.xxx.xx.xxx:xxxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)].
Suppressed: [driver|control|id: 0x7bfdbb2f, L:/xxx.xxx.x.xx:xxxxx - R:my_host/x.xxx.xx.xxx:xxxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer).
Caused by: Lost connection to remote peer.
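For reference, one way to check whether the contact point is reachable at all (using the same my_host / my_port placeholder values passed to dsbulk) would be a plain cqlsh connection, for example:

cqlsh my_host my_port -e "DESCRIBE KEYSPACE keyspace_1;"

If that also fails to connect, it would suggest the dsbulk error is a connectivity or protocol problem rather than an issue with the load mapping itself.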