This question is a follow-up to an earlier one. If you need more background, you can find the original question here:
Populating a Python list using data obtained from an lxml xpath command.
I have incorporated @ihor-kaharlichenko's excellent suggestion (from my original question) into the revised code here:
from lxml import etree as ET
from datetime import datetime
xmlDoc = ET.parse('http://192.168.1.198/Bench_read_scalar.xml')
response = xmlDoc.getroot()
tags = (
    'address',
    'status',
    'flow',
    'dp',
    'inPressure',
    'actVal',
    'temp',
    'valveOnPercent',
)
dmtVal = []
for dmt in response.iter('dmt'):
    val = [str(dmt.xpath('./%s/text()' % tag)) for tag in tags]
    val.insert(0, str(datetime.now()))  # Add timestamp at beginning of each record
    dmtVal.append(val)
for item in dmtVal:
    str(item).strip('[')
    str(item).strip(']')
    str(item).strip('"')
That last block is where I'm running into trouble. The data I get in dmtVal looks like:
[['2012-08-16 12:38:45.152222', "['0x46']", "['0x32']", "['1.234']", "['5.678']", "['9.123']", "['4.567']", "['0x98']", "['0x97']"], ['2012-08-16 12:38:45.152519', "['0x47']", "['0x33']", "['8.901']", "['2.345']", "['6.789']", "['0.123']", "['0x96']", "['0x95']"]]
However, I really want the data to look like this:
[['2012-08-16 12:38:45.152222', '0x46', '0x32', '1.234', '5.678', '9.123', '4.567', '0x98', '0x97'], ['2012-08-16 12:38:45.152519', '0x47', '0x33', '8.901', '2.345', '6.789', '0.123', '0x96', '0x95']]
I figured this was a fairly simple string-stripping job. I first tried the code inside the original iteration (where dmtVal is initially populated), but that didn't work, so I moved the strip operations outside the loop, as shown above, and it still doesn't work. I suspect I'm making some kind of rookie mistake, but I can't find it. All suggestions welcome!
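For what it's worth, here is a minimal sketch of why the strip() calls above have no effect, using the same bracketed strings shown in the output. str.strip() returns a new string rather than modifying anything in place, the return values in the loop are simply discarded, and strip() only trims characters from the ends of a string anyway:

```python
# str.strip() returns a NEW string; it never modifies its operand.
item = "['0x46']"
item.strip('[')                       # return value is discarded
assert item == "['0x46']"             # item is unchanged

# Even when the result is kept, strip() only trims the *ends*:
assert item.strip("[]'") == '0x46'    # leading/trailing [ ' ] removed
assert "a['b']c".strip("[]'") == "a['b']c"  # inner chars are untouched

# And in the loop above, each item is a *list*, so str(item) builds a
# throwaway string on every pass -- the list itself is never changed.
row = ["['0x46']", "['0x32']"]
str(row).strip('[')                   # no effect on row
assert row == ["['0x46']", "['0x32']"]
```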
Thanks, everyone, for the prompt and helpful replies. Here is the corrected code:
from lxml import etree as ET
from datetime import datetime
xmlDoc = ET.parse('http://192.168.1.198/Bench_read_scalar.xml')
print '...Starting to parse XML nodes'
response = xmlDoc.getroot()
tags = (
    'address',
    'status',
    'flow',
    'dp',
    'inPressure',
    'actVal',
    'temp',
    'valveOnPercent',
)
dmtVal = []
for dmt in response.iter('dmt'):
    val = [' '.join(dmt.xpath('./%s/text()' % tag)) for tag in tags]
    val.insert(0, str(datetime.now()))  # Add timestamp at beginning of each record
    dmtVal.append(val)
print dmtVal
print '...Done'
which produces:
...Starting to parse XML nodes
[['2012-08-16 14:41:10.442776', '0x46', '0x32', '1.234', '5.678', '9.123', '4.567', '0x98', '0x97'], ['2012-08-16 14:41:10.443052', '0x47', '0x33', '8.901', '2.345', '6.789', '0.123', '0x96', '0x95']]
...Done
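For reference, the fix works because dmt.xpath('./tag/text()') returns a list of matching text nodes; str() on a list serializes the whole list, brackets and quotes included, while ' '.join() concatenates the elements into a plain string. A minimal sketch with the same values as above:

```python
# What an xpath text() query returns: a list of text-node strings.
texts = ['0x46']

# str() serializes the whole list, brackets and quotes included:
assert str(texts) == "['0x46']"

# ' '.join() concatenates the elements, giving just the text:
assert ' '.join(texts) == '0x46'

# join also degrades gracefully when a tag is missing (empty list):
assert ' '.join([]) == ''
```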
Thanks, everyone!