
I am parsing HTML from the following site: http://www.asusparts.eu/partfinder/Asus/All In One/E Series and I was just wondering if there is any way of exploring the parsed attributes in Python? For example, the code below produces the following output:

datas = s.find(id='accordion')

a = datas.findAll('a')

for data in a:
    if(data.has_attr('onclick')):
        model_info.append(data['onclick'])
        print data

[Output]

<a href="#Bracket" onclick="getProductsBasedOnCategoryID('Asus','Bracket','ET10B','7138', this, 'E Series')">Bracket</a>

These are the values I would like to retrieve:

nCategoryID = Bracket

nModelID = ET10B

family = E Series
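
For reference, those three values line up with the quoted arguments inside that onclick string; a quick regex sketch (not part of my actual code, just to show the positions) would be:

import re

onclick = "getProductsBasedOnCategoryID('Asus','Bracket','ET10B','7138', this, 'E Series')"
#Pull out every single-quoted argument
args = re.findall(r"'([^']*)'", onclick)
#args == ['Asus', 'Bracket', 'ET10B', '7138', 'E Series']
nCategoryID, nModelID, family = args[1], args[2], args[4]
print nCategoryID, nModelID, family   #Bracket ET10B E Series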

Since the page is rendered via AJAX, they use a script source, and the following URL is built inside that script file:

url = 'http://json.zandparts.com/api/category/GetCategories/' + country + '/' + currency + '/' + nModelID + '/' + family + '/' + nCategoryID + '/' + brandName + '/' + null
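
For context, this is roughly how that GetCategories URL would be assembled once the three values are known; country, currency and brandName here are placeholder values I have assumed, not taken from the page:

from urllib import quote

#Placeholder values -- assumptions, the real ones come from the script on the page
country = 'nl'
currency = 'EUR'
brandName = 'Asus'
null = ''

nCategoryID = 'Bracket'
nModelID = 'ET10B'
family = 'E Series'

url = ('http://json.zandparts.com/api/category/GetCategories/' +
       country + '/' + currency + '/' + quote(nModelID) + '/' +
       quote(family) + '/' + quote(nCategoryID) + '/' + brandName + '/' + null)
print url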

How can I retrieve just the three values listed above?


[EDIT]


import string, urllib2, urlparse, csv, sys
from urllib import quote
from urlparse import urljoin
from bs4 import BeautifulSoup
from ast import literal_eval

changable_url = 'http://www.asusparts.eu/partfinder/Asus/All%20In%20One/E%20Series'
page = urllib2.urlopen(changable_url)
base_url = 'http://www.asusparts.eu'
soup = BeautifulSoup(page)

#Array to hold all options
redirects = []
#Array to hold all data
model_info = []

print "FETCHING OPTIONS"
select = soup.find(id='myselectListModel')
#print select.get_text()


options = select.findAll('option')

for option in options:
    if(option.has_attr('redirectvalue')):
        redirects.append(option['redirectvalue'])

for r in redirects:
    rpage = urllib2.urlopen(urljoin(base_url, quote(r)))
    s = BeautifulSoup(rpage)
    #print s



    print "FETCHING MAIN TITLE"
    #Finding all the headings for each specific Model
    maintitle = s.find(id='puffBreadCrumbs')
    print maintitle.get_text()

    #Find entire HTML container holding all data, rendered by AJAX
    datas = s.find(id='accordion')

    #Find all 'a' tags inside data container
    a = datas.findAll('a')

    #Find all 'span' tags inside data container
    content = datas.findAll('span')

    print "FETCHING CATEGORY" 

    #Find all 'a' tags which have an 'onclick' attribute.
    #ERROR: doesn't display anything, can't seem to find the 'onclick' attr
    if(hasattr(a, 'onclick')):
        arguments = literal_eval('(' + a['onclick'].replace(', this', '').split('(', 1)[1])
        model_info.append(arguments)
        print arguments #arguments[1] + " " + arguments[3] + " " + arguments[4] 


    print "FETCHING DATA"
    for complete in content:
        #Find all 'class' attributes inside 'span' tags
        if(complete.has_attr('class')):
            model_info.append(complete['class'])

            print complete.get_text()

    #Find all 'table data cells' inside table held in data container       
    print "FETCHING IMAGES"
    img = s.find('td')

    #Find all 'img' tags held inside these 'td' cells and print out
    images = img.findAll('img')
    print images

I have added an ERROR comment at the line where the problem is...


2 Answers


Similar to Martijn's answer, but a crude use of pyparsing (i.e. it could be refined to recognise the function call and only take the quoted strings inside the parentheses):

from bs4 import BeautifulSoup
from pyparsing import QuotedString
from itertools import chain

s = '''<a href="#Bracket" onclick="getProductsBasedOnCategoryID('Asus','Bracket','ET10B','7138', this, 'E Series')">Bracket</a>'''
soup = BeautifulSoup(s)
for a in soup('a', onclick=True):
    print list(chain.from_iterable(QuotedString("'", unquoteResults=True).searchString(a['onclick'])))
# ['Asus', 'Bracket', 'ET10B', '7138', 'E Series']
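
One possible refinement along those lines (a sketch, assuming the onclick value always has the form somefunction('...', ..., this, ...)): suppress the function name and the parentheses, keep the quoted arguments and drop the bare this:

from pyparsing import Word, alphanums, QuotedString, Keyword, Suppress, delimitedList

onclick = "getProductsBasedOnCategoryID('Asus','Bracket','ET10B','7138', this, 'E Series')"

# an argument is either a quoted string (kept) or the bare this (dropped)
arg = QuotedString("'") | Suppress(Keyword("this"))
# the function name and the parentheses are matched but suppressed from the results
call = Suppress(Word(alphanums + "_")) + Suppress("(") + delimitedList(arg) + Suppress(")")

print call.parseString(onclick).asList()
# ['Asus', 'Bracket', 'ET10B', '7138', 'E Series']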
answered 2013-04-22T13:01:30.323

You can parse that as a Python literal if you remove the 'this' part from it and take everything between the parentheses:

from ast import literal_eval

if data.has_attr('onclick'):
    arguments = literal_eval('(' + data['onclick'].replace(', this', '').split('(', 1)[1])
    model_info.append(arguments)
    print arguments

We remove the 'this' argument because it is not a valid Python string literal, and you don't want it anyway.

Demo:

>>> literal_eval('(' + "getProductsBasedOnCategoryID('Asus','Bracket','ET10B','7138', this, 'E Series')".replace(', this', '').split('(', 1)[1])
('Asus', 'Bracket', 'ET10B', '7138', 'E Series')

Now you have a Python tuple and you can pick out whatever values you like.

You want the values at index 1, 2 and 4, for example:

nCategoryID, nModelID, family = arguments[1], arguments[2], arguments[4]
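
Putting that together with the loop from your question (assuming datas is the id='accordion' container you already found), a minimal sketch:

from ast import literal_eval

model_info = []
for data in datas.findAll('a', onclick=True):
    arguments = literal_eval('(' + data['onclick'].replace(', this', '').split('(', 1)[1])
    nCategoryID, nModelID, family = arguments[1], arguments[2], arguments[4]
    model_info.append((nCategoryID, nModelID, family))
print model_info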
answered 2013-04-22T12:49:08.817