
I'm using BeautifulSoup to extract categories and sub-categories from an HTML page. The HTML looks like this:

<a class='menuitem submenuheader' href='#'>Beverages</a><div class='submenu'><ul><li><a href='productlist.aspx?parentid=053&catid=055'>Juice</a></li></ul></div><a class='menuitem submenuheader' href='#'>DIY</a><div class='submenu'><ul><li><a href='productlist.aspx?parentid=007&catid=052'>Miscellaneous</a></li><li><a href='productlist.aspx?parentid=007&catid=047'>Sockets</a></li><li><a href='productlist.aspx?parentid=007&catid=046'>Spanners</a></li><li><a href='productlist.aspx?parentid=007&catid=045'>Tool Boxes</a></li></ul></div><a class='menuitem submenuheader' href='#'>Electronics</a><div class='submenu'><ul><li><a href='productlist.aspx?parentid=003&catid=019'>Audio/Video</a></li><li><a href='productlist.aspx?parentid=003&catid=027'>Cameras</a></li><li><a href='productlist.aspx?parentid=003&catid=023'>Cookers</a></li><li><a href='productlist.aspx?parentid=003&catid=024'>Freezers</a></li><li><a href='productlist.aspx?parentid=003&catid=025'>Kitchen Appliances</a></li><li><a href='productlist.aspx?parentid=003&catid=048'>Measuring Instruments</a></li><li><a href='productlist.aspx?parentid=003&catid=020'>Microwaves</a></li><li><a href='productlist.aspx?parentid=003&catid=050'>Miscellaneous</a></li><li><a href='productlist.aspx?parentid=003&catid=026'>Personal Care</a></li><li><a href='productlist.aspx?parentid=003&catid=021'>Refrigerators</a></li><li><a href='productlist.aspx?parentid=003&catid=018'>TV</a></li><li><a href='productlist.aspx?parentid=003&catid=022'>Washers/Dryers/Vacuum Cleaners</a></li></ul></div>

Here Beverages is a category and Juice is a sub-category.

I have the following code to extract the categories:

from bs4 import BeautifulSoup
import re
import urllib2


url = "http://www.myprod.com"

def main():
  response = urllib2.urlopen(url)
  html = response.read()

  soup = BeautifulSoup(html)
  categories = soup.findAll("a", {"class" :'menuitem submenuheader'})
  for cat in categories:
    print cat.contents[0]

How would I get the sub-categories in this format?

[Beverages = Category]
 [Juice = Sub]
[DIY = Category]
 [Miscellaneous = Sub]
 [Spanners = Sub]
 [Sockets = Sub]
[Electronics = Category]
 [Audio/Video = Sub]
 [Cameras = Sub]

3 Answers


From each category anchor you have to find the next element, and then find its li elements from there:

print cat.findNext().findAll('li')
Answered 2012-07-07T17:27:05.200
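Combining that with the loop from the question gives the bracketed output format the asker wants. A minimal Python 3 sketch, assuming bs4 is installed; the HTML is trimmed from the question's snippet to two categories, and find_next("div") is used (the modern name for findNext, restricted to div to be safe about intervening text nodes):

```python
from bs4 import BeautifulSoup

# Trimmed version of the question's HTML, for illustration.
html = ("<a class='menuitem submenuheader' href='#'>Beverages</a>"
        "<div class='submenu'><ul>"
        "<li><a href='productlist.aspx?parentid=053&catid=055'>Juice</a></li>"
        "</ul></div>"
        "<a class='menuitem submenuheader' href='#'>DIY</a>"
        "<div class='submenu'><ul>"
        "<li><a href='productlist.aspx?parentid=007&catid=047'>Sockets</a></li>"
        "<li><a href='productlist.aspx?parentid=007&catid=046'>Spanners</a></li>"
        "</ul></div>")

soup = BeautifulSoup(html, "html.parser")
lines = []
for cat in soup.find_all("a", class_="menuitem submenuheader"):
    lines.append("[%s = Category]" % cat.get_text())
    # find_next("div") jumps to the <div class='submenu'> that follows the anchor
    for li in cat.find_next("div").find_all("li"):
        lines.append(" [%s = Sub]" % li.get_text())
print("\n".join(lines))
```

Each category anchor is immediately followed by its submenu div, which is why the find_next call pairs them correctly.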

Given that your HTML always has those submenu divs, it's best to return one list of categories and another list of sub-categories, such that cats[i] corresponds to subcats[i], or to return a dictionary instead, depending on the shape you want.

In a Python shell:

>>> from BeautifulSoup import BeautifulSoup
>>> html = '''<a class="menuitem submenuheader" href="#">Beverages</a>
... <div class="submenu">
...  <ul>
...   <li><a href="productlist.aspx?parentid=053&amp;catid=055">Juice</a></li>
...   <li><a href="productlist.aspx?parentid=053&amp;catid=055">Milk</a></li>
...  </ul>
... </div>
... <a class="menuitem submenuheader" href="#">DIY</a>
... <div class="submenu">
...  <ul>
...   <li><a href="productlist.aspx?parentid=053&amp;catid=055">Micellaneous</a></li>
...   <li><a href="productlist.aspx?parentid=053&amp;catid=055">Spanners</a></li>
...   <li><a href="productlist.aspx?parentid=053&amp;catid=055">Sockets</a></li>
...  </ul>
... </div>'''
>>> soup = BeautifulSoup(html)
>>> categories = soup.findAll("a", {"class": 'menuitem submenuheader'})
>>> cats = [cat.text for cat in categories]
>>> sub_menus = soup.findAll("div", {"class": "submenu"})
>>> subcats = []
>>> for menu in sub_menus:
...     subcat = [item.text for item in menu.findAll('li')]
...     subcats.append(subcat)
... 
>>> print cats
[u'Beverages', u'DIY']
>>> print subcats
[[u'Juice', u'Milk'], [u'Micellaneous', u'Spanners', u'Sockets']]
>>> cat_dict = dict(zip(cats,subcats))
>>> print cat_dict
{u'Beverages': [u'Juice', u'Milk'], u'DIY': [u'Micellaneous', u'Spanners', u'Sockets']}
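Either structure maps directly onto the output format the question asks for. For example, printing from the dictionary (a Python 3 sketch with the session's result hard-coded; dict insertion order is preserved in Python 3.7+):

```python
# Result of the shell session above, hard-coded here for illustration.
cat_dict = {'Beverages': ['Juice', 'Milk'],
            'DIY': ['Micellaneous', 'Spanners', 'Sockets']}

out = []
for cat, subs in cat_dict.items():
    out.append("[%s = Category]" % cat)
    for sub in subs:
        out.append(" [%s = Sub]" % sub)
print("\n".join(out))
```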
Answered 2012-07-07T19:02:14.100

Looking at the web page in question, it appears that all the news stories are in h3 tags with the class item-heading. You can use BeautifulSoup to select all the story headings and then walk up to the enclosing a tag to get its href:

In [54]: [i.parent.attrs["href"] for i in soup.select('a > h3.item-heading')]
Out[54]:
['/news/us-news/civil-rights-groups-fight-trump-s-refugee-ban-uncertainty-continues-n713811',
 '/news/us-news/protests-erupt-nationwide-second-day-over-trump-s-travel-ban-n713771',
 '/politics/politics-news/some-republicans-criticize-trump-s-immigration-order-n713826',
 ...  # trimmed for readability
]

I used a list comprehension, but you could split this into separate steps:

# select all `h3` tags with the matching class that are contained within an `a` link.
# This excludes any random links elsewhere on the page.
story_headers = soup.select('a > h3.item-heading')

# Iterate through all the matching `h3` items and access their parent `a` tag.
# Then, within the parent you have access to the `href` attribute.
list_of_links = [i.parent.attrs for i in story_headers]

# Finally, extract the links into a tidy list
links = [i["href"] for i in list_of_links]

Once you have the list of links, you can iterate over it and check whether the first character is /, to match only local links and not external ones.
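A minimal sketch of that filtering step using only the standard library; the base URL is a placeholder (the real site's base is not given in the answer), and the external link is hypothetical:

```python
from urllib.parse import urljoin

base = "https://www.example.com"  # placeholder, not the real site
hrefs = [
    "/news/us-news/protests-erupt-nationwide-second-day-over-trump-s-travel-ban-n713771",
    "https://twitter.com/intent/tweet",  # hypothetical external link, should be skipped
    "/politics/politics-news/some-republicans-criticize-trump-s-immigration-order-n713826",
]

# Keep only site-local links (those starting with "/") and make them absolute.
local = [urljoin(base, h) for h in hrefs if h.startswith("/")]
print(local)
```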

Answered 2017-01-30T10:42:06.220