
This is a long question and I may have left something out, so please ask if you need more information.

I've been using ScraperWiki to scrape data from Google Scholar, and until recently I just put all the URLs in like this:

elec_urls = """http://1.hidemyass.com/ip-5/encoded/Oi8vc2Nob2xhci5nb29nbGUuY29tL2NpdGF0aW9ucz91c2VyPWo0YnRpeXNBQUFBSiZobD1lbg%3D%3D&f=norefer
http://4.hidemyass.com/ip-1/encoded/Oi8vc2Nob2xhci5nb29nbGUuY29tL2NpdGF0aW9ucz91c2VyPVZXaFJiZEFBQUFBSiZobD1lbg%3D%3D&f=norefer
http://4.hidemyass.com/ip-2/encoded/Oi8vc2Nob2xhci5nb29nbGUuY29tL2NpdGF0aW9ucz91c2VyPV84X09JSWNBQUFBSiZobD1lbg%3D%3D&f=norefer
http://1.hidemyass.com/ip-4/encoded/Oi8vc2Nob2xhci5nb29nbGUuY29tL2NpdGF0aW9ucz91c2VyPUh3WHdmTGtBQUFBSiZobD1lbg%3D%3D&f=norefer
http://4.hidemyass.com/ip-1/encoded/Oi8vc2Nob2xhci5nb29nbGUuY29tL2NpdGF0aW9ucz91c2VyPXU1NWFWZEFBQUFBSiZobD1lbg%3D%3D&f=norefer
""".strip()

elec_urls = elec_urls.splitlines()

Then I scrape each page and put the information I want into a list of dictionaries, sort it, remove duplicates, sort it again by a different key, and export the information I want to a Google Docs spreadsheet. This works 100%.
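The sort / de-duplicate / re-sort pipeline described above can be sketched roughly as follows; the field names here ("Name", "Citations") are placeholders, not the scraper's actual keys:

```python
# Placeholder records standing in for the scraped dictionaries.
records = [
    {"Name": "Smith", "Citations": 120},
    {"Name": "Jones", "Citations": 45},
    {"Name": "Smith", "Citations": 120},  # exact duplicate
]

# Sort descending by one key.
records.sort(key=lambda r: r["Citations"], reverse=True)

# Remove exact duplicates while preserving order.
seen = set()
deduped = []
for r in records:
    marker = tuple(sorted(r.items()))
    if marker not in seen:
        seen.add(marker)
        deduped.append(r)

# Sort again by a different key.
deduped.sort(key=lambda r: r["Name"])
```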

I'm trying to change this so that I can have another Google Docs spreadsheet into which I put all the URLs, and it will do the same thing. Here is what I have so far:

import gdata.spreadsheet.service

def InputUrls(Entered_doc, EnteredURL):
    username = 'myemail'
    password = 'mypassword'
    doc_name = Entered_doc
    spreadsheet_id = Entered_doc
    worksheet_id = 'od6'

    # Connect to Google
    gd_client = gdata.spreadsheet.service.SpreadsheetsService()
    gd_client.email = username
    gd_client.password = password
    gd_client.source = EnteredURL
    gd_client.ProgrammaticLogin()

    # Now that we're connected, query the spreadsheet and worksheet by their IDs.
    rows = gd_client.GetListFeed(spreadsheet_id, worksheet_id).entry

    # rows is an iterator yielding the spreadsheet rows, keyed by column name.
    # Collect every cell's text; return after ALL rows, not inside the loop.
    urlslist = []
    for row in rows:
        for key in row.custom:
            urlslist.append(row.custom[key].text)
    return urlslist

def URLStoScrape(ToScrape):
    Dep = []
    for i in range(0, len(ToScrape)):
        Department_urls = ToScrape[i].strip()
        Department_urls = Department_urls.splitlines()
        Done = MainScraper(Department_urls)
        Dep.append(Done)
    return Dep

ElectricalDoc = '0AkGb10ekJtfQdG9EOHN0VzRDdVhWaG1kNVEtdVpyRlE'
ElectricalUrl = 'https://docs.google.com/spreadsheet/ccc?    '
ToScrape_Elec = InputUrls(ElectricalDoc, ElectricalUrl)

This seems to work fine, but when the program gets to the sorting I get the following error:

Traceback (most recent call last):
  File "./code/scraper", line 230, in <module>
    Total_and_Hindex_Electrical = GetTotalCitations(Electrical)
  File "./code/scraper", line 89, in GetTotalCitations
    Wrt_CitationURL = Sorting(Department, "CitationURL")
  File "./code/scraper", line 15, in Sorting
    SortedData = sorted(Unsorted, reverse = True, key = lambda k: k[pivot])
  File "./code/scraper", line 15, in <lambda>
    SortedData = sorted(Unsorted, reverse = True, key = lambda k: k[pivot])
TypeError: list indices must be integers, not str

I'm almost certain it has something to do with the URLStoScrape function, but I don't know how to fix it. Any help would be great.

Thanks, and let me know if more information is needed.
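One thing worth checking (a hypothetical sketch, since MainScraper isn't shown): URLStoScrape appends each MainScraper result whole, so Dep comes back as a list of lists. If the sorting code expects a flat list of dictionaries, flattening one level restores the shape it wants:

```python
# Hypothetical shape of Dep: each element is a whole MainScraper result
# (assumed here to be a list of dicts), so indexing by a string key fails.
dep = [
    [{"CitationURL": "http://example.com/a"}],
    [{"CitationURL": "http://example.com/b"}],
]

# Flatten one level so each element is a dict again.
flat = [record for sublist in dep for record in sublist]
```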


1 Answer


I think the problem is at line 89:

Wrt_CitationURL = Sorting(Department, "CitationURL")

Either "CitationURL" should be an integer index, or the value passed to the key function of sorted() should be a dictionary.
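A minimal sketch of both cases, assuming a "CitationURL" field: indexing a list with a string key raises exactly this TypeError, while a list of dictionaries sorts fine:

```python
# Case 1: list of lists -- the string key triggers the TypeError.
unsorted_lists = [["http://example.com/b"], ["http://example.com/a"]]
try:
    sorted(unsorted_lists, reverse=True, key=lambda k: k["CitationURL"])
    raised = False
except TypeError:
    raised = True  # list indices must be integers, not str

# Case 2: list of dicts -- the same sorted() call works as intended.
unsorted_dicts = [
    {"CitationURL": "http://example.com/a"},
    {"CitationURL": "http://example.com/b"},
]
result = sorted(unsorted_dicts, reverse=True, key=lambda k: k["CitationURL"])
```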

Answered 2013-08-21T04:27:59.107