
I'm writing some code (yes, I'm a beginner) to extract information from a Facebook page. I'm using facebook-scraper to get the information. I need to create a CSV file to store it, but the file always ends up empty.

Original code:

from facebook_scraper import get_posts
for post in get_posts('bibliotecaunespbauru', pages=66):
    print(post['time']) # doesn't work
    print(post['post_id'])
    print(post['text'])
    print(post['image'])
    print(post['video'])
    print(post['likes'])
    print(post['comments'])
    print(post['shares'])
    print(post['link'])

Code to store it in a CSV file:

import csv
from facebook_scraper import get_posts
for post in get_posts('bibliotecaunespbauru', pages=10):
    data = [print(post['post_id']), print(post['text']), print(post['image'])]
with open("facebook.csv", "w", newline="") as f:
   writer = csv.writer(f)
   writer.writerow(data)
with open('facebook.csv', newline='') as csvfile:
    data = csv.reader(csvfile, delimiter=' ')
    for row in data:
        print(', '.join(row))

Hey, thank you so much. It makes sense now. However, it still doesn't work, because it now retrieves only a single post instead of 10 pages.

import csv
from facebook_scraper import get_posts
for post in get_posts('bibliotecaunespbauru', pages=10):
     data = [post['post_id'], post['text'], post['image']]
with open("facebook.csv", "a", newline="") as f:
   writer = csv.writer(f)
   writer.writerow(data)
with open('facebook.csv', newline='') as csvfile:
    data = csv.reader(csvfile, delimiter=' ')
    for row in data:
        print(', '.join(row))

Third attempt. Still getting only one post.


import csv
from facebook_scraper import get_posts
for post in get_posts('bibliotecaunespbauru', pages=10):
     data = [post['post_id'], post['text'], post['image']]
with open("facebook.csv", "a", newline="") as f:
   writer = csv.writer(f)
   writer.writerow(data)
with open('facebook.csv', newline='') as csvfile:
    data = csv.reader(csvfile, delimiter=' ')
    for row in data:
        print(', '.join(row))

Fourth attempt.

import csv
from facebook_scraper import get_posts
for post in get_posts('bibliotecaunespbauru', pages=10):
    data = [post['post_id'], post['text'], post['image']]
    with open("facebook.csv", "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(data)

Returns:

UnicodeEncodeError                        Traceback (most recent call last)
<ipython-input-46-b4f7f9df1e02> in <module>
      5     with open("facebook.csv", "a", newline="") as f:
      6         writer = csv.writer(f)
----> 7         writer.writerow(data)

~\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py in encode(self, input, final)
     17 class IncrementalEncoder(codecs.IncrementalEncoder):
     18     def encode(self, input, final=False):
---> 19         return codecs.charmap_encode(input,self.errors,encoding_table)[0]
     20 
     21 class IncrementalDecoder(codecs.IncrementalDecoder):

UnicodeEncodeError: 'charmap' codec can't encode characters in position 76-77: character maps to <undefined>

1 Answer

There are two problems with your code.

The first problem is how data is created.

Wrong:

[print(post['post_id']), print(post['text']), print(post['image'])]

Why?

On this line you are printing the values as you fetch them; the return value of print is None, so it is None that gets stored in the list.

Old output of data on each iteration: [None, None, None]
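You can see this in isolation with a minimal sketch (independent of facebook-scraper):

```python
# print() writes its argument to stdout and returns None,
# so collecting its return values yields a list of Nones.
value = "2092819824183367"
result = print(value)           # prints the id
data = [result, print("text")]  # prints "text"
print(data)                     # [None, None]
```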

Correction:

[post['post_id'], post['text'], post['image']]

New output of data: ['2092819824183367', 'Biblioteca da Unesp em Bauru ganha nova identidade visual ❤️\n\nhttps://youtu.be/dTCGp1eGmtM\n\nYOUTUBE.COM\nBiblioteca da Unesp em Bauru ganha nova identidade visual', None]

(PS: I don't know what that text means.)

The second problem is how you write to the file.

open("facebook.csv", "w", newline="")

Note the a in open("facebook.csv", "a", newline="") when writing to the file — this opens it in "append" mode. Opening it in w mode (your old code) overwrites the file on every loop iteration, leaving you with a fresh, blank file each time through the loop; that behavior is not what you need.
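A quick way to see the difference between the two modes (a small sketch; the file names temp_w.csv and temp_a.csv are throwaway names chosen for illustration):

```python
import csv
import os

# Start from a clean slate so the demo is repeatable.
for name in ("temp_w.csv", "temp_a.csv"):
    if os.path.exists(name):
        os.remove(name)

# "w" truncates the file on every open, so only the last row survives.
for row in [["a"], ["b"], ["c"]]:
    with open("temp_w.csv", "w", newline="") as f:
        csv.writer(f).writerow(row)

# "a" appends, so all three rows are kept.
for row in [["a"], ["b"], ["c"]]:
    with open("temp_a.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)

print(open("temp_w.csv").read())  # only "c"
print(open("temp_a.csv").read())  # "a", "b", "c", one per line
```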

So, integrating all the changes and the indentation fix, here is the complete code you expected:

import csv
from facebook_scraper import get_posts
for post in get_posts('bibliotecaunespbauru', pages=10):
    data = [post['post_id'], post['text'], post['image']]
    with open("facebook.csv", "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(data)

About the Unicode error:

You can use open("facebook.csv", "a", newline="", encoding="utf-8") when opening the file.
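The traceback above comes from Windows defaulting to the cp1252 codec, which cannot represent characters like the ❤️ emoji in the post text. A minimal sketch of the fix, using a sample row instead of live scraper output (the file name facebook_utf8.csv is chosen here just for illustration):

```python
import csv

# Sample row resembling the scraper output; the emoji is what
# triggers UnicodeEncodeError under the default cp1252 codec.
row = ["2092819824183367", "nova identidade visual ❤️", None]

# encoding="utf-8" lets the writer handle any Unicode character.
with open("facebook_utf8.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerow(row)

# Read it back; note that csv writes None as an empty field.
with open("facebook_utf8.csv", newline="", encoding="utf-8") as f:
    print(next(csv.reader(f)))  # ['2092819824183367', 'nova identidade visual ❤️', '']
```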

Answered 2020-11-08T14:29:49.407