
I'm trying to parse a JSON response of customer data (names and emails) and build a CSV file with the same column headers.

For some reason, every time I run this code I get a CSV file where a single cell contains all of the first names run together (no separation between them... just one string of names appended to each other), and the same thing for the last names. The code below doesn't add the emails yet (I'll worry about that later).

Code:

def self.fetch_emails
  access_token ||= AssistlyArticle.remote_setup
  cust_response = access_token.get("https://blah.desk.com/api/v1/customers.json")
  cust_ids = JSON.parse(cust_response.body)["results"].map { |w| w["customer"]["id"].to_i }

  FasterCSV.open("/Users/default/file.csv", "wb") do |csv|
    # header row
    csv << ["First name", "Last Name"]
    # data rows
    cust_ids.each do |cust_firstname|
      json = JSON.parse(cust_response.body)["results"]
      csv << [json.map { |x| x["customer"]["first_name"] }, json.map { |x| x["customer"]["last_name"] }]
    end
  end
end

Output:

First Name        | Last Name
JohnJillJamesBill   SearsStevensSethBing

and so on...

Desired output:

First Name | Last Name
John       | Sears
Jill       | Stevens
James      | Seth
Bill       | Bing

Sample JSON:

{
    "page":1,
    "count":20,
    "total":541,
    "results":
    [
        {
            "customer":
            {
                "custom_test":null,
                "addresses":
                [
                    {
                        "address":
                        {
                            "region":"NY",
                            "city":"Commack",
                            "location":"67 Harned Road,
                             Commack,
                             NY 11725,
                             USA",
                            "created_at":"2009-12-22T16:21:23-05:00",
                            "street_2":null,
                            "country":"US",
                            "updated_at":"2009-12-22T16:32:37-05:00",
                            "postalcode":"11725",
                            "street":"67 Harned Road",
                            "lng":"-73.196225",
                            "customer_contact_type":"home",
                            "lat":"40.716894"
                        }
                    }
                ],
                "phones":
                [
                ],
                "last_name":"Suriel",
                "custom_order":"4",
                "first_name":"Jeremy",
                "custom_t2":"",
                "custom_i":"",
                "custom_t3":null,
                "custom_t":"",
                "emails":
                [
                    {
                        "email":
                        {
                            "verified_at":"2009-11-27T21:41:11-05:00",
                            "created_at":"2009-11-27T21:40:55-05:00",
                            "updated_at":"2009-11-27T21:41:11-05:00",
                            "customer_contact_type":"home",
                            "email":"jeremysuriel+twitter@gmail.com"
                        }
                    }
                ],
                "id":8,
                "twitters":
                [
                    {
                        "twitter":
                        {
                            "profile_image_url":"http://a3.twimg.com...",
                            "created_at":"2009-11-25T10:35:56-05:00",
                            "updated_at":"2010-05-29T22:41:55-04:00",
                            "twitter_user_id":12267802,
                            "followers_count":93,
                            "verified":false,
                            "login":"jrmey"
                        }
                    }
                ]
            }
        },
        {
            "customer":
            {
                "custom_test":null,
                "addresses":
                [
                ],
                "phones":
                [
                ],
                "last_name":"",
                "custom_order":null,
                "first_name":"jeremy@example.com",
                "custom_t2":null,
                "custom_i":null,
                "custom_t3":null,
                "custom_t":null,
                "emails":
                [
                    {
                        "email":
                        {
                            "verified_at":null,
                            "created_at":"2009-12-05T20:39:00-05:00",
                            "updated_at":"2009-12-05T20:39:00-05:00",
                            "customer_contact_type":"home",
                            "email":"jeremy@example.com"
                        }
                    }
                ],
                "id":27,
                "twitters":
                [
                    null
                ]
            }
        }
    ]
}

Is there a better way to use FasterCSV to accomplish this? I assumed << appends a new row each time... but it doesn't seem to work that way. I'd appreciate any help!


1 Answer


You've somehow managed to tangle everything up; you're parsing the JSON far too many times (and inside the loop!). Let's simplify:

# Parse the response body once, pulling out each "customer" hash.
customers = JSON.parse(cust_response.body)["results"].map { |x| x["customer"] }
# One push per customer gives one CSV row per customer.
customers.each do |c|
  csv << [c['first_name'], c['last_name']]
end
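
For reference, here's why the original code mashed the names together: json.map { ... } returns the whole array of first names, so each csv << writes one row whose cells are Arrays, and under Ruby 1.8 (where FasterCSV is typically used) Array#to_s joins the elements with no separator. A quick illustration with made-up names:

require 'rubygems'
require 'fastercsv'

names = ["John", "Jill", "James", "Bill"]

# Pushing the whole array as a single cell: on Ruby 1.8, Array#to_s behaves
# like join(""), so the names end up run together in one field.
FasterCSV.generate { |csv| csv << [names] }
# => "JohnJillJamesBill\n"

# Pushing one row per name keeps each value in its own row.
FasterCSV.generate { |csv| names.each { |n| csv << [n] } }
# => "John\nJill\nJames\nBill\n"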

Also, 'wb' is the wrong mode for a CSV file; use plain 'w'.
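
Putting both fixes together, here's a rough sketch of the whole method (a sketch only: it reuses the AssistlyArticle.remote_setup call, URL, and output path from your post, and also pulls the first email out of each customer's nested emails array since you mentioned you'll need those later):

require 'rubygems'
require 'json'
require 'fastercsv'

def self.fetch_emails
  access_token ||= AssistlyArticle.remote_setup
  cust_response = access_token.get("https://blah.desk.com/api/v1/customers.json")

  # Parse the response body once, then work with the array of customer hashes.
  customers = JSON.parse(cust_response.body)["results"].map { |r| r["customer"] }

  FasterCSV.open("/Users/default/file.csv", "w") do |csv|
    csv << ["First name", "Last Name", "Email"]        # header row
    customers.each do |c|
      first_email = c["emails"].first                  # nil if the customer has no emails
      csv << [c["first_name"],
              c["last_name"],
              first_email && first_email["email"]["email"]]
    end
  end
end

(FasterCSV was folded into Ruby's standard library as CSV in 1.9, so on newer Rubies the same thing works with require 'csv' and CSV.open.)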

Answered 2012-09-05T02:26:03.460