
I've been trying to write a simple script that takes a list of queries from a .txt file, appends each one to a base URL variable, then scrapes the content and writes it to a text file.

Here's what I have so far:

#!/bin/bash

url="example.com/?q="
for i in $(cat query.txt); do
    content=$(curl -o $url $i)
    echo $url $i
    echo $content >> output.txt
done

list:

images
news
stuff
other

error log:

curl: (6) Could not resolve host: other; nodename nor servname provided, or not known
example.com/?q= other

If I use this command straight from the command line I get some output into the file:

curl -L http://example.com/?q=other >> output.txt

Ultimately I would like the output to be:

fetched:    http://example.com/?q=other
content:    the output of the page

followed by the next query in the list.

2 Answers


Use more quotes!

Try this:

url="example.com/?q="
for i in $(cat query.txt); do
    content="$(curl -s "$url$i")"   # quote both the command substitution and the URL
    echo "$content" >> output.txt
done
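To also get the `fetched:` / `content:` format the question asks for, here is a minimal sketch. It recreates `query.txt` with the sample queries so it is self-contained; `example.com` is the placeholder URL from the question, and without network access the content lines simply come out empty:

```shell
#!/bin/bash
# Sketch of the output format requested in the question.
# query.txt is recreated with the question's sample queries.
printf '%s\n' images news stuff other > query.txt

url="http://example.com/?q="
: > output.txt                      # start with an empty output file

while IFS= read -r query; do
    full_url="${url}${query}"
    echo "fetched:    $full_url" >> output.txt
    # -s silences the progress meter, -L follows redirects;
    # quoting "$full_url" keeps the URL intact
    echo "content:    $(curl -sL "$full_url")" >> output.txt
done < query.txt
```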
Answered 2013-04-21T12:58:41.967

You have nested quotes; try something like this:

#!/bin/bash

url=https://www.google.fr/?q=
while IFS= read -r query
do
    content=$(curl "${url}${query}")   # ${url}, not {$url}
    echo "$query"
    echo "$content" >> output.txt
done < query.txt
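The `while read` loop above also behaves better than `for i in $(cat query.txt)` when a query contains whitespace: the unquoted `for` form word-splits each line. A small demo (the file name and the two-word sample query are made up for illustration):

```shell
#!/bin/bash
# Contrast the two looping styles: a two-word query survives `read -r`
# but is split into two iterations by the unquoted $(cat ...) form.
printf '%s\n' 'images' 'site news' > queries_demo.txt

echo "--- for i in \$(cat ...) ---"
for i in $(cat queries_demo.txt); do
    echo "query: [$i]"          # three iterations: images, site, news
done

echo "--- while read -r ---"
while IFS= read -r q; do
    echo "query: [$q]"          # two iterations: images, site news
done < queries_demo.txt
```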
Answered 2013-04-21T13:07:35.270