As the title suggests, this is just a simple script I put together for downloading files. The list of URLs is stored in an XML document; the script parses it, loops through the entries, and downloads each file to a predetermined location (though that could easily be changed to take user input).
Script:
#!/bin/bash

# Split the XML stream on "<" and ">" so each call returns one tag name
# (ENTITY) and the text that follows it (CONTENT).
read_dom () {
    local IFS='>'
    read -d '<' ENTITY CONTENT
}

while read_dom; do
    if [[ $ENTITY = "xmlTag" ]]; then
        echo "$CONTENT"
        if [[ $CONTENT != "null" ]]; then
            # -N: skip files that haven't changed, -r -k -p -l 0: recursive fetch
            # with converted links and page requisites, -nd: no directory tree,
            # -P downloads: save everything into the downloads/ folder
            wget -Nrkpl 0 -nd -P downloads "$CONTENT"
        fi
    fi
done < listOfXML.xml > temporaryXMLPage.txt

echo "File cleanup"
rm temporaryXMLPage.txt
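For reference, here's roughly the kind of input the script expects. This is just a made-up sketch: the tag name "xmlTag", the file name listOfXML.xml, and the URLs are placeholders rather than the actual document I was working with. Each xmlTag element holds one download URL, or the literal string "null" for entries that should be skipped:

<!-- listOfXML.xml (hypothetical example) -->
<files>
    <xmlTag>http://example.com/image1.jpg</xmlTag>
    <xmlTag>http://example.com/image2.jpg</xmlTag>
    <xmlTag>null</xmlTag>
</files>

With that input, read_dom hands back pairs like ENTITY="xmlTag" and CONTENT="http://example.com/image1.jpg", the two real URLs get fetched into downloads/, and the "null" entry is skipped.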
Sources:
http://stackoverflow.com/questions/893585/how-to-parse-xml-in-bash
http://stackoverflow.com/questions/15407884/javascript-download-images-from-url