
How do I batch-download files under Ubuntu?

Posted: 2009-05-11 17:11
reozen
I've subscribed to a lot of BBC podcasts on google.com/ig, all mp3 audio files, and I listen to them often.
But after installing Ubuntu I found it has no download tool (or at least I haven't found one) like Xunlei (迅雷), where you can right-click a page and choose something like "download all links". Please help me out; I'm an absolute newbie.
Below is my google/ig page; every little widget on it offers audio-file downloads.

Re: How do I batch-download files under Ubuntu?

Posted: 2009-05-11 17:12
shellex
The Firefox extension DownThemAll.
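If you would rather stay on the command line, the same "grab every mp3 linked from a page" idea can be sketched with grep and wget. This is only an illustration: the page content and file names below are stand-ins, not anything from the thread.

```shell
# In a real run you would first save the page, e.g.:
#   wget -q -O page.html "http://example.com/podcasts"
# Here a tiny stand-in page is created locally so the pipeline can be tried anywhere.
printf '<a href="http://example.com/a.mp3">A</a>\n<a href="http://example.com/b.mp3">B</a>\n' > page.html

# Pull out every .mp3 link, de-duplicate, and save the list.
grep -Eo 'http://[^"<]+\.mp3' page.html | sort -u > links.txt
cat links.txt

# A real run would then download them all, resuming partial files:
#   wget -c -i links.txt
```

wget's `-i links.txt` reads the URL list from the file, and `-c` resumes interrupted downloads, which is handy for large audio files.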

Re: How do I batch-download files under Ubuntu?

Posted: 2009-05-11 17:14
reozen
Where do I find it?

Re: How do I batch-download files under Ubuntu?

Posted: 2009-05-11 17:16
qiang_liu8183
reozen wrote: Where do I find it?
Search for it on Google.

Re: How do I batch-download files under Ubuntu?

Posted: 2009-05-11 17:19
cafee
Open this page: http://addons.mozine.cn/firefox/88/
Click "Install Now" to install Fasterfox (this extension speeds up Firefox).
Open this page: http://addons.mozine.cn/firefox/89/
Click "Install Now" to install DownThemAll!
Open this page: http://addons.mozine.cn/firefox/377
Install DownloadHelper (for downloading FLV videos from Youku, Tudou and other video sites).
Open this page: http://addons.mozine.cn/firefox/373/
Install Batch Download (for batch-downloading images).

Re: How do I batch-download files under Ubuntu?

Posted: 2009-05-11 17:24
reozen
Is there a technical thread on this forum I could refer to?

------------------------------------------

Problem solved; the batch download is already running. Thanks, everyone!

Re: How do I batch-download files under Ubuntu?

Posted: 2009-05-11 20:09
dynamic0603
reozen wrote: I've subscribed to a lot of BBC podcasts on google.com/ig, all mp3 audio files, and I listen to them often.
But after installing Ubuntu I found it has no download tool (or at least I haven't found one) like Xunlei, where you can right-click a page and choose something like "download all links". Please help me out; I'm an absolute newbie.
Below is my google/ig page; every little widget on it offers audio-file downloads.
So you listen to BBC podcasts too? Here is a script that downloads BBC podcasts automatically; just change the podcast download directory to your own.
Usage:
Put the script in some directory, open a terminal, cd to that directory, run chmod 777 BBC_Podcast, then run ./BBC_Podcast.
Note: you need to edit the download directory inside the script. You can also change the podcast feed URLs, or add your own by following the existing pattern.
Finally, you can put the script in /usr/local/bin/ by running sudo cp BBC_Podcast /usr/local/bin/ in a terminal.
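The setup steps above, written out as commands. A stub BBC_Podcast is created here purely so the sequence can be tried anywhere; with the real script you would skip the first line. chmod +x is enough to make it runnable (the post's chmod 777 also works, but grants more than needed).

```shell
# Stand-in for the real downloaded script, so this sketch is self-contained:
printf '#!/bin/bash\necho running\n' > BBC_Podcast

chmod +x BBC_Podcast   # make it executable
./BBC_Podcast          # run it from the current directory

# Optionally put it on the PATH:
#   sudo cp BBC_Podcast /usr/local/bin/
```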

Re: How do I batch-download files under Ubuntu?

Posted: 2009-05-11 20:11
dynamic0603
#!/bin/bash
# Thanks to users(laiwei) on the USTC 瀚海星云 BBS for the main body of this script
# Posting to the BBS may have stripped some " characters; please double-check
# Thanks to sudo(阿囧) on 瀚海星云 for answering this question:
# can the script skip a file that already exists in path instead of downloading it again?
# sudo(阿囧) suggested wget -nc
# 60-Second Science
echo
ping -c 1 -W 5 www.ustc.edu.cn > /dev/null 2>&1
if [[ $? -ne 0 ]]
then
echo "No connection to the internet!"
exit 1
else
echo "Check updates for 60-Second Science"
cd ~/Music/Podcast/60-Second\ Science
url="http://rss.sciam.com/sciam/60secsciencepodcast"
last_record=.SSS_rss.xml.last
curr_record=.SSS_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for 60-Second Science"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for 60-Second Science"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Adam and Joe
echo
echo "Check updates for Adam and Joe"
cd ~/Music/Podcast/Adam\ and\ Joe
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... oe/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Adam and Joe"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Adam and Joe"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}


# Business Daily
echo
echo "Check updates for Business Daily"
cd ~/Music/Podcast/Business\ Daily
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ly/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Business Daily"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Business Daily"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# China Reel
echo
echo "Check updates for China Reel"
cd ~/Music/Podcast/China\ Reel\ \(Mandarin\)
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... el/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for China Reel"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for China Reel"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Digital Planet
echo
echo "Check updates for Digital Planet"
cd ~/Music/Podcast/Digital\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... lp/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Digital Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Digital Planet"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Discovery
echo
echo "Check updates for Discovery"
cd ~/Music/Podcast/Discovery
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ry/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Discovery"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Discovery"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Documentaries
echo
echo "Check updates for Documentaries"
cd ~/Music/Podcast/Documentaries
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ve/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Documentaries"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Documentaries"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Forum - A World of Ideas
echo
echo "Check updates for Forum - A World of Ideas"
cd ~/Music/Podcast/Forum\ -\ A\ World\ of\ Ideas
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... um/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Forum - A World of Ideas"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Forum - A World of Ideas"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# From Our Own Correspondent
echo
echo "Check updates for From Our Own Correspondent"
cd ~/Music/Podcast/From\ Our\ Own\ Correspondent
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/radio4/fooc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for From Our Own Correspondent"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for From Our Own Correspondent"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Global Arts and Entertainment
echo
echo "Check updates for Global Arts and Entertainment"
cd ~/Music/Podcast/Global\ Arts\ and\ Entertainment
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ts/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Global Arts and Entertainment"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Global Arts and Entertainment"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Health Check
echo
echo "Check updates for Health Check"
cd ~/Music/Podcast/Health\ Check
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... hc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Health Check"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Health Check"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Interview
echo
echo "Check updates for Interview"
cd ~/Music/Podcast/Interview
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ew/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Interview"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Interview"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Jon Richardson
echo
echo "Check updates for Jon Richardson"
cd ~/Music/Podcast/Jon\ Richardson
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/rh6m/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Jon Richardson"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Jon Richardson"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Music Week
echo
echo "Check updates for Music Week"
cd ~/Music/Podcast/Music\ Week
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ek/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Music Week"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Music Week"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Nature Podcast
echo
echo "Check updates for Nature Podcast"
cd ~/Music/Podcast/Nature\ Podcast
url="http://rss.streamos.com/streamos/rss/ge ... ame=nature"
last_record=.Nature_rss.xml.last
curr_record=.Nature_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Nature Podcast"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Nature Podcast"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# One Planet
echo
echo "Check updates for One Planet"
cd ~/Music/Podcast/One\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... et/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for One Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for One Planet"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science in Action
echo
echo "Check updates for Science in Action"
cd ~/Music/Podcast/Science\ in\ Action
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ia/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science in Action"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Science in Action"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science Magazine Podcast
echo
echo "Check updates for Science Magazine Podcast"
cd ~/Music/Podcast/Science\ Magazine\ Podcast
url="http://www.sciencemag.org/rss/podcast.xml"
last_record=.Science_rss.xml.last
curr_record=.Science_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science Magazine Podcast"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Science Magazine Podcast"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Tom Robinson Introducing...
echo
echo "Check updates for Tom Robinson Introducing..."
cd ~/Music/Podcast/Tom\ Robinson\ Introducing...
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ro/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Tom Robinson Introducing..."
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Tom Robinson Introducing..."
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Book Club
echo
echo "Check updates for World Book Club"
cd ~/Music/Podcast/World\ Book\ Club
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... bc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Book Club"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World Book Club"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Business News
echo
echo "Check updates for World Business News"
cd ~/Music/Podcast/World\ Business\ News
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ws/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Business News"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World Business News"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Have Your Say
echo
echo "Check updates for World Have Your Say"
cd ~/Music/Podcast/World\ Have\ Your\ Say
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ys/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Have Your Say"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World Have Your Say"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World News For Children
echo
echo "Check updates for World News For Children"
cd ~/Music/Podcast/World\ News\ For\ Children
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/bbc7/wnc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World News For Children"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World News For Children"
else
echo "Items added today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}
fi
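Since the per-podcast blocks above differ only in the show name, the directory and the feed URL, they could in principle be folded into a single function. A hedged sketch follows; the function and file names are illustrative, not from the original script, and it adopts the wget -nc suggestion from the thread so files that already exist are skipped.

```shell
#!/bin/bash
# One reusable routine instead of one copied block per podcast.
fetch_podcast() {
    local name=$1 dir=$2 url=$3
    echo "Check updates for $name"
    mkdir -p "$dir" && cd "$dir" || return 1
    # Fetch the feed, extract unique .mp3 links, download only new files.
    wget -q -O feed.xml "$url" &&
        grep -Eo 'http://[^<]+\.mp3' feed.xml | sort -u |
        while read -r mp3; do
            wget -nc -q "$mp3"   # -nc: skip files that already exist
        done
}

# Example calls; the real feed URLs would come from the list above:
#   fetch_podcast "Discovery" ~/Music/Podcast/Discovery "http://.../discovery/rss.xml"
```

One function call per podcast would then replace each 40-line block, and adding a new feed becomes a one-line change.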

Re: 如何在ubuntu下成批下载文件?

发表于 : 2009-05-11 20:13
dynamic0603
dynamic0603 写了:#!/bin/bash
# 感谢中国科大瀚海星云users(laiwei)同学提供脚本的主体部分
# 可能发帖到BBS上时会将某些"号给过滤掉了,请自行检查
# 感谢中国科大瀚海星云sudo(阿囧)解决这个问题:
# 能不能在脚本里加一句,如果path里已经有了要下载的文件,则跳过,不下载那个文件?
# sudo(阿囧)同学提出用wget -nc
# 60-Second Science
echo
ping -c 1 -W 5 http://www.ustc.edu.cn
if [[ $? -ne 0 ]]
then
echo "No connection to the internet!"
exit 0
else
echo "Check updates for 60-Second Science"
cd ~/Music/Podcast/60-Second\ Science
url="http://rss.sciam.com/sciam/60secsciencepodcast"
last_record=.SSS_rss.xml.last
curr_record=.SSS_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for 60-Second Science"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for 60-Second Science"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Adam and Joe
echo
echo "Check updates for Adam and Joe"
cd ~/Music/Podcast/Adam\ and\ Joe
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... oe/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Adam and Joe"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Adam and Joe"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}


# Business Daily
echo
echo "Check updates for Business Daily"
cd ~/Music/Podcast/Business\ Daily
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ly/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Business Daily"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Business Daily"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# China Reel
echo
echo "Check updates for China Reel"
cd ~/Music/Podcast/China\ Reel\ \(Mandarin\)
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... el/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for China Reel"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for China Reel"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Digital Planet
echo
echo "Check updates for Digital Planet"
cd ~/Music/Podcast/Digital\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... lp/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Digital Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Digital Planet"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Discovery
echo
echo "Check updates for Discovery"
cd ~/Music/Podcast/Discovery
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ry/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Discovery"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Discovery"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Documentaries
echo
echo "Check updates for Documentaries"
cd ~/Music/Podcast/Documentaries
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ve/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Documentaries"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Documentaries"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Forum - A World of Ideas
echo
echo "Check updates for Forum - A World of Ideas"
cd ~/Music/Podcast/Forum\ -\ A\ World\ of\ Ideas
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... um/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Forum - A World of Ideas"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Forum - A World of Ideas"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# From Our Own Correspondent
echo
echo "Check updates for From Our Own Correspondent"
cd ~/Music/Podcast/From\ Our\ Own\ Correspondent
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/radio4/fooc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for From Our Own Correspondent"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for From Our Own Correspondent"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Global Arts and Entertainment
echo
echo "Check updates for Global Arts and Entertainment"
cd ~/Music/Podcast/Global\ Arts\ and\ Entertainment
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ts/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Global Arts and Entertainment"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Global Arts and Entertainment"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Health Check
echo
echo "Check updates for Health Check"
cd ~/Music/Podcast/Health\ Check
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... hc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Health Check"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Health Check"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Interview
echo
echo "Check updates for Interview"
cd ~/Music/Podcast/Interview
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ew/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Interview"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Interview"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Jon Richardson
echo
echo "Check updates for Jon Richardson"
cd ~/Music/Podcast/Jon\ Richardson
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/rh6m/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Jon Richardson"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Jon Richardson"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Music Week
echo
echo "Check updates for Music Week"
cd ~/Music/Podcast/Music\ Week
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ek/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Music Week"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Music Week"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Nature Podcast
echo
echo "Check updates for Nature Podcast"
cd ~/Music/Podcast/Nature\ Podcast
url="http://rss.streamos.com/streamos/rss/ge ... ame=nature"
last_record=.Nature_rss.xml.last
curr_record=.Nature_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Nature Podcast"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Nature Podcast"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# One Planet
echo
echo "Check updates for One Planet"
cd ~/Music/Podcast/One\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... et/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for One Planet"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for One Planet"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science in Action
echo
echo "Check updates for Science in Action"
cd ~/Music/Podcast/Science\ in\ Action
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ia/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science in Action"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Science in Action"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science Magazine Podcast
echo
echo "Check updates for Science Magazine Podcast"
cd ~/Music/Podcast/Science\ Magazine\ Podcast
url="http://www.sciencemag.org/rss/podcast.xml"
last_record=.Science_rss.xml.last
curr_record=.Science_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science Magazine Podcast"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Science Magazine Podcast"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Tom Robinson Introducing...
echo
echo "Check updates for Tom Robinson Introducing..."
cd ~/Music/Podcast/Tom\ Robinson\ Introducing...
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ro/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Tom Robinson Introducing..."
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for Tom Robinson Introducing..."
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Book Club
echo
echo "Check updates for World Book Club"
cd ~/Music/Podcast/World\ Book\ Club
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... bc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Book Club"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for World Book Club"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Business News
echo
echo "Check updates for World Business News"
cd ~/Music/Podcast/World\ Business\ News
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ws/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Business News"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for World Business News"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Have Your Say
echo
echo "Check updates for World Have Your Say"
cd ~/Music/Podcast/World\ Have\ Your\ Say
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ys/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Have Your Say"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for World Have Your Say"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World News For Children
echo
echo "Check updates for World News For Children"
cd ~/Music/Podcast/World\ News\ For\ Children
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/bbc7/wnc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World News For Children"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
# $? after a pipeline is the status of sed (always 0), so test the file instead
if [ ! -s .today.update.$PID ];then
echo "Today no new items for World News For Children"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}
fi
The `fi` on the last line of the script must not be removed: it closes the `if` at the very beginning that checks the network connection.
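Since the per-show blocks differ only in the directory name and the feed URL, the whole script could be collapsed into one function and a short list of calls. This is only a sketch under the same assumptions as the original (feeds live under `~/Music/Podcast/<show name>`, mp3 links are extracted straight from the RSS text); the URLs in the comments are placeholders, as in the post above:

```shell
#!/bin/bash
# check_feed NAME URL -- fetch an RSS feed, diff its mp3 list against the
# previous run, and download only the new episodes with wget -c.
# The body uses parentheses (a subshell) so the cd does not leak to the caller.
check_feed() (
    name=$1 url=$2
    dir="$HOME/Music/Podcast/$name"
    curr=.bbc_rss.xml
    last=.bbc_rss.xml.last
    pid=$$

    echo
    echo "Check updates for $name"
    mkdir -p "$dir" && cd "$dir" || return 1

    wget -q -O "$curr.$pid" "$url" || return 1
    if [ -e "$curr" ]; then cp "$curr" "$last"; else touch "$last"; fi
    grep -Eo 'http://[^<]+\.mp3' "$curr.$pid" | sort -u > "$curr"

    # diff prefixes lines present only in the new list with "< "
    diff "$curr" "$last" | sed -n 's/^< //p' > ".today.update.$pid"
    if [ ! -s ".today.update.$pid" ]; then
        echo "Today no updates for $name"
    else
        echo "New items today:" "$(wc -l < ".today.update.$pid")"
        while read -r mp3; do
            wget -c "$mp3"
        done < ".today.update.$pid"
    fi
    rm -f ".today.update.$pid" "$curr.$pid"
)

# One line per show instead of ~40 copied lines, e.g.:
# check_feed "Health Check" "http://downloads.../podcasts/.../rss.xml"
# check_feed "One Planet"   "http://downloads.../podcasts/.../rss.xml"
```

Adding or removing a podcast then means editing one line rather than copying a whole block, and a fix (like the `$?`-after-pipeline issue) only has to be made once.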