How do I batch-download files on Ubuntu?

Installation and upgrade discussion
Board rules
We all know newcomers really are green, like to complain, and carry strong Windows habits. But since they are asking here, we have a duty to help them solve their problems rather than pour cold water on them, flatly dismiss them, or post replies that do nothing to help. A love of sharing and putting people first: that is the very spirit of Ubuntu.
Reply
Avatar
reozen
Posts: 32
Joined: 2009-05-07 9:58

How do I batch-download files on Ubuntu?

#1

Post by reozen » 2009-05-11 17:11

I've subscribed to a lot of BBC podcasts on google.com/ig, all mp3 audio files, and I listen to them often.
But after installing Ubuntu I found that it has no download tool (or at least I haven't found one) with a right-click "download all links" option the way Xunlei (迅雷) has. Please help me; I'm an absolute beginner.
Below is my google/ig page. Every little widget on it has audio files to download.
Attachments
My google/ig page
Every time I use Xubuntu, I feel a happiness that Windows never gave me.
Every time I switch Xubuntu's system language, I feel a fullness that using only Chinese as the system language never gave me.
Avatar
shellex
Posts: 2180
Joined: 2007-02-18 19:33
System: OSX
From: lyric.im
Contact:

Re: How do I batch-download files on Ubuntu?

#2

Post by shellex » 2009-05-11 17:12

Firefox extension: DownThemAll.
Since you ask so sincerely,
I shall tell you with great mercy:
to protect the world from devastation,
to preserve the peace of the world,
carrying out the evils of love and truth,
the lovable, charming villains,
Musashi and Kojirou!
We are Team Rocket, soaring through the galaxy; a white hole, a white tomorrow awaits us. That's right!! Meow~~
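For readers who prefer the terminal over a browser extension, the core of "download all links" is just pulling the URLs out of a saved page and handing the list to wget. A minimal sketch (page.html, list.txt, and the example links are invented for the demo; with a real page you would fetch it first, e.g. wget -O page.html followed by the page URL):

```shell
# Stand-in for a saved web page; a real one would come from wget or Firefox.
cat > page.html <<'EOF'
<a href="http://example.com/ep1.mp3">episode 1</a>
<a href="http://example.com/ep2.mp3">episode 2</a>
EOF

# Pull every mp3 link out of the page into a plain list, one URL per line.
grep -Eo 'http://[^"<]+\.mp3' page.html > list.txt

cat list.txt

# Download everything on the list; -c resumes interrupted downloads.
# (Commented out here so the sketch needs no network.)
# wget -c -i list.txt
```

The same grep pattern works on any HTML or RSS source; only the extension in the pattern changes.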
Avatar
reozen
Posts: 32
Joined: 2009-05-07 9:58

Re: How do I batch-download files on Ubuntu?

#3

Post by reozen » 2009-05-11 17:14

Where do I find it?
Avatar
qiang_liu8183
Forum moderator
Posts: 10699
Joined: 2006-09-10 22:36
System: Arch Debian
From: Beijing

Re: How do I batch-download files on Ubuntu?

#4

Post by qiang_liu8183 » 2009-05-11 17:16

reozen wrote: Where do I find it?
Search for it on Google.
See through, let go, be at ease, follow conditions, recite the Buddha's name.
Sincerity, purity, equality, right awakening, compassion.
Avatar
cafee
Posts: 14
Joined: 2009-05-11 9:38

Re: How do I batch-download files on Ubuntu?

#5

Post by cafee » 2009-05-11 17:19

Open this page: http://addons.mozine.cn/firefox/88/
Click "Install Now" for Fasterfox (this extension speeds up Firefox).
Open this page: http://addons.mozine.cn/firefox/89/
Click "Install Now" for DownThemAll!
Open this page: http://addons.mozine.cn/firefox/377
"Install Now" for DownloadHelper (for downloading flv videos from Youku, Tudou, and other video sites).
Open this page: http://addons.mozine.cn/firefox/373/
"Install Now" for Batch Download (for batch-downloading images).
I just can't help myself; I'm really a hopelessly dirty-minded man~~
Avatar
reozen
Posts: 32
Joined: 2009-05-07 9:58

Re: How do I batch-download files on Ubuntu?

#6

Post by reozen » 2009-05-11 17:24

Are there any technical threads on the forum I could look at?

------------------------------------------

Problem solved; the batch download is already running. Thanks, everyone!
dynamic0603
Posts: 259
Joined: 2008-11-14 20:35

Re: How do I batch-download files on Ubuntu?

#7

Post by dynamic0603 » 2009-05-11 20:09

reozen wrote: I've subscribed to a lot of BBC podcasts on google.com/ig, all mp3 audio files, and I listen to them often.
But after installing Ubuntu I found that it has no download tool (or at least I haven't found one) with a right-click "download all links" option the way Xunlei (迅雷) has. Please help me; I'm an absolute beginner.
Below is my google/ig page. Every little widget on it has audio files to download.
You listen to BBC podcasts too? Here's a script that downloads BBC podcasts automatically; just change the download directories to your own.
Usage:
Put the script in some directory, open a terminal, cd to the directory where you saved it, run chmod 777 BBC_Podcast, then run ./BBC_Podcast.
Note: you need to edit the download directories in the script yourself. You can also change the podcast URLs, or add new ones by following the existing pattern.
Finally, you can put the script in /usr/local/bin/: in a terminal, run sudo cp BBC_Podcast /usr/local/bin/.
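The install-and-run steps above, spelled out end to end (the one-line BBC_Podcast below is only a stand-in for the real script; chmod +x would also do in place of 777):

```shell
# Work in a scratch directory for this demo.
cd "$(mktemp -d)"

# Stand-in for the real BBC_Podcast script from this thread.
printf '#!/bin/bash\necho "podcast check would run here"\n' > BBC_Podcast

chmod 777 BBC_Podcast      # make it executable (the post's choice; chmod +x suffices)
./BBC_Podcast              # run it from the current directory

# Optional: put it on the PATH so it runs from anywhere (needs root):
# sudo cp BBC_Podcast /usr/local/bin/
```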
dynamic0603
Posts: 259
Joined: 2008-11-14 20:35

Re: How do I batch-download files on Ubuntu?

#9

Post by dynamic0603 » 2009-05-11 20:11

#!/bin/bash
# Thanks to users(laiwei) of USTC's Hanhai Xingyun BBS for the main body of this script.
# Posting to a BBS may strip some quote (") characters; please double-check them.
# Thanks to sudo(阿囧) of USTC's Hanhai Xingyun BBS for solving this request:
#   "Could the script skip a file that already exists in the download path?"
#   He suggested wget -nc.

echo
ping -c 1 -W 5 www.ustc.edu.cn
if [[ $? -ne 0 ]]; then
    echo "No connection to the internet!"
    exit 1
fi

# Check one feed and download any new episodes.
#   $1 = podcast name   $2 = download directory
#   $3 = RSS feed URL   $4 = record-file basename
# The body runs in a subshell so the cd does not leak into the caller.
check_feed () (
    name=$1 dir=$2 url=$3 rec=$4
    curr_record=$rec
    last_record=$rec.last
    PID=$$

    echo
    echo "Check updates for $name"
    mkdir -p "$dir" && cd "$dir" || exit

    wget -O "$curr_record.$PID" -q "$url" && {
        if [ -e "$curr_record" ]; then
            cp "$curr_record" "$last_record"
        else
            touch "$last_record"
        fi
        # Every unique mp3 URL currently in the feed.
        grep -Eo "http://[^<]+\.mp3" "$curr_record.$PID" | sort -u > "$curr_record"
        if diff "$curr_record" "$last_record" > ".diff.$PID"; then
            echo "Today no updates for $name"
        else
            # Lines only in the current list ("< url" in diff output) are new.
            grep "^<" ".diff.$PID" | sed 's/^< //' > ".today.update.$PID"
            if [ ! -s ".today.update.$PID" ]; then
                echo "Today no added items for $name"
            else
                echo "Today's added items: $(wc -l < ".today.update.$PID")"
                cat ".today.update.$PID"
                while read -r mp3; do
                    wget -c "$mp3"
                done < ".today.update.$PID"
            fi
        fi
        rm -f wget* ".diff.$PID" ".today.update.$PID" "$curr_record.$PID" &>/dev/null
    }
)

base=~/Music/Podcast

check_feed "60-Second Science"        "$base/60-Second Science"        "http://rss.sciam.com/sciam/60secsciencepodcast" .SSS_rss.xml
check_feed "Adam and Joe"             "$base/Adam and Joe"             "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... oe/rss.xml" .bbc_rss.xml
check_feed "Business Daily"           "$base/Business Daily"           "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ly/rss.xml" .bbc_rss.xml
check_feed "China Reel"               "$base/China Reel (Mandarin)"    "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... el/rss.xml" .bbc_rss.xml
check_feed "Digital Planet"           "$base/Digital Planet"           "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... lp/rss.xml" .bbc_rss.xml
check_feed "Discovery"                "$base/Discovery"                "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ry/rss.xml" .bbc_rss.xml
check_feed "Documentaries"            "$base/Documentaries"            "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ve/rss.xml" .bbc_rss.xml
check_feed "Forum - A World of Ideas" "$base/Forum - A World of Ideas" "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... um/rss.xml" .bbc_rss.xml
check_feed "From Our Own Correspondent" "$base/From Our Own Correspondent" "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/radio4/fooc/rss.xml" .bbc_rss.xml
check_feed "Global Arts and Entertainment" "$base/Global Arts and Entertainment" "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ts/rss.xml" .bbc_rss.xml
check_feed "Health Check"             "$base/Health Check"             "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... hc/rss.xml" .bbc_rss.xml
check_feed "Interview"                "$base/Interview"                "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ew/rss.xml" .bbc_rss.xml
check_feed "Jon Richardson"           "$base/Jon Richardson"           "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/rh6m/rss.xml" .bbc_rss.xml
check_feed "Music Week"               "$base/Music Week"               "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ek/rss.xml" .bbc_rss.xml
check_feed "Nature Podcast"           "$base/Nature Podcast"           "http://rss.streamos.com/streamos/rss/ge ... ame=nature" .Nature_rss.xml
check_feed "One Planet"               "$base/One Planet"               "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... et/rss.xml" .bbc_rss.xml
check_feed "Science in Action"        "$base/Science in Action"        "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ia/rss.xml" .bbc_rss.xml
check_feed "Science Magazine Podcast" "$base/Science Magazine Podcast" "http://www.sciencemag.org/rss/podcast.xml" .Science_rss.xml
check_feed "Tom Robinson Introducing..." "$base/Tom Robinson Introducing..." "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ro/rss.xml" .bbc_rss.xml
check_feed "World Book Club"          "$base/World Book Club"          "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... bc/rss.xml" .bbc_rss.xml
check_feed "World Business News"      "$base/World Business News"      "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ws/rss.xml" .bbc_rss.xml
check_feed "World Have Your Say"      "$base/World Have Your Say"      "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ys/rss.xml" .bbc_rss.xml
check_feed "World News For Children"  "$base/World News For Children"  "http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/bbc7/wnc/rss.xml" .bbc_rss.xml
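The per-feed logic in the script boils down to three steps: extract the mp3 URLs from the feed, compare against last time, and download whatever is new. A runnable sketch of that idea on a canned RSS snippet (no network needed; feed.xml, current.txt, last.txt, and new.txt are invented names for the demo; comm stands in for the script's diff-plus-sed step, and wget -nc is the skip-existing-files flag mentioned in the script's comments):

```shell
# Canned RSS snippet standing in for what wget fetches from the feed URL.
cat > feed.xml <<'EOF'
<item><enclosure url="http://example.com/a.mp3"/></item>
<item><enclosure url="http://example.com/b.mp3"/></item>
<item><enclosure url="http://example.com/a.mp3"/></item>
EOF

# Every unique mp3 URL currently in the feed, sorted.
grep -Eo 'http://[^"<]+\.mp3' feed.xml | sort -u > current.txt

# Empty download history, so every URL counts as new.
touch last.txt

# Lines in current.txt that are not in last.txt = new episodes.
comm -23 current.txt last.txt > new.txt

cat new.txt

# Fetch the new episodes; -nc skips any file already on disk.
# (Commented out so the sketch needs no network.)
# wget -nc -i new.txt
```

comm -23 prints lines unique to the first file, which is why both inputs must be sorted; the script achieves the same effect with diff and sed.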
dynamic0603
帖子: 259
注册时间: 2008-11-14 20:35

Re: 如何在ubuntu下成批下载文件?

#10

帖子 dynamic0603 » 2009-05-11 20:13

dynamic0603 写了:#!/bin/bash
# 感谢中国科大瀚海星云users(laiwei)同学提供脚本的主体部分
# 可能发帖到BBS上时会将某些"号给过滤掉了,请自行检查
# 感谢中国科大瀚海星云sudo(阿囧)解决这个问题:
# 能不能在脚本里加一句,如果path里已经有了要下载的文件,则跳过,不下载那个文件?
# sudo(阿囧)同学提出用wget -nc
# 60-Second Science
echo
ping -c 1 -W 5 http://www.ustc.edu.cn
if [[ $? -ne 0 ]]
then
echo "No connection to the internet!"
exit 0
else
echo "Check updates for 60-Second Science"
cd ~/Music/Podcast/60-Second\ Science
url="http://rss.sciam.com/sciam/60secsciencepodcast"
last_record=.SSS_rss.xml.last
curr_record=.SSS_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for 60-Second Science"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for 60-Second Science"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Adam and Joe
echo
echo "Check updates for Adam and Joe"
cd ~/Music/Podcast/Adam\ and\ Joe
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... oe/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Adam and Joe"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Adam and Joe"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}


# Business Daily
echo
echo "Check updates for Business Daily"
cd ~/Music/Podcast/Business\ Daily
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ly/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Business Daily"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Business Daily"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# China Reel
echo
echo "Check updates for China Reel"
cd ~/Music/Podcast/China\ Reel\ \(Mandarin\)
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... el/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for China Reel"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for China Reel"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Digital Planet
echo
echo "Check updates for Digital Planet"
cd ~/Music/Podcast/Digital\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... lp/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Digital Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Digital Planet"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Discovery
echo
echo "Check updates for Discovery"
cd ~/Music/Podcast/Discovery
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ry/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Discovery"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Discovery"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Documentaries
echo
echo "Check updates for Documentaries"
cd ~/Music/Podcast/Documentaries
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ve/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Documentaries"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Documentaries"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Forum - A World of Ideas
echo
echo "Check updates for Forum - A World of Ideas"
cd ~/Music/Podcast/Forum\ -\ A\ World\ of\ Ideas
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... um/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Forum - A World of Ideas"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Forum - A World of Ideas"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# From Our Own Correspondent
echo
echo "Check updates for From Our Own Correspondent"
cd ~/Music/Podcast/From\ Our\ Own\ Correspondent
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/radio4/fooc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for From Our Own Correspondent"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for From Our Own Correspondent"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Global Arts and Entertainment
echo
echo "Check updates for Global Arts and Entertainment"
cd ~/Music/Podcast/Global\ Arts\ and\ Entertainment
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ts/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Global Arts and Entertainment"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Global Arts and Entertainment"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Health Check
echo
echo "Check updates for Health Check"
cd ~/Music/Podcast/Health\ Check
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... hc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Health Check"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Health Check"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Interview
echo
echo "Check updates for Interview"
cd ~/Music/Podcast/Interview
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ew/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Interview"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Interview"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Jon Richardson
echo
echo "Check updates for Jon Richardson"
cd ~/Music/Podcast/Jon\ Richardson
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/rh6m/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Jon Richardson"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Jon Richardson"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Music Week
echo
echo "Check updates for Music Week"
cd ~/Music/Podcast/Music\ Week
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ek/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Music Week"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Music Week"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Nature Podcast
echo
echo "Check updates for Nature Podcast"
cd ~/Music/Podcast/Nature\ Podcast
url="http://rss.streamos.com/streamos/rss/ge ... ame=nature"
last_record=.Nature_rss.xml.last
curr_record=.Nature_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Nature Podcast"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Nature Podcast"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# One Planet
echo
echo "Check updates for One Planet"
cd ~/Music/Podcast/One\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... et/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for One Planet"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for One Planet"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science in Action
echo
echo "Check updates for Science in Action"
cd ~/Music/Podcast/Science\ in\ Action
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ia/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Science in Action"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Science in Action"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science Magazine Podcast
echo
echo "Check updates for Science Magazine Podcast"
cd ~/Music/Podcast/Science\ Magazine\ Podcast
url="http://www.sciencemag.org/rss/podcast.xml"
last_record=.Science_rss.xml.last
curr_record=.Science_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Science Magazine Podcast"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Science Magazine Podcast"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Tom Robinson Introducing...
echo
echo "Check updates for Tom Robinson Introducing..."
cd ~/Music/Podcast/Tom\ Robinson\ Introducing...
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6mu ... ro/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Tom Robinson Introducing..."
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for Tom Robinson Introducing..."
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Book Club
echo
echo "Check updates for World Book Club"
cd ~/Music/Podcast/World\ Book\ Club
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... bc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for World Book Club"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for World Book Club"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Business News
echo
echo "Check updates for World Business News"
cd ~/Music/Podcast/World\ Business\ News
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ws/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for World Business News"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for World Business News"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Have Your Say
echo
echo "Check updates for World Have Your Say"
cd ~/Music/Podcast/World\ Have\ Your\ Say
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/wor ... ys/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for World Have Your Say"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for World Have Your Say"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World News For Children
echo
echo "Check updates for World News For Children"
cd ~/Music/Podcast/World\ News\ For\ Children
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/bbc7/wnc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for World News For Children"
else
grep -E "^<" .diff.$PID | sed 's/^< *//' >.today.update.$PID
if [ ! -s .today.update.$PID ];then
echo "No new items today for World News For Children"
else
echo "New items today:" `wc -l < .today.update.$PID`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}
fi
The `fi` on the last line of the script must not be removed: it closes the `if` at the very beginning that checks for a working network connection.
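Since every podcast section above repeats the same fetch/diff/download logic, the script could be collapsed into one function called once per feed. This is only a sketch under the same assumptions as the script (record files named `.bbc_rss.xml`/`.bbc_rss.xml.last`, directories under `~/Music/Podcast`); the names `extract_mp3_urls` and `check_podcast` are illustrative, not from the original, and the example URL at the bottom is a placeholder since the real feed URLs are truncated above.

```shell
#!/bin/bash
# Sketch: one reusable function instead of one copy-pasted block per podcast.

extract_mp3_urls() {
    # Pull the unique .mp3 URLs out of an RSS file, sorted for diffing.
    grep -Eo "http://[^<]+\.mp3" "$1" | sort -u
}

check_podcast() {
    # $1 = podcast directory name, $2 = RSS feed URL
    local dir="$HOME/Music/Podcast/$1" url="$2"
    local curr=.bbc_rss.xml last=.bbc_rss.xml.last tmp=.rss.$$
    echo
    echo "Check updates for $1"
    mkdir -p "$dir" && cd "$dir" || return 1
    wget -q -O "$tmp" "$url" || return 1
    if [ -e "$curr" ]; then cp "$curr" "$last"; else touch "$last"; fi
    extract_mp3_urls "$tmp" > "$curr"
    # comm -23 prints lines only in the new record, i.e. new episodes
    # (both inputs are already sorted by extract_mp3_urls).
    comm -23 "$curr" "$last" > .today.update.$$
    if [ ! -s .today.update.$$ ]; then
        echo "No new items today for $1"
    else
        echo "New items today: $(wc -l < .today.update.$$)"
        while read -r mp3; do wget -c "$mp3"; done < .today.update.$$
    fi
    rm -f "$tmp" .today.update.$$
}

# Example call (placeholder URL -- substitute the real feed address):
# check_podcast "Discovery" "http://example.invalid/podcasts/discovery/rss.xml"
```

Each new feed then costs one line instead of ~40, and a fix to the logic only has to be made once.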