All times are UTC+8



Post new topic  Reply to topic  [ 10 posts ]
Author  Message
Post #1
Subject: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 17:11

Joined: 2009-05-07 9:58
Posts: 32
Thanks given: 0
Thanks received: 0
I subscribe to a lot of BBC podcasts on google.com/ig, all mp3 audio files, and I listen to them often.
But since installing Ubuntu I've found that it doesn't come with (or at least I haven't found) a download tool like Xunlei's, where you right-click a page and get a "download all links" option. Please help me; I'm an absolute beginner.
Below is my google/ig page. Each little gadget on it has audio files for download.


Attachment:
File comment: my google/ig page
Screenshot-iGoogle - Mozilla Firefox.png [ 297.75 KiB | viewed 208 times ]



_________________
Every time I use Xubuntu I feel a happiness that I never had with Windows.
Every time I switch Xubuntu's system language I feel a fulfillment that using only Chinese as the system language never gave me.
Post #2
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 17:12

Joined: 2007-02-18 19:33
Posts: 2180
Location: lyric.im
OS: OSX
Thanks given: 0
Thanks received: 1
The Firefox extension DownThemAll.


_________________
Since you asked so sincerely,
I will answer with great mercy:
To prevent the world from destruction,
To protect the peace of the world,
Carrying out the evils of love and truth,
The lovable, charming villains,
Musashi and Kojirou,
Team Rocket blasting across the galaxy; a white tomorrow of the white hole awaits us. That's right!! Meow~~


Post #3
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 17:14

Joined: 2009-05-07 9:58
Posts: 32
Thanks given: 0
Thanks received: 0
Where do I find it?




Post #4
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 17:16

Joined: 2006-09-10 22:36
Posts: 10663
Location: Beijing
Thanks given: 1
Thanks received: 16
reozen wrote:
Where do I find it?

Search for it on Google.


_________________
See through, let go, be at ease, go with conditions, recite the Buddha's name
Sincerity, purity, equality, right awareness, compassion


Post #5
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 17:19

Joined: 2009-05-11 9:38
Posts: 14
Thanks given: 1
Thanks received: 0
Open http://addons.mozine.cn/firefox/88/
Click "Install Now" to install Fasterfox (an extension that speeds up Firefox).
Open http://addons.mozine.cn/firefox/89/
Click "Install Now" to install DownThemAll!
Open http://addons.mozine.cn/firefox/377
Click "Install Now" to install DownloadHelper (for downloading flv videos from Youku, Tudou and other video sites).
Open http://addons.mozine.cn/firefox/373/
Click "Install Now" to install Batch Download (for batch-downloading images).
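If you would rather not use Firefox extensions, the same "download all links" idea is a short shell pipeline: save the page, grep out the mp3 links, and feed the list to wget. A minimal sketch, where the inline HTML and example.com URLs are stand-ins for a real saved copy of the iGoogle page:

```shell
# Stand-in for a page saved from the browser (File > Save Page As...).
cat > /tmp/page.html <<'EOF'
<a href="http://example.com/ep1.mp3">Episode 1</a>
<a href="http://example.com/ep2.mp3">Episode 2</a>
EOF

# Extract every .mp3 link, de-duplicate, and write the list to a file.
grep -Eo 'http://[^"<]+\.mp3' /tmp/page.html | sort -u > /tmp/mp3list.txt
cat /tmp/mp3list.txt

# To actually download the list (wget -i reads URLs from a file, -c resumes):
# wget -c -i /tmp/mp3list.txt
```

This is essentially what the script later in this thread does for each RSS feed.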


_________________
I can't restrain myself; I'm honestly an extremely lecherous man~~


Post #6
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 17:24

Joined: 2009-05-07 9:58
Posts: 32
Thanks given: 0
Thanks received: 0
Are there any technical threads on this forum I could consult?

------------------------------------------

Problem solved: batch downloading is underway. Thanks, everyone!




Post #7
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 20:09

Joined: 2008-11-14 20:35
Posts: 259
Thanks given: 0
Thanks received: 1
reozen wrote:
I subscribe to a lot of BBC podcasts on google.com/ig, all mp3 audio files, and I listen to them often.
But since installing Ubuntu I've found that it doesn't come with (or at least I haven't found) a download tool like Xunlei's, where you right-click a page and get a "download all links" option. Please help me; I'm an absolute beginner.
Below is my google/ig page. Each little gadget on it has audio files for download.

You listen to BBC podcasts too? Here is a script that downloads BBC podcasts automatically; just change the podcast download directories to your own.
Usage:
Put the script in some directory, open a terminal, cd to that directory, run chmod 777 BBC_Podcast, then run ./BBC_Podcast.
Note: you need to edit the download directories inside the script yourself. You can also change the podcast URLs or add more; just follow the existing pattern.
Finally, you can put the script in /usr/local/bin/ by running sudo cp BBC_Podcast /usr/local/bin/ in a terminal.
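The steps above, run end to end. This sketch uses a throwaway stand-in for BBC_Podcast in /tmp so it can be tried anywhere; replace it with the real script from the posts below. Note that chmod +x is enough to make a file executable (the chmod 777 in the post also works, but grants more than needed):

```shell
# Create a placeholder BBC_Podcast just for the walkthrough.
mkdir -p /tmp/podcast-demo && cd /tmp/podcast-demo
printf '#!/bin/bash\necho "podcast script ran"\n' > BBC_Podcast

chmod +x BBC_Podcast   # make it executable
./BBC_Podcast          # run it from the current directory

# Optional, as in the post: put it on your PATH so you can run it from anywhere.
# sudo cp BBC_Podcast /usr/local/bin/
```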


Post #9
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 20:11

Joined: 2008-11-14 20:35
Posts: 259
Thanks given: 0
Thanks received: 1
#!/bin/bash
# Thanks to users (laiwei) of USTC's Hanhai Xingyun BBS for the main body of this script
# Some quote marks may have been filtered out when posting to the BBS; please check for yourself
# Thanks to sudo (A-Jiong) of USTC's Hanhai Xingyun BBS for solving this problem:
# could the script skip a file that already exists in path instead of downloading it again?
# sudo (A-Jiong) suggested using wget -nc
# 60-Second Science
echo
ping -c 1 -W 5 www.ustc.edu.cn
if [[ $? -ne 0 ]]
then
echo "No connection to the internet!"
exit 1
else
echo "Check updates for 60-Second Science"
cd ~/Music/Podcast/60-Second\ Science
url="http://rss.sciam.com/sciam/60secsciencepodcast"
last_record=.SSS_rss.xml.last
curr_record=.SSS_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for 60-Second Science"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for 60-Second Science"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Adam and Joe
echo
echo "Check updates for Adam and Joe"
cd ~/Music/Podcast/Adam\ and\ Joe
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/adamandjoe/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Adam and Joe"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Adam and Joe"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}


# Business Daily
echo
echo "Check updates for Business Daily"
cd ~/Music/Podcast/Business\ Daily
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/bizdaily/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Business Daily"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Business Daily"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# China Reel
echo
echo "Check updates for China Reel"
cd ~/Music/Podcast/China\ Reel\ \(Mandarin\)
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/chinareel/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for China Reel"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for China Reel"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Digital Planet
echo
echo "Check updates for Digital Planet"
cd ~/Music/Podcast/Digital\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/digitalp/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Digital Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Digital Planet"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Discovery
echo
echo "Check updates for Discovery"
cd ~/Music/Podcast/Discovery
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/discovery/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Discovery"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Discovery"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Documentaries
echo
echo "Check updates for Documentaries"
cd ~/Music/Podcast/Documentaries
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/docarchive/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Documentaries"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Documentaries"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Forum - A World of Ideas
echo
echo "Check updates for Forum - A World of Ideas"
cd ~/Music/Podcast/Forum\ -\ A\ World\ of\ Ideas
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/forum/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Forum - A World of Ideas"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Forum - A World of Ideas"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# From Our Own Correspondent
echo
echo "Check updates for From Our Own Correspondent"
cd ~/Music/Podcast/From\ Our\ Own\ Correspondent
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/radio4/fooc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for From Our Own Correspondent"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for From Our Own Correspondent"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Global Arts and Entertainment
echo
echo "Check updates for Global Arts and Entertainment"
cd ~/Music/Podcast/Global\ Arts\ and\ Entertainment
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/globalarts/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Global Arts and Entertainment"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Global Arts and Entertainment"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Health Check
echo
echo "Check updates for Health Check"
cd ~/Music/Podcast/Health\ Check
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/healthc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Health Check"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Health Check"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Interview
echo
echo "Check updates for Interview"
cd ~/Music/Podcast/Interview
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/interview/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Interview"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Interview"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Jon Richardson
echo
echo "Check updates for Jon Richardson"
cd ~/Music/Podcast/Jon\ Richardson
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/rh6m/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Jon Richardson"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Jon Richardson"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Music Week
echo
echo "Check updates for Music Week"
cd ~/Music/Podcast/Music\ Week
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/musicweek/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Music Week"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Music Week"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Nature Podcast
echo
echo "Check updates for Nature Podcast"
cd ~/Music/Podcast/Nature\ Podcast
url="http://rss.streamos.com/streamos/rss/genfeed.php?feedid=360&groupname=nature"
last_record=.Nature_rss.xml.last
curr_record=.Nature_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "No updates today for Nature Podcast"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Nature Podcast"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# One Planet
echo
echo "Check updates for One Planet"
cd ~/Music/Podcast/One\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/oneplanet/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for One Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for One Planet"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science in Action
echo
echo "Check updates for Science in Action"
cd ~/Music/Podcast/Science\ in\ Action
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/scia/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science in Action"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Science in Action"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science Magazine Podcast
echo
echo "Check updates for Science Magazine Podcast"
cd ~/Music/Podcast/Science\ Magazine\ Podcast
url="http://www.sciencemag.org/rss/podcast.xml"
last_record=.Science_rss.xml.last
curr_record=.Science_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science Magazine Podcast"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Science Magazine Podcast"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Tom Robinson Introducing...
echo
echo "Check updates for Tom Robinson Introducing..."
cd ~/Music/Podcast/Tom\ Robinson\ Introducing...
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/trintro/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Tom Robinson Introducing..."
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for Tom Robinson Introducing..."
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Book Club
echo
echo "Check updates for World Book Club"
cd ~/Music/Podcast/World\ Book\ Club
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/wbc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Book Club"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for World Book Club"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Business News
echo
echo "Check updates for World Business News"
cd ~/Music/Podcast/World\ Business\ News
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/wbnews/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Business News"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for World Business News"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Have Your Say
echo
echo "Check updates for World Have Your Say"
cd ~/Music/Podcast/World\ Have\ Your\ Say
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/whys/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Have Your Say"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for World Have Your Say"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World News For Children
echo
echo "Check updates for World News For Children"
cd ~/Music/Podcast/World\ News\ For\ Children
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/bbc7/wnc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World News For Children"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "No new items today for World News For Children"
else
echo "New items today:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}
fi
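Every block in the script above repeats the same fetch, diff, download pattern, so it can be collapsed into one function. This is a sketch of such a refactor, not the original author's code: fetch_podcast and its arguments are names introduced here, it uses comm instead of diff piped through grep and sed (which avoids testing the exit status of the wrong pipeline stage), and it uses wget -nc as suggested in the script's header comments so files already on disk are skipped.

```shell
#!/bin/bash
# fetch_podcast NAME RSS_URL TARGET_DIR
# Downloads the feed, diffs the mp3 link list against the previous run,
# and fetches only the newly listed episodes.
fetch_podcast() {
    local name=$1 url=$2 dir=$3
    local curr=.rss.xml last=.rss.xml.last
    mkdir -p "$dir" && cd "$dir" || return 1
    wget -q -O "$curr.new" "$url" || { echo "Fetch failed for $name"; return 1; }
    # Keep the previous link list so we can tell which episodes are new.
    if [ -e "$curr" ]; then cp "$curr" "$last"; else touch "$last"; fi
    grep -Eo 'http://[^<]+\.mp3' "$curr.new" | sort -u > "$curr"
    rm -f "$curr.new"
    # comm -13 prints lines only in the second (newer) file: the added episodes.
    local new_items
    new_items=$(comm -13 "$last" "$curr")
    if [ -z "$new_items" ]; then
        echo "No new items for $name"
    else
        echo "$new_items" | while read -r mp3; do wget -nc "$mp3"; done
    fi
}

# Example calls, one line per feed instead of ~40:
# fetch_podcast "Discovery"    "<rss url from the blocks above>" ~/Music/Podcast/Discovery
# fetch_podcast "Health Check" "<rss url from the blocks above>" ~/Music/Podcast/Health\ Check
```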


Post #10
Subject: Re: How do I batch-download files on Ubuntu?
Posted: 2009-05-11 20:13

Joined: 2008-11-14 20:35
Posts: 259
Thanks given: 0
Thanks received: 1
dynamic0603 写道:
#!/bin/bash
# 感谢中国科大瀚海星云users(laiwei)同学提供脚本的主体部分
# 可能发帖到BBS上时会将某些"号给过滤掉了,请自行检查
# 感谢中国科大瀚海星云sudo(阿囧)解决这个问题:
# 能不能在脚本里加一句,如果path里已经有了要下载的文件,则跳过,不下载那个文件?
# sudo(阿囧)同学提出用wget -nc
# 60-Second Science
echo
ping -c 1 -W 5 http://www.ustc.edu.cn
if [[ $? -ne 0 ]]
then
echo "No connection to the internet!"
exit 0
else
echo "Check updates for 60-Second Science"
cd ~/Music/Podcast/60-Second\ Science
url="http://rss.sciam.com/sciam/60secsciencepodcast"
last_record=.SSS_rss.xml.last
curr_record=.SSS_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for 60-Second Science"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for 60-Second Science"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Adam and Joe
echo
echo "Check updates for Adam and Joe"
cd ~/Music/Podcast/Adam\ and\ Joe
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/adamandjoe/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Adam and Joe"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Adam and Joe"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}


# Business Daily
echo
echo "Check updates for Business Daily"
cd ~/Music/Podcast/Business\ Daily
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/bizdaily/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Business Daily"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Business Daily"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# China Reel
echo
echo "Check updates for China Reel"
cd ~/Music/Podcast/China\ Reel\ \(Mandarin\)
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/chinareel/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for China Reel"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for China Reel"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Digital Planet
echo
echo "Check updates for Digital Planet"
cd ~/Music/Podcast/Digital\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/digitalp/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Digital Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Digital Planet"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Discovery
echo
echo "Check updates for Discovery"
cd ~/Music/Podcast/Discovery
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/discovery/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Discovery"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Toddy no add item for Discovery"
else
echo "Today add item:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Documentaries
echo
echo "Check updates for Documentaries"
cd ~/Music/Podcast/Documentaries
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/docarchive/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Documentaries"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Documentaries"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Forum - A World of Ideas
echo
echo "Check updates for Forum - A World of Ideas"
cd ~/Music/Podcast/Forum\ -\ A\ World\ of\ Ideas
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/forum/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Forum - A World of Ideas"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Forum - A World of Ideas"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# From Our Own Correspondent
echo
echo "Check updates for From Our Own Correspondent"
cd ~/Music/Podcast/From\ Our\ Own\ Correspondent
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/radio4/fooc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for From Our Own Correspondent"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for From Our Own Correspondent"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Global Arts and Entertainment
echo
echo "Check updates for Global Arts and Entertainment"
cd ~/Music/Podcast/Global\ Arts\ and\ Entertainment
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/globalarts/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Global Arts and Entertainment"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Global Arts and Entertainment"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Health Check
echo
echo "Check updates for Health Check"
cd ~/Music/Podcast/Health\ Check
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/healthc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Health Check"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Health Check"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Interview
echo
echo "Check updates for Interview"
cd ~/Music/Podcast/Interview
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/interview/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Interview"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Interview"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Jon Richardson
echo
echo "Check updates for Jon Richardson"
cd ~/Music/Podcast/Jon\ Richardson
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/rh6m/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Jon Richardson"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Jon Richardson"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Music Week
echo
echo "Check updates for Music Week"
cd ~/Music/Podcast/Music\ Week
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/musicweek/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Music Week"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Music Week"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Nature Podcast
echo
echo "Check updates for Nature Podcast"
cd ~/Music/Podcast/Nature\ Podcast
url="http://rss.streamos.com/streamos/rss/genfeed.php?feedid=360&groupname=nature"
last_record=.Nature_rss.xml.last
curr_record=.Nature_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Nature Podcast"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Nature Podcast"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# One Planet
echo
echo "Check updates for One Planet"
cd ~/Music/Podcast/One\ Planet
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/oneplanet/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for One Planet"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for One Planet"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science in Action
echo
echo "Check updates for Science in Action"
cd ~/Music/Podcast/Science\ in\ Action
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/scia/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science in Action"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Science in Action"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Science Magazine Podcast
echo
echo "Check updates for Science Magazine Podcast"
cd ~/Music/Podcast/Science\ Magazine\ Podcast
url="http://www.sciencemag.org/rss/podcast.xml"
last_record=.Science_rss.xml.last
curr_record=.Science_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Science Magazine Podcast"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Science Magazine Podcast"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# Tom Robinson Introducing...
echo
echo "Check updates for Tom Robinson Introducing..."
cd ~/Music/Podcast/Tom\ Robinson\ Introducing...
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/6music/trintro/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for Tom Robinson Introducing..."
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for Tom Robinson Introducing..."
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Book Club
echo
echo "Check updates for World Book Club"
cd ~/Music/Podcast/World\ Book\ Club
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/wbc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Book Club"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World Book Club"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Business News
echo
echo "Check updates for World Business News"
cd ~/Music/Podcast/World\ Business\ News
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/wbnews/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Business News"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World Business News"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World Have Your Say
echo
echo "Check updates for World Have Your Say"
cd ~/Music/Podcast/World\ Have\ Your\ Say
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/worldservice/whys/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World Have Your Say"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World Have Your Say"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}

# World News For Children
echo
echo "Check updates for World News For Children"
cd ~/Music/Podcast/World\ News\ For\ Children
url="http://downloads.囗囗囗囗囗囗囗囗囗/podcasts/bbc7/wnc/rss.xml"
last_record=.bbc_rss.xml.last
curr_record=.bbc_rss.xml
path=.

PID=$$

wget -O ${curr_record}.$PID -q $url && {
if [ -e $curr_record ];then
cp $curr_record $last_record
else
touch $last_record
fi
grep -Eo "http://[^<]+\.mp3" ${curr_record}.$PID |sort|uniq > ${curr_record}
diff $curr_record $last_record >.diff.$PID
if [ $? -eq 0 ];then
echo "Today no updates for World News For Children"
else
grep -E "<" .diff.$PID | sed 's/<//' >.today.update.$PID
if [ $? -eq 1 ];then
echo "Today no new items for World News For Children"
else
echo "Today new items:" `wc -l .today.update.$PID|awk '{print $1}'`
cat .today.update.$PID
for mp3 in `cat .today.update.$PID`;do
(
mkdir -p $path && cd $path && wget -c $mp3
)
done
fi
fi
rm -f wget* .diff.$PID .today.update.$PID ${curr_record}.$PID &>/dev/null
}
fi

The "fi" on the script's last line must not be removed: it matches the "if" at the very beginning of the script that checks the network connection.
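The script repeats the same ~25-line block once per podcast; only the name, directory, and feed URL change. The core step, "find the mp3 links that are not in the history file yet", could be pulled into one helper function. A minimal sketch, assuming GNU grep (as on Ubuntu); the function name `new_mp3_links` and the file names in it are illustrative, not part of the original script:

```shell
#!/bin/sh
# new_mp3_links FEED RECORD
# Prints the mp3 links found in FEED that are not yet listed in RECORD
# (one URL per line), mirroring the grep/sort/diff logic of the script above.
new_mp3_links() {
    feed=$1
    record=$2
    [ -e "$record" ] || touch "$record"   # first run: empty download history
    # Extract mp3 URLs the same way the original does, deduplicate them,
    # then keep only the lines absent from the history file.
    # (GNU grep with an empty -f file matches nothing, so -v passes all lines.)
    grep -Eo 'http://[^<]+\.mp3' "$feed" | sort -u | grep -Fxv -f "$record"
}
```

Each per-podcast block would then shrink to a `wget` of the feed followed by one call to this helper, so a fix (like the "Toddy" typos) only has to be made once.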

