Getting Started with Python Web Scraping: Scraping Images, Articles, and Web Pages

I. First, let's see how simply Python can scrape a web page

1. Preparation
This project uses the BeautifulSoup4 and chardet modules, which are third-party packages; if you don't have them, install them with pip. I did the installation through PyCharm, so below is a quick walkthrough of installing chardet and BeautifulSoup4 in PyCharm.

In PyCharm's settings, follow the steps shown in the screenshot below.
(screenshot omitted)
Search for the library you need, as shown below. Here we need chardet, so just search for it and click Install Package; then do the same for BeautifulSoup4.
(screenshot omitted)
Once installed, the packages appear in the installed list, which means our web-scraping libraries were installed successfully.
(screenshot omitted)
II. Going from shallow to deep, let's scrape a web page first
As an example, we'll scrape the Jianshu homepage: http://www.jianshu.com/
(screenshot omitted)
Since the scraped HTML document is quite long, here is just a short excerpt:

```html
<!DOCTYPE html>
<!--[if IE 6]><html class="ie lt-ie8"><![endif]-->
<!--[if IE 7]><html class="ie lt-ie8"><![endif]-->
<!--[if IE 8]><html class="ie ie8"><![endif]-->
<!--[if IE 9]><html class="ie ie9"><![endif]-->
<!--[if !IE]><!--> <html> <!--<![endif]-->

<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=Edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">

    <!-- Start of Baidu Transcode -->
    <meta http-equiv="Cache-Control" content="no-siteapp" />
    <meta http-equiv="Cache-Control" content="no-transform" />
    <meta name="applicable-device" content="pc,mobile">
    <meta name="MobileOptimized" content="width"/>
    <meta name="HandheldFriendly" content="true"/>
    <meta name="mobile-agent" content="format=html5;url=http://localhost/">
    <!-- End of Baidu Transcode -->

    <meta name="description" content="简书是一个优质的创作社区,在这里,你可以任性地创作,一篇短文、一张照片、一首诗、一幅画……我们相信,每个人都是生活中的艺术家,有着无穷的创造力。">
    <meta name="keywords" content="简书,简书官网,图文编辑软件,简书下载,图文创作,创作软件,原创社区,小说,散文,写作,阅读">
    ... (a large remainder omitted)
```
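The excerpt above can be fetched with a few lines of code. Here is a minimal sketch of the fetch-and-decode step, using the same urllib plus chardet approach as the full news-scraping script at the end of this article (the helper names are my own):

```python
from urllib import request

import chardet


def decode_bytes(raw):
    """Decode raw HTTP response bytes using chardet's detected encoding,
    falling back to utf-8 when detection fails."""
    guess = chardet.detect(raw)["encoding"]
    return raw.decode(guess or "utf-8")


def fetch_html(url):
    """Download a page and return its decoded text."""
    return decode_bytes(request.urlopen(url).read())


# Example (requires network access):
# html = fetch_html("http://www.jianshu.com/")
```

Detecting the charset with chardet instead of hard-coding utf-8 means the same code works on pages served in GBK or other encodings.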

That's a minimal introduction to web scraping with Python 3. Simple, isn't it? I suggest you type it out a few times yourself.

III. Using Python 3 to scrape the images on a page and save them to a local folder

Goal:

Scrape the images in a Baidu Tieba thread
Save the images to a local folder

Without further ado, here is the code. The comments in the code are detailed; read them carefully and you'll be able to follow along.
(screenshot of the code omitted)
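The article's code for this part exists only as a screenshot, so here is a minimal sketch of the same idea, under my own assumptions: a simple regex to collect image URLs and urllib to download them. The thread URL, the folder name, and the helper names are placeholders, not from the article; a real Tieba page may need a more specific pattern.

```python
import os
import re
from urllib import request


def extract_img_urls(html):
    """Return the src attribute of every <img> tag found in an HTML string."""
    return re.findall(r'<img[^>]+src="([^"]+)"', html)


def save_images(urls, folder="images"):
    """Download each image URL into a local folder, numbering the files."""
    os.makedirs(folder, exist_ok=True)
    for i, u in enumerate(urls):
        request.urlretrieve(u, os.path.join(folder, "%d.jpg" % i))


# Hypothetical usage -- the thread URL below is a placeholder:
# page = request.urlopen("https://tieba.baidu.com/p/123456").read().decode("utf-8")
# save_images(extract_img_urls(page))
```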
Now let's take an eager look at what beautiful images we scraped:



(screenshot of the scraped images omitted)
Just like that, we easily scraped 24 images. Simple, isn't it?

IV. Using Python 3 to scrape a news site's article list

Here we scrape only the news titles, news URLs, and news image links.
For now the scraped data is only displayed; once I have learned database operations in Python, I will save the scraped data to a database.
Things get slightly more complicated here, so I'll explain it step by step.

1. First we need to fetch the HTML page; Part II above covered how to do that.
2. Analyze the HTML tags we want to scrape.

(screenshot omitted)
From the image above, the information we want lives in the a and img tags inside the divs, so the question is how to extract it.

This is where the BeautifulSoup4 library we imported comes in. The key code:
(screenshot of the code omitted)
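The key BeautifulSoup call is the same one used in the complete script at the end of this section; here it is sketched against a tiny stand-in page shaped like the real Huxiu markup:

```python
from bs4 import BeautifulSoup

# A tiny stand-in for the scraped Huxiu page, shaped like its real markup
html = '''
<div class="hot-article-img">
  <a href="/article/211390.html" target="_blank"><img src="cover.jpg"/></a>
</div>
'''

# Parse with the built-in html.parser and select every node with
# class "hot-article-img", exactly as the full script does
soup = BeautifulSoup(html, "html.parser")
allList = soup.select(".hot-article-img")
print(len(allList))  # 1
```

The `.hot-article-img` argument is a CSS class selector, so `select` returns every matching div as a list of Tag objects.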
The allList obtained by that code is the news list we want; what it captured looks like this:

```
[<div class="hot-article-img">
<a href="/article/211390.html" target="_blank">
<img src="https://img.huxiucdn.com/article/cover/201708/22/173535862821.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214982.html" target="_blank" title="TFBOYS成员各自飞,商业价值天花板已现?">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/17/094856378420.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/213703.html" target="_blank" title="买手店江湖">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/17/122655034450.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214679.html" target="_blank" title="iPhone X正式告诉我们,手机和相机开始分道扬镳">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/14/182151300292.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214962.html" target="_blank" title="信用已被透支殆尽,乐视汽车或成贾跃亭弃子">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/16/210518696352.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214867.html" target="_blank" title="别小看“搞笑诺贝尔奖”,要向好奇心致敬">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/15/180620783020.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214954.html" target="_blank" title="10 年前改变世界的,可不止有 iPhone | 发车">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/16/162049096015.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214908.html" target="_blank" title="感谢微博替我做主">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/16/010410913192.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/215001.html" target="_blank" title="苹果确认取消打赏抽成,但还有多少内容让你觉得值得掏腰包?">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/17/154147105217.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214969.html" target="_blank" title="中国音乐的“全面付费”时代即将到来?">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/17/101218317953.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>, <div class="hot-article-img">
<a href="/article/214964.html" target="_blank" title="百丽退市启示录:“一代鞋王”如何与新生代消费者渐行渐远">
<!--视频和图片保留一个-->
<img src="https://img.huxiucdn.com/article/cover/201709/16/213400162818.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"/>
</a>
</div>]
```
The data has been captured, but it's messy, and much of it isn't what we want, so next we iterate over it to distill the useful information.

3. Extract the useful information

```python
# Iterate over the list and pull out the useful fields
for news in allList:
    aaa = news.select('a')
    # Only keep results with at least one <a> tag
    if len(aaa) > 0:
        # Article link
        try:  # an exception here means the field is missing
            href = url + aaa[0]['href']
        except Exception:
            href = ''
        # Article image URL
        try:
            imgUrl = aaa[0].select('img')[0]['src']
        except Exception:
            imgUrl = ''
        # News title
        try:
            title = aaa[0]['title']
        except Exception:
            title = '(no title)'
        print('Title', title, '\nurl:', href, '\nImage URL:', imgUrl)
        print('==============================================================================================')
```
The exception handling here matters because some news items may lack a title, a URL, or an image; without it, a single missing field could interrupt the whole scrape.

The filtered, useful information:

```
Title (no title)
url: https://www.huxiu.com/article/211390.html
Image URL: https://img.huxiucdn.com/article/cover/201708/22/173535862821.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title TFBOYS成员各自飞,商业价值天花板已现?
url: https://www.huxiu.com/article/214982.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/094856378420.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 买手店江湖
url: https://www.huxiu.com/article/213703.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/122655034450.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title iPhone X正式告诉我们,手机和相机开始分道扬镳
url: https://www.huxiu.com/article/214679.html
Image URL: https://img.huxiucdn.com/article/cover/201709/14/182151300292.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 信用已被透支殆尽,乐视汽车或成贾跃亭弃子
url: https://www.huxiu.com/article/214962.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/210518696352.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 别小看“搞笑诺贝尔奖”,要向好奇心致敬
url: https://www.huxiu.com/article/214867.html
Image URL: https://img.huxiucdn.com/article/cover/201709/15/180620783020.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 10 年前改变世界的,可不止有 iPhone | 发车
url: https://www.huxiu.com/article/214954.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/162049096015.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 感谢微博替我做主
url: https://www.huxiu.com/article/214908.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/010410913192.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 苹果确认取消打赏抽成,但还有多少内容让你觉得值得掏腰包?
url: https://www.huxiu.com/article/215001.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/154147105217.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 中国音乐的“全面付费”时代即将到来?
url: https://www.huxiu.com/article/214969.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/101218317953.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 百丽退市启示录:“一代鞋王”如何与新生代消费者渐行渐远
url: https://www.huxiu.com/article/214964.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/213400162818.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
```

With that, scraping the news site's article list is complete. The full code follows:

```python
from bs4 import BeautifulSoup
from urllib import request
import chardet

url = "https://www.huxiu.com"
response = request.urlopen(url)
html = response.read()
charset = chardet.detect(html)
html = html.decode(str(charset["encoding"]))  # decode the scraped HTML with the detected encoding

# Use html.parser as the parser
soup = BeautifulSoup(html, 'html.parser')
# Get every node with class="hot-article-img"
allList = soup.select('.hot-article-img')
# Iterate over the list and pull out the useful fields
for news in allList:
    aaa = news.select('a')
    # Only keep results with at least one <a> tag
    if len(aaa) > 0:
        # Article link
        try:  # an exception here means the field is missing
            href = url + aaa[0]['href']
        except Exception:
            href = ''
        # Article image URL
        try:
            imgUrl = aaa[0].select('img')[0]['src']
        except Exception:
            imgUrl = ''
        # News title
        try:
            title = aaa[0]['title']
        except Exception:
            title = '(no title)'
        print('Title', title, '\nurl:', href, '\nImage URL:', imgUrl)
        print('==============================================================================================')
```
Now that we have the data, we still need to store it in a database. Once the data is in a database, we can use it for later analysis and processing, or use the scraped articles to power a news API for an app.
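The article defers the database step until later. As a preview, here is a minimal sketch using Python's built-in sqlite3 module; the table name, column layout, and helper name are my own assumptions, not from the article:

```python
import sqlite3


def save_news(items, conn):
    """Insert (title, url, img_url) rows into a local SQLite table
    and return the total row count afterwards."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS news (title TEXT, url TEXT, img_url TEXT)"
    )
    conn.executemany("INSERT INTO news VALUES (?, ?, ?)", items)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM news").fetchone()[0]


# In-memory database for demonstration; pass a file path for real storage.
conn = sqlite3.connect(":memory:")
total = save_news(
    [("Demo title", "https://www.huxiu.com/article/211390.html", "cover.jpg")],
    conn,
)
print(total)  # 1
```

In the scraping loop above, each `(title, href, imgUrl)` triple could simply be appended to a list and handed to such a helper once the loop finishes.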