Scraping My Jike Favorites

I have a habit of saving the odd spicy picture on Jike, but Jike's login mechanism gave me some trouble. I eventually found an article about it online, though that was hard to follow too; once I roughly understood what was going on, I rolled my own version.

Jike's mechanism works like this: you scan a QR code to log in to the site and receive an access_token, then a few minutes later a refresh_token. On every later login the client calls an endpoint, sending the refresh_token to the backend, which returns a freshly minted access_token. That very first refresh_token has to be copied out of the browser by hand. The code is as follows:

import requests

# Send the refresh_token to the backend and get a fresh access_token back
def refresh_token(refresh_token):
    user_agent = getUserAgent()  # pick a User-Agent string (see below)
    url = "https://app.jike.ruguoapp.com/app_auth_tokens.refresh"
    headers = {"Origin": "https://web.okjike.com",
               "Referer": "https://web.okjike.com/collection",
               "User-Agent": user_agent}
    headers["x-jike-refresh-token"] = str(refresh_token)
    r = requests.get(url, headers=headers)
    content = r.text  # JSON body carrying the new token(s)
    return content
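For reference, the body that comes back is plain JSON. Judging from how it is consumed further down, it carries at least an x-jike-access-token field (the exact shape may vary by app version); a minimal parse, with my_refresh_token as a hypothetical placeholder for the value you pasted from the browser, looks like this:

import json

my_refresh_token = "paste-your-refresh-token-here"  # hypothetical placeholder

resp = refresh_token(my_refresh_token)
tokens = json.loads(resp)
print(tokens['x-jike-access-token'])  # the fresh access token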

Constructing the user_agent; st here is my own utility module:

import random

from tools import Tools as tl
from tools import Settings as st

def getUserAgent():
    # pick a random User-Agent from the list kept in Settings
    agentList = st.user_agent_list
    return random.choice(agentList)
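If you don't have my tools package, a minimal stand-in for Settings is easy to write. The UA strings below are ordinary desktop browser examples I picked for illustration, not anything Jike-specific:

# Hypothetical stand-in for my Settings module
class Settings:
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/605.1.15 "
        "(KHTML, like Gecko) Version/12.0.3 Safari/605.1.15",
    ]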

Once you have the latest token you can do whatever you want with it:

import json

if __name__ == '__main__':
    # the string literal below is the refresh_token copied out of the browser
    access_token = refresh_token('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkYXRhIjoibjV0dVlqcVMrV0VVSDJKYTMwY0JYOTNcL1p4RTlqRExGTW1PZGRXcU9iaWZqOEZ3M3RrNjNNXC81enJsTUQ5ajNVMFVJRHZSNjlzYmhOWTBDejlQTXdXalwvSzBUcHRpRXJFMFZnXC9NSFVOYjFHaDVGajFzSEVKWm42TzR5aUk3XC9IaklrUENNeHNsSXRmNm1nVWdTUGZBbG1jZkNkdUdsblwvTGRvVGQ1UFJjQ3FNPSIsInYiOjMsIml2IjoiTWFQdTlpRUJqbUVcLzlIZURGdVVhZUE9PSIsImlhdCI6MTU1MzQwNzgwMS43NDl9.jWG-7-dUjZqSrgMJVnj1pIf52tqoSMHav_mop0_aABI')
    dic = json.loads(access_token)
    startSpider('https://app.jike.ruguoapp.com/1.0/users/collections/list', dic['x-jike-access-token'])
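Pasting the refresh_token by hand every run gets old fast. A small on-disk cache helps; this is just a sketch, assuming the refresh response may also carry a rotated x-jike-refresh-token (check what your own response actually contains), with jike_tokens.json as a hypothetical local path:

import json
import os

TOKEN_FILE = "jike_tokens.json"  # hypothetical cache location

def save_tokens(token_dict):
    # persist the whole refresh response for the next run
    with open(TOKEN_FILE, "w") as f:
        json.dump(token_dict, f)

def load_refresh_token():
    # return the last refresh_token we saved, if any
    if os.path.exists(TOKEN_FILE):
        with open(TOKEN_FILE) as f:
            return json.load(f).get("x-jike-refresh-token")
    return None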

Fire up the spider!

loadMoreKey = None  # global pagination cursor; Jike's paging API expects it on every request

def startSpider(url, access_token):
    global loadMoreKey  # must be declared before the name is used in this function
    user_agent = getUserAgent()
    headers = {"Accept": "application/json",
               "App-Version": "5.3.0",
               "Content-Type": "application/json",
               "Origin": "https://web.okjike.com",
               "platform": "web",
               "Referer": "https://web.okjike.com/collection",
               "User-Agent": user_agent}
    headers["x-jike-access-token"] = access_token
    tl.UsingHeaders = headers  # saved so downloads reuse the same headers; optional
    payload = {'limit': 20, 'loadMoreKey': loadMoreKey}

    response = requests.post(url, headers=headers, data=json.dumps(payload))
    print(response.status_code)
    data = json.loads(response.content.decode("utf-8"))
    loadMoreKey = data.get('loadMoreKey')
    data_list = data['data']
    for dic in data_list:
        pictures = dic.get('pictures', [])  # text-only posts carry no pictures
        for picDic in pictures:
            picurl = picDic['picUrl']
            tl.downLoadFile(picurl)
    print('------ done with this batch of 20 ------')
    if loadMoreKey is not None:  # no key back means we hit the last page
        startSpider(url, access_token)
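One caveat about the recursion: each page adds a stack frame, and CPython's default recursion limit is around 1000, so a collection much past roughly 20,000 items (at 20 per page) would eventually raise RecursionError. A plain loop sidesteps that. Here is a sketch of the same request done iteratively, reusing getUserAgent and tl.downLoadFile from above:

import json
import requests

def startSpiderIter(url, access_token):
    # Same request as startSpider, but looping instead of recursing,
    # so a large collection cannot exhaust the call stack.
    headers = {"Accept": "application/json",
               "App-Version": "5.3.0",
               "Content-Type": "application/json",
               "Origin": "https://web.okjike.com",
               "platform": "web",
               "Referer": "https://web.okjike.com/collection",
               "User-Agent": getUserAgent(),
               "x-jike-access-token": access_token}
    key = None
    while True:
        payload = {'limit': 20, 'loadMoreKey': key}
        resp = requests.post(url, headers=headers, data=json.dumps(payload))
        body = json.loads(resp.content.decode("utf-8"))
        for post in body['data']:
            for pic in post.get('pictures', []):
                tl.downLoadFile(pic['picUrl'])
        key = body.get('loadMoreKey')
        if key is None:  # last page reached
            break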