
Python Web Crawler: A Hands-On Tutorial

2024/9/24 1:24:06  Source: https://blog.csdn.net/KUBET9/article/details/140044767

This tutorial uses the Dcard pet board as its example.

  1. Code-writing logic
    1. First, crawl all the article links on the pet board front page into a list; since the browser loads more posts as you scroll down, this also means sending a GET request for each extra "page"
    2. Take the links out of the list one by one, combine each into a full URL, GET the page, and look for image tags
    3. If an image tag is found, create a file locally and save the image to it
    4. Write every URL that is shown to the user into a text file that serves as an image index (a skeleton of this flow is sketched right after this list)
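
As a rough orientation, the four steps above map onto the skeleton below. This is a minimal sketch only: the three functions are hypothetical placeholders standing in for the real code developed in the rest of the tutorial.

def collect_front_page_links():
    # Step 1 (placeholder): would scrape hrefs from the front page.
    return ["/f/pet/p/123456789-example"]  # hypothetical data

def fetch_more_links(last_href):
    # Step 1 (placeholder): would emulate the infinite-scroll API call.
    return []

def crawl_article(url):
    # Steps 2-4 (placeholder): would GET the page, save images, log the URL.
    print("would crawl:", url)

links = collect_front_page_links()
links += fetch_more_links(links[-1])
for href in links:
    crawl_article("https://www.dcard.tw" + href)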

2. First, import the required Python packages: requests, BeautifulSoup, and json

import requests
from bs4 import BeautifulSoup
import json

3. Start by GETting the pet board front page, parse the response into HTML with BeautifulSoup, and open a txt file to hold the index

test = open("spider/pet/test.txt", "w", encoding='UTF-8')
p = requests.Session()
url = requests.get("https://www.dcard.tw/f/pet")
soup = BeautifulSoup(url.text, "html.parser")
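
A small defensive touch that is not in the original, but worth considering: check the response status before parsing. requests provides raise_for_status() for exactly this (reusing the imports above):

resp = requests.get("https://www.dcard.tw/f/pet")
resp.raise_for_status()  # raises requests.HTTPError on a 4xx/5xx response
soup = BeautifulSoup(resp.text, "html.parser")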

4. Store the article link URLs from the front page in a list

sel = soup.select("div.PostList_wrapper_2BLUM a.PostEntry_root_V6g0r")
a = []
for s in sel:
    a.append(s["href"])
url = "https://www.dcard.tw" + a[2]

5. Scrolling down the front page reveals that the site sends the server a GET request fetching the next 30 posts

for k in range(0, 10):
    post_data = {"before": a[-1][9:18], "limit": "30", "popular": "true"}
    r = p.get("https://www.dcard.tw/_api/forums/pet/posts",
              params=post_data,
              headers={"Referer": "https://www.dcard.tw/", "User-Agent": "Mozilla/5.0"})
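
The before parameter is the ID of the oldest post seen so far; a[-1][9:18] slices it out of the href. This relies on the prefix "/f/pet/p/" being exactly 9 characters and the post ID being 9 digits, an assumption about Dcard's URL scheme at the time:

href = "/f/pet/p/123456789-cute-cat"  # hypothetical href for illustration
print(href[9:18])  # -> 123456789  ("/f/pet/p/" occupies indices 0-8)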

6. The file that comes back is in JSON format. Not comfortable handling raw JSON? Just convert it into Python objects, then assemble the full URLs and append them to the list

    # Still inside the "for k" loop from step 5.
    data2 = json.loads(r.text)
    for u in range(len(data2)):
        Temporary_url = "/f/pet/p/" + str(data2[u]["id"]) + "-" + str(data2[u]["title"].replace(" ", "-"))
        a.append(Temporary_url)
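
For reference, each element of the decoded list looks roughly like the stand-in below. This is based only on the two fields the code actually reads ("id" and "title"); the real API response carries more fields:

sample_post = {"id": 123456789, "title": "My cat did a thing"}  # minimal stand-in
path = "/f/pet/p/" + str(sample_post["id"]) + "-" + sample_post["title"].replace(" ", "-")
print(path)  # -> /f/pet/p/123456789-My-cat-did-a-thing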

7. Next, take the URLs out of the list and GET each page

j = 0
q = 0
for i in a[2:]:
    url = "https://www.dcard.tw" + i
    j += 1
    print("Page", j, "URL:", url)
    test.write("Page {} URL: {}\n".format(j, url))
    url = requests.get(url)
    soup = BeautifulSoup(url.text, "html.parser")
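
The loop above fires requests back to back; when running something like this yourself, a short pause between fetches is gentler on the server. This is my addition, not part of the original:

import time

for i in a[2:]:
    time.sleep(1)  # my addition: pause about a second between requests
    # ...fetch and parse the page exactly as in step 7...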

8. Use BeautifulSoup to check whether the page has matching image tags, then, applying what we learned last time, save the images to files

    # Still inside the "for i" loop from step 7.
    sel_jpg = soup.select("div.Post_content_NKEl9 div div div img.GalleryImage_image_3lGzO")
    for c in sel_jpg:
        q += 1
        print("Image", q, ":", c["src"])
        test.write("Image {}: {}\n".format(q, c["src"]))
        pic = requests.get(c["src"])
        img2 = pic.content
        pic_out = open("spider/pet/" + str(q) + ".png", 'wb')
        pic_out.write(img2)
        pic_out.close()
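
One quirk worth noting: every image is saved with a .png extension even though the gallery likely serves JPEGs. A sketch of deriving the extension from the URL instead, which is my variation and not the original's behavior:

import os
from urllib.parse import urlparse

src = "https://example.com/photos/1234.jpg"  # hypothetical image URL
ext = os.path.splitext(urlparse(src).path)[1] or ".png"  # fall back to .png
print("spider/pet/1" + ext)  # -> spider/pet/1.jpg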

9. Remember to close the txt index file opened earlier, and print a crawl-finished message as a reminder to yourself

test.close()
print("爬重结束")

10. The complete code (change the range to crawl more articles)

# Get all of the article URLs from the front page
import requests
from bs4 import BeautifulSoup
import json

test = open("spider/pet/test.txt", "w", encoding='UTF-8')
p = requests.Session()
url = requests.get("https://www.dcard.tw/f/pet")
soup = BeautifulSoup(url.text, "html.parser")
sel = soup.select("div.PostList_wrapper_2BLUM a.PostEntry_root_V6g0r")
a = []
for s in sel:
    a.append(s["href"])
url = "https://www.dcard.tw" + a[2]

# Emulate infinite scroll: ask the API for the next 30 posts, ten times over
for k in range(0, 10):
    post_data = {"before": a[-1][9:18], "limit": "30", "popular": "true"}
    r = p.get("https://www.dcard.tw/_api/forums/pet/posts",
              params=post_data,
              headers={"Referer": "https://www.dcard.tw/", "User-Agent": "Mozilla/5.0"})
    data2 = json.loads(r.text)
    for u in range(len(data2)):
        Temporary_url = "/f/pet/p/" + str(data2[u]["id"]) + "-" + str(data2[u]["title"].replace(" ", "-"))
        a.append(Temporary_url)

j = 0  # page counter, for printing
q = 0  # image counter, for printing
for i in a[2:]:
    url = "https://www.dcard.tw" + i
    j += 1
    print("Page", j, "URL:", url)
    test.write("Page {} URL: {}\n".format(j, url))
    url = requests.get(url)
    soup = BeautifulSoup(url.text, "html.parser")
    sel_jpg = soup.select("div.Post_content_NKEl9 div div div img.GalleryImage_image_3lGzO")
    for c in sel_jpg:
        q += 1
        print("Image", q, ":", c["src"])
        test.write("Image {}: {}\n".format(q, c["src"]))
        pic = requests.get(c["src"])
        img2 = pic.content
        pic_out = open("spider/pet/" + str(q) + ".png", 'wb')
        pic_out.write(img2)
        pic_out.close()
test.close()
print("Crawl finished")
