python login and crawl email address

By Oxford | 2019-12-18 | 2 Mins Read

1. Requirements
Recently someone asked me to help write a Python program that crawls 2,000 entries (subjects and suggestions) from the Dalian Maritime University mailbox system and saves them to an Excel file.

2. Project Analysis
First, open the mailbox list page at http://oa.dlmu.edu.cn/echoWall/listEchoWall.do.

The list page only shows each entry's subject; to read the suggestion text you have to open the detail page. Clicking an entry redirects to the login page at https://id.dlmu.edu.cn/cas/login, which means the data we want is only reachable after logging in.

Open Chrome's developer tools, fill in a prepared username and password, click login, and inspect the login request in the developer tools.

Watching that request shows that, besides the username and password, the login carries several other parameters; a few of them are empty, which means they can be omitted.

Continuing, after a successful login the browser redirects straight to the detail page.

The detail-page request carries a cookie plus four other parameters. Testing shows that only the pkId parameter is needed to fetch a detail page, and pkId can be read from the list page.

With that, the whole request flow is analyzed.

First, a quick introduction to the packages used in this project: requests, bs4, and xlwt.

requests is a very practical Python HTTP client library. Since this workflow requires logging in, and the detail requests must carry cookie information, we chose requests: its Session object keeps cookies automatically, so there is no need to maintain them by hand.
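
A minimal sketch of that behavior (the example.com URLs are placeholders, not this project's endpoints):

import requests

s = requests.Session()
# The login response sets cookies on the session object...
s.post("https://example.com/login", data={"username": "u", "password": "p"})
# ...and every later request made through the same session sends them back
# automatically, so the code never has to handle cookies itself.
r = s.get("https://example.com/protected")
print(r.status_code)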

bs4 (Beautiful Soup) is a Python library for extracting data from HTML or XML documents; it is a particularly convenient package, and it worked well in my testing.
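
For example, the extraction pattern used on the list page boils down to this (the HTML fragment here is made up for illustration):

from bs4 import BeautifulSoup

html = '<table><input class="choose" value="42"/></table>'
soup = BeautifulSoup(html, "html.parser")
# find_all(class_=...) matches elements by CSS class; attrs exposes their attributes
for tag in soup.find_all(class_='choose'):
    print(tag.attrs['value'])  # prints: 42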

xlwt is a Python package for writing Excel files; note that it produces the legacy Excel 97-2003 .xls format.
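
A minimal sketch of writing one styled cell (the file name is arbitrary):

import xlwt

wb = xlwt.Workbook(encoding='utf-8')
ws = wb.add_sheet('demo')
style = xlwt.XFStyle()
font = xlwt.Font()
font.height = 220  # xlwt font height is in twentieths of a point, so 220 = 11pt
style.font = font
ws.write(0, 0, 'hello', style)
wb.save('demo.xls')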

Next, a brief walkthrough of the implementation:

1. Get the detail-page ids from the list pages

Since the list pages are accessible without logging in, we can fetch their content directly with requests.get(); once we have the HTML, the bs4 helpers make it easy to pull out each entry's detail-page id.

2. Simulate the login to obtain a session

Create a session object with requests.Session(), then use it to send a login request with the parameters and headers observed above, keeping a session open with the server.

3. Fetch the data

Using the session object from step 2 and the ids from step 1, request each detail page directly, then use bs4 again to extract the subject and the suggestion text.

4. Write to Excel

Write the data collected in step 3 into an Excel sheet with xlwt.

3. Results
(The original post showed a screenshot of the generated Excel file here.)

4. Code
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# datetime: 2019/3/28 9:47
# software: PyCharm
"""Crawl subjects and suggestions from the DLMU mailbox and save them to Excel."""

__author__ = 'amx'

import requests
# HTML parsing
from bs4 import BeautifulSoup
# Excel writing
import xlwt

# Get the detail-page ids for one page of the list
def get_currentPage_list(page):
    res = requests.get('http://oa.dlmu.edu.cn/echoWall/listEchoWall.do?page=' + str(page))
    # Parse the HTML and collect each entry's detail-page id
    soup = BeautifulSoup(res.text, "html.parser")
    table = soup.find_all('table')
    ids = []
    for item in table[1].find_all(class_='choose'):
        ids.append(item.attrs['value'])
    return ids

# Collect the detail-page ids of all entries
allIds = []
for i in range(200):
    allIds += get_currentPage_list(i)
print(len(allIds))

# Request headers
REQ_HEADERS = {
"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
'Connection': 'keep-alive'
}
# Login parameters; note that the CAS 'execution' token below was captured from
# one rendering of the login page and is typically flow-scoped, so it will
# expire and needs to be re-captured before each run
REQ_PAYLOAD = {
"username": "username",
"password": "password",
"type": 1,
"_eventId": "submit",
"execution": "a1bf168c-955a-4e97-bb57-50e6c8fae89a_ZXlKaGJHY2lPaUpJVXpVeE1pSjkuT0E2NU5OVU90VVVVMWoyVUdIZWViVXg0eVdMU242ZDZaMU9KQXI1Nk9wS0cvdDNPaVF2MkduVVFuR1k1Z0tFVFdnVkZ0M1g1UEIwR01ZaEsxTFd1SGdLbXdsdjNPOUpFUEsxWTVQNFNpZzNjQ29VTCtPZ1BYOFNLL3NTbDBzdCswRHVLblNCMnVLZUNja3k2TXY1c3BERkRhSjFZK0l2dXN0VjZOU3QrRlhEeWhSWXpSQnd5cm5kblk4OVJLQlRGOE8xNno0MTJMTGh1R3BvQkd0cFUxaHEzSDNqQTBEU1FJZ1Zjcmt4L00ra2s5YnlDRUl4amNXOEU3RG5ZTjd3czgrajZwSGw2UXNuSjFIK3dIeEVKL3JPT3FZZFBrQm83SE5VbkFVYWlhQ0NnaEJ0SnM0TURiUGpRQUpoaVVDbHl5RWszeDFIc1BGSTB1clg2clBuM2xyR2wvdkVDOHNVSlFoZVlvaWhLRlJ1WVlEY0IzbTRhRjhyS0t6STQzZ3ROemxmYndJcmxNSHhGU1lYLy9rTEpzTnBVRDUrSFNCakNqd3JoUTk1NGtVeDBJNk5zcy9LalIxeldlT1d0QkpaQnVJeUJsVnBMdWE1dkhvcjFoME1WU1VGbk1mT2tqb3VjUTVVdUg4cHZrU1JLbTVOeXRsZzN2UWpQMVZLc3NmckozdFk3NHFrUEdIMEFhRTM3dnYxWFdxODdSYUxBb1IzMW9hRlltbk1hTVdjZzN0cHVHaUpXQWlLVzVlZytxWHNZVU1EUy9IS1ZpWnFJanlsS1lxTGQvdUE2cWpNWHkwODJQRi9LOEhOQ2FOS09XRnU3bUdTWk85eTNIK3ljQXBvWGEwTGJlM0RtNFVlOTI3aUIvMGh2cEpVeGFqcWQ5aE5ReFZpMmJtVjQ4M29icWhWaFRFTVo3dWp1RmFTRlJPd1dkMGdnbUZOblUreVpVd0dvQXIxNlVUcXlyQmZsaCtvSnh6Z2xQRkl2d2tsbTdVQTltaStFa0pZNWpmZ2tHUGFHNjJ0M3J3aUlXNlNzajlxY2FWT1Y5Uzk4Yy9CMy94WXlCNGp3c3Jnby95U2pRenRUczRRbzQzNUtYaUt1c050THllTURZU0RzODlXUG9zTU5rRWh0SGE0QnhWMFpOZXFwd0N6N01ObXgvaGQySW1hSFVYUmNBbUpPcVlLK09nNnYwNVdSUUhwUHQrMDNaVW9PQThPTnNWdkxpcEVFZ2lWWGZJSTBMSStWTFVueUE4bFYxcVk2MnJWeVlCSytNVEdqcDlnaGVmTythQ3doTE5iOHpvaWU2aWVBL2VGR1RDSmJ1ZUVGc3pydU1EZFBzNXg4c1VFeVJvbElmRUVXZEoxTFBtZG8ydXVLc3hKcU1HMUhkS1dLRG1RNzM3amhXaS9mKzY4cUxGaW85YUlXR2hDdTY3ejE3ZGNOMGpPTmxTS0dMS1c2S0xFcllneXpBMDVUTXBRcnhGQXcwaFd0N25zQ3pPNmdFbmw5d0JXVjRFU1VNbkU3NGpYV2krSlRFQkZnWmUxM05xVXZpbVk5bjF1VEdaMEJtdkVSNnR4cG83ZkhXbXVCcURGdG9zTUdaTlpFTlgyR21ZeGtQaTBRakVHL1EzMmp5SllyUlFETlU3eTV1WmNwTWU4RGgyYU1OM29OWXp0TFYvZXN3UEF5d3lYamlUSS9hWWI0RktFd1g1eW44dUNtYzdTTWpvbnZ6Um9XZlRLUTZ5TGVoaTNaQTgrVlg3L1RrVGQxSUxjZXNncnFUa3dPQ0pNbU9sL3M4WU9hbFpuOXF3cGhJeEFxOXk2Mm14Wkx1Sm5yVUMyMUM0Y3FZU1h6ZWdnQ0pKaz0ueU9Ub1VOMGt0OF9ib1g3OVd6aEx0b2dwVVFnelBkQkRGSDhrT3dYTFdORW01MlVFYVVxX0huQ0NTbFZScW1VZW9vSkxFemVKcFFHQXBHMWVwaXQzckE="
}
# requests' Session keeps cookies automatically, so no manual cookie handling is needed
S = requests.Session()
# Log in first to obtain the session cookies; requesting a detail page without
# them would just redirect back to the login page
R_login = S.post("https://id.dlmu.edu.cn/cas/login", headers=REQ_HEADERS, data=REQ_PAYLOAD)
# Set the response encoding
R_login.encoding = "UTF-8"
result = []
# Create the workbook and set its encoding
wb = xlwt.Workbook(encoding='ascii')
# Create the sheet
ws = wb.add_sheet('dataSheet')
# Set the column widths
ws.col(0).width = 8888
ws.col(1).width = 8888 * 6

# Title style: centered horizontally and vertically
titleStyle = xlwt.XFStyle()
titleal = xlwt.Alignment()
titleal.vert = 0x01
titleal.horz = 0x02
titleStyle.alignment = titleal
# Font size (xlwt heights are in twentieths of a point)
font = xlwt.Font()
font.height = 500
titleStyle.font = font

# Header style: centered horizontally and vertically
headStyle = xlwt.XFStyle()
headal = xlwt.Alignment()
headal.vert = 0x01
headal.horz = 0x02
headStyle.alignment = headal
font = xlwt.Font()
font.height = 350
headStyle.font = font

# Content style: wrap long text
contentStyle = xlwt.XFStyle()
contental = xlwt.Alignment()
contental.wrap = xlwt.Alignment.WRAP_AT_RIGHT
contentStyle.alignment = contental
font = xlwt.Font()
font.height = 220
contentStyle.font = font

# Subject style: centered horizontally and vertically
subjectStyle = xlwt.XFStyle()
subjectal = xlwt.Alignment()
subjectal.vert = 0x01
subjectal.horz = 0x02
subjectStyle.alignment = subjectal
font = xlwt.Font()
font.height = 220
subjectStyle.font = font

# Merge the first row as the title bar
ws.write_merge(0, 0, 0, 2, '大连海事大学信箱数据', titleStyle)
ws.write(1, 0, '主题', headStyle)
ws.write(1, 1, '内容', headStyle)
# Fetch each detail page and write its fields to the sheet
for i in range(len(allIds)):
    # Request the detail page (only the pkId parameter is needed)
    R_detail = S.post("http://oa.dlmu.edu.cn/echoWall/detailLetter.do?pkId=" + str(allIds[i]))
    # Parse the HTML and extract the fields we want
    soup = BeautifulSoup(R_detail.text, "html.parser")
    table = soup.find_all('table')
    if len(table) >= 1:
        tds = table[0].find_all('td')
        if len(tds) > 5:
            # Strip spaces, newlines, carriage returns and tabs from the text
            subject = tds[0].text.replace("\n", "").replace("\r", "").replace("\t", "").strip()
            content = tds[4].text.replace("\n", "").replace("\r", "").replace("\t", "").strip()
            ws.write(i + 2, 0, subject, subjectStyle)
            ws.write(i + 2, 1, content, contentStyle)
            row = {'主题': subject, '内容': content}
            print(row)
            result.append(row)
# Save to the target file; xlwt writes the legacy .xls format,
# so use an .xls extension rather than .xlsx
wb.save(r'E:\大连海事大学信箱数据.xls')
# Echo the collected rows to the console
for resultItem in result:
    print(resultItem)

5. Notes
Since this project is quite small and all three packages are very common, I haven't gone into detail about how to use them; there is plenty of material online. This post only sketches the approach and attaches the code. I haven't written many blog posts before, so please bear with me if anything reads poorly.
Copyright notice: this is an original article by CSDN blogger amxliu, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when republishing.
Original link: https://blog.csdn.net/weixin_39080216/article/details/88869191
