Summary: this post shows how to use Python to scrape answers from 考试资料网 (ppkao.com) and write them into a single HTML file, with support for multiple input URLs. It works around the fact that answers on the site cannot be copied directly.
Usage
- Find the question you need, e.g. http://www.ppkao.com/tiku/shiti/428976.html, and click "查看答案" (view answer). This yields a new link: http://www.ppkao.com/tiku/daan/428976/40da7044806ecf7eca6388e4cff72407.
- Run this script:
➜ ~ python /Users/yanzi/work/workspaces/python/ppkao/spider.py
.....Enter URL(s), multiple URLs separated by spaces....
http://www.ppkao.com/tiku/daan/428976/40da7044806ecf7eca6388e4cff72407
-----------------That is ok---------------
Note: the script requires Python 3.5; running it under Python 2.7 will fail (urllib.request, for one, does not exist there).
Source code
The source can also be viewed directly on GitHub.
import sys, urllib.request, os
from bs4 import BeautifulSoup

# python /Users/yanzi/work/workspaces/python/spider.py http://www.baidu.com
# print(sys.argv[0])
# print(urllib.__file__)

# Get the directory this script lives in
def cur_file_dir():
    path1 = sys.path[0]
    if os.path.isdir(path1):
        return path1
    elif os.path.isfile(path1):
        return os.path.dirname(path1)
def get_source_url():
    # Alternative entry point: take the URL from the command line
    return sys.argv[1]
def get_template_body():
    # Load the HTML template that sits next to the script
    path = cur_file_dir() + "/template.html"
    soup2 = BeautifulSoup(open(path), "html.parser")
    body = soup2.body
    return (soup2, body)
def get_answer(url):
    content = urllib.request.urlopen(url).read()
    # fp = open("/Users/yanzi/Desktop/aaa.html", "wb")
    # fp.write(content)
    # fp.close()
    soup = BeautifulSoup(content, "html.parser")
    # print(soup.title)
    answer1 = soup.find('div', class_='single-siti clearfix')  # question block
    answer1.i.extract()                                        # strip the icon tag
    answer2 = soup.find('div', class_='tm-bottom')             # answer block
    answer2.a.extract()                                        # strip the embedded link
    answer3 = soup.find('div', class_='analysis clearfix')     # analysis block
    return (answer1, answer2, answer3)
"""
def get_url_from_browser():
path = cur_file_dir()
# dr = webdriver.Chrome(executable_path=path + '/chromedriver')
chromedriver = '/Applications/Google Chrome.app/Contents/MacOS' + '/chromedriver'
os.environ["webdriver.chrome.driver"] = chromedriver
dr = webdriver.Chrome(chromedriver)
dr.get("http://www.163.com")
url = dr.current_url
print(url)
def get_url_from_browser2():
path = cur_file_dir()
dr = webdriver.Firefox()
# dr.get("http://www.163.com")
url = dr.current_url
print(url)
# url = dr.open_new_tab("http:\\www.163.com")
"""
# url = get_source_url()
(soup2, body) = get_template_body()
url = input(".....Enter URL(s), multiple URLs separated by spaces....\n")
# print(url)
urls = url.split(" ")
for temp_url in urls:
    (a1, a2, a3) = get_answer(temp_url)
    body.append(a1)
    body.append(a2)
    body.append(a3)
    tt = BeautifulSoup("<br/>", "html.parser")
    body.append(tt)

# Save the assembled page to answer.html
fp = open("/Users/yanzi/Desktop/answer.html", "w")
fp.write(soup2.prettify())
fp.close()
print('-----------------That is ok---------------')
Notes
1. My original plan was to have Python read the matching URL straight from the browser that is already open and scrape the answer from there. In my experiments, however, there is no way to obtain the current browser address directly: Selenium can launch a new browser shell, but it is a separate instance from the one already open, so manually pasting the address over is unavoidable.
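For reference, a minimal Selenium sketch of that limitation (assuming selenium is installed and a chromedriver binary is available): current_url only reports the address of the browser instance the driver launched itself.

from selenium import webdriver

# Selenium controls only the browser it starts; it cannot attach to a
# window that was already open before the script ran.
dr = webdriver.Chrome()
dr.get("http://www.ppkao.com")
print(dr.current_url)  # the URL of Selenium's own window, not yours
dr.quit()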
2. Making a network request in Python:
content = urllib.request.urlopen(url).read()
soup = BeautifulSoup(content, "html.parser")
urllib issues the request, read() returns the page content, and passing that content to the BeautifulSoup constructor yields a parsed document. You can then use soup.title or soup.body to get the corresponding elements, and answer1 = soup.find('div', class_='single-siti clearfix') uses find to locate the first div with that class. Returning (answer1, answer2, answer3) hands all three values back at once as a tuple.
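A self-contained sketch of that fetch-and-parse step (the URL is the example link from this post, and the class name is the one the script targets on ppkao.com; adjust both for your page):

import urllib.request
from bs4 import BeautifulSoup

url = "http://www.ppkao.com/tiku/daan/428976/40da7044806ecf7eca6388e4cff72407"
content = urllib.request.urlopen(url).read()   # raw bytes of the page
soup = BeautifulSoup(content, "html.parser")   # parse into a document tree
print(soup.title)                              # the <title> element
question = soup.find('div', class_='single-siti clearfix')
print(question.get_text(strip=True) if question else "not found")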
3. open can read a local file directly into BeautifulSoup:
soup2 = BeautifulSoup(open(path), "html.parser")
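A slightly more careful variant pins the encoding and closes the file handle when done (a sketch; UTF-8 is an assumption about the template file):

from bs4 import BeautifulSoup

path = "template.html"                       # the template next to the script
with open(path, encoding="utf-8") as f:      # assumes the template is UTF-8
    soup2 = BeautifulSoup(f, "html.parser")  # bs4 accepts an open file object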
4. A Python for loop over the split-up URLs:
urls = url.split(" ")
for temp_url in urls:
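One caveat worth noting: split(" ") yields empty strings when the input contains consecutive spaces, while a bare split() splits on any run of whitespace, which is sturdier for hand-pasted input. A quick sketch:

url = "http://a.example/1   http://b.example/2"
print(url.split(" "))  # ['http://a.example/1', '', '', 'http://b.example/2']
print(url.split())     # ['http://a.example/1', 'http://b.example/2']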
5. Saving a file in Python:
fp = open("/Users/yanzi/Desktop/answer.html", "w")
fp.write(soup2.prettify())
fp.close()
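On Python 3 it is safer to pass an explicit encoding when writing Chinese content, and a with block closes the file even if the write fails (a sketch of the same step):

with open("/Users/yanzi/Desktop/answer.html", "w", encoding="utf-8") as fp:
    fp.write(soup2.prettify())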
All of this is fairly simple; I am writing it down here for the record. If needed, the URL list could also live in a database or a text file and be processed in batch.
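For instance, a batch variant might read one URL per line from a text file (a sketch; urls.txt is a hypothetical input file, and get_answer and body are the names from the script above):

# Hypothetical batch driver: read one URL per line from urls.txt
with open("urls.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

for temp_url in urls:
    (a1, a2, a3) = get_answer(temp_url)
    body.append(a1)
    body.append(a2)
    body.append(a3)
    body.append(BeautifulSoup("<br/>", "html.parser"))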
References
- Beautiful Soup 4.2.0 documentation