
A demo of scraping online stock information with a Python crawler

Published: 2020-09-13 08:45:17 · Source: 腳本之家 · Author: OliverkingLi · Category: Development

The example is as follows:

import requests
from bs4 import BeautifulSoup
import traceback
import re

def getHTMLText(url):
    """Fetch a page and return its text, or "" on any failure."""
    try:
        r = requests.get(url)
        r.raise_for_status()
        r.encoding = r.apparent_encoding  # guess the encoding from the body
        return r.text
    except:
        return ""

def getStockList(lst, stockURL):
    """Collect ticker codes ("sh"/"sz" + 6 digits) from hrefs on the list page."""
    html = getHTMLText(stockURL)
    soup = BeautifulSoup(html, 'html.parser')
    a = soup.find_all('a')
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
        except:
            continue  # skip <a> tags with no href or no matching code

def getStockInfo(lst, stockURL, fpath):
    """Fetch each stock's detail page and append its fields to fpath."""
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})

            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'股票名稱': name.text.split()[0]})

            # each <dt> holds a field name, the matching <dd> its value
            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                key = keyList[i].text
                val = valueList[i].text
                infoDict[key] = val

            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
        except:
            traceback.print_exc()
            continue

def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'D:/BaiduStockInfo.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

main()
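For clarity, here is a quick standalone check of the regular expression used in getStockList. It matches a Shanghai ("sh") or Shenzhen ("sz") prefix followed by a six-digit ticker code; the sample hrefs below are made up for illustration and are not taken from the real list page.

import re

# the same pattern getStockList uses to pull ticker codes out of hrefs
pattern = re.compile(r"[s][hz]\d{6}")

samples = [
    "http://quote.eastmoney.com/sh600000.html",  # Shanghai-listed stock
    "http://quote.eastmoney.com/sz000001.html",  # Shenzhen-listed stock
    "http://quote.eastmoney.com/help.html",      # no ticker code -> no match
]
for href in samples:
    print(href, "->", re.findall(pattern, href))
# prints ['sh600000'], ['sz000001'], [] respectively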


An optimized version with a progress indicator added. Supplying the page encoding to getHTMLText directly (the list page is GB2312) skips the relatively slow r.apparent_encoding detection, and the console now shows an in-place percentage that updates as each stock is processed:

import requests
from bs4 import BeautifulSoup
import traceback
import re

def getHTMLText(url, code="utf-8"):
    """Fetch a page; the caller passes the encoding so the slow
    apparent_encoding detection can be skipped."""
    try:
        r = requests.get(url)
        r.raise_for_status()
        r.encoding = code
        return r.text
    except:
        return ""

def getStockList(lst, stockURL):
    html = getHTMLText(stockURL, "GB2312")  # the list page is GB2312-encoded
    soup = BeautifulSoup(html, 'html.parser')
    a = soup.find_all('a')
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
        except:
            continue

def getStockInfo(lst, stockURL, fpath):
    count = 0
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'股票名稱': name.text.split()[0]})
            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                key = keyList[i].text
                val = valueList[i].text
                infoDict[key] = val
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
            count = count + 1
            # "\r" redraws the percentage on the same console line
            print("\r當前進度: {:.2f}%".format(count * 100 / len(lst)), end="")
        except:
            count = count + 1
            print("\r當前進度: {:.2f}%".format(count * 100 / len(lst)), end="")
            continue

def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'BaiduStockInfo.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

main()
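The progress display relies only on the carriage return: printing "\r" with end="" rewrites the same console line on each update instead of scrolling. Below is a minimal standalone sketch of the trick; the item count and the sleep are placeholders, not part of the crawler.

import time

total = 50  # hypothetical number of items to process
for done in range(1, total + 1):
    time.sleep(0.02)  # stand-in for real work (fetching and parsing a page)
    # "\r" moves the cursor back to column 0 and end="" suppresses the
    # newline, so each print overwrites the previous percentage.
    print("\rProgress: {:.2f}%".format(done * 100 / total), end="")
print()  # move to a fresh line once the loop completes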


That concludes this demo of scraping online stock information with a Python crawler. We hope it gives you a useful reference, and we hope you will continue to support 億速云.
