This article demonstrates how to save data scraped with Scrapy into MySQL. The content is concise and clearly organized, and should clear up any confusion; read on as we work through "how to save data scraped with Scrapy to MySQL" step by step.
1. Setting Up the Environment
1. Use XAMPP to install PHP, MySQL, and phpMyAdmin
2. Install Python 3 and pip
3. Install pymysql
4. (Windows steps omitted) I'm on a Mac: install Homebrew, then install Scrapy with brew
2. The Overall Workflow
1. Create the database and table, ready to receive data
2. Write the crawler's target URL and make the network request
3. Process the returned data to extract the specific fields
4. Save the extracted data to the database
2.1 Creating the Database
First create a database called scrapy, then create a table named article. We add a unique index on body here to prevent duplicate rows from being inserted.
--
-- Database: `scrapy`
--

-- --------------------------------------------------------

--
-- Table structure for table `article`
--

CREATE TABLE `article` (
  `id` int(11) NOT NULL,
  `body` varchar(200) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `author` varchar(50) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `createDate` datetime NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Indexes for table `article`
--
ALTER TABLE `article`
  ADD PRIMARY KEY (`id`),
  ADD UNIQUE KEY `uk_body` (`body`);

--
-- AUTO_INCREMENT for table `article` (needed because the pipeline
-- insert below never supplies an id)
--
ALTER TABLE `article`
  MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
Once that's run, the table is in place.
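If you want to confirm the table came out right before writing any spider code, a quick pymysql check works. This is a minimal sketch, assuming the root/123456 credentials that the settings section below uses; substitute your own.

import pymysql

# Connect with the same credentials configured later in settings.py.
connect = pymysql.connect(host='localhost', user='root', passwd='123456',
                          db='scrapy', charset='utf8')
with connect.cursor() as cursor:
    cursor.execute("describe article")
    for column in cursor.fetchall():
        print(column)  # one tuple per column: (name, type, null, key, ...)
connect.close()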
2.2 The Structure of the Crawler Project
quotes_spider.py is the core: it makes the network requests, processes the responses, and then hands the assembled data off to the pipelines, which do the actual work of saving to the database so the crawl itself isn't slowed down.
The other files are explained in the project-structure figure.
2.3 Writing the Target URL and Making the Request
import scrapy
from tutorial.items import TutorialItem


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        # The entry point: yield one Request per URL you want to crawl.
        url = 'http://quotes.toscrape.com/tag/humor/'
        yield scrapy.Request(url)

    def parse(self, response):
        for quote in response.css('div.quote'):
            # Create a fresh item per quote so yielded items don't share state.
            item = TutorialItem()
            item['body'] = quote.css('span.text::text').extract_first()
            item['author'] = quote.css('small.author::text').extract_first()
            yield item
        # Follow the pagination link, reusing parse as the callback.
        next_page = response.css('li.next a::attr("href")').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
start_requests is where you put the concrete URLs to crawl.
parse is the heart of the spider: it processes the returned response, yields the extracted data as items, and then defines the next page to crawl.
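If you're not sure what those CSS selectors return, you can try them against a static fragment with Scrapy's Selector. A small sketch; the HTML below is a simplified stand-in for the quotes.toscrape.com markup, not the real page.

from scrapy.selector import Selector

# Simplified stand-in for one quote block on quotes.toscrape.com.
sample = '''
<div class="quote">
  <span class="text">"A day without sunshine is like, you know, night."</span>
  <small class="author">Steve Martin</small>
</div>
'''

sel = Selector(text=sample)
for quote in sel.css('div.quote'):
    print(quote.css('span.text::text').extract_first())      # the quote body
    print(quote.css('small.author::text').extract_first())   # the author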
2.4 Items
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class TutorialItem(scrapy.Item):
    body = scrapy.Field()
    author = scrapy.Field()
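An Item behaves much like a dict with a fixed set of keys. A self-contained sketch of how TutorialItem is used (the example values are made up):

import scrapy

class TutorialItem(scrapy.Item):
    body = scrapy.Field()
    author = scrapy.Field()

item = TutorialItem(body='An example quote.', author='Example Author')
print(dict(item))      # {'body': 'An example quote.', 'author': 'Example Author'}
# item['year'] = 2018  # would raise KeyError: only declared Fields are allowed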
2.5 Pipelines
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import datetime
import logging

import pymysql

from tutorial import settings


class TutorialPipeline(object):
    def __init__(self):
        self.connect = pymysql.connect(
            host=settings.MYSQL_HOST,
            db=settings.MYSQL_DBNAME,
            user=settings.MYSQL_USER,
            passwd=settings.MYSQL_PASSWD,
            charset='utf8',
            use_unicode=True
        )
        self.cursor = self.connect.cursor()

    def process_item(self, item, spider):
        try:
            # The unique key on body turns a duplicate insert into a
            # harmless update instead of an error.
            self.cursor.execute(
                "insert into article (body, author, createDate) "
                "values (%s, %s, %s) "
                "on duplicate key update author=(author)",
                (item['body'], item['author'], datetime.datetime.now()))
            self.connect.commit()
        except Exception as error:
            logging.error(error)
        return item

    def close_spider(self, spider):
        self.connect.close()
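The on duplicate key update clause is what makes the uk_body index safe: a second insert with the same body becomes a no-op update instead of an error. A standalone sketch of that behavior, assuming the database and credentials above ('same quote' and the author names are made-up test values):

import pymysql
import datetime

connect = pymysql.connect(host='localhost', user='root', passwd='123456',
                          db='scrapy', charset='utf8')
sql = ("insert into article (body, author, createDate) values (%s, %s, %s) "
       "on duplicate key update author=(author)")
with connect.cursor() as cursor:
    cursor.execute(sql, ('same quote', 'author A', datetime.datetime.now()))
    cursor.execute(sql, ('same quote', 'author B', datetime.datetime.now()))
    connect.commit()
    cursor.execute("select author from article where body = %s", ('same quote',))
    print(cursor.fetchall())  # one row, still 'author A': the duplicate
                              # insert was turned into a no-op update
connect.close()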
2.6 Configuration
ITEM_PIPELINES = {
    'tutorial.pipelines.TutorialPipeline': 300
}

MYSQL_HOST = 'localhost'
MYSQL_DBNAME = 'scrapy'
MYSQL_USER = 'root'
MYSQL_PASSWD = '123456'
MYSQL_PORT = 3306
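One note: the pipeline above reads these values by importing the settings module directly, which works but couples the pipeline to this project. Scrapy's more conventional route is the from_crawler hook; a sketch of that alternative, not what the tutorial code above uses:

import pymysql

class TutorialPipeline(object):
    def __init__(self, host, db, user, passwd):
        self.connect = pymysql.connect(host=host, db=db, user=user,
                                       passwd=passwd, charset='utf8')

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this with the running crawler; crawler.settings
        # resolves the values defined in settings.py.
        s = crawler.settings
        return cls(s['MYSQL_HOST'], s['MYSQL_DBNAME'],
                   s['MYSQL_USER'], s['MYSQL_PASSWD'])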
3. Running the Crawler
scrapy crawl quotes
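Once the crawl finishes, you can check that the rows actually landed. A quick count query, reusing the credentials above:

import pymysql

connect = pymysql.connect(host='localhost', user='root', passwd='123456',
                          db='scrapy', charset='utf8')
with connect.cursor() as cursor:
    cursor.execute("select count(*) from article")
    print(cursor.fetchone()[0])  # number of quotes saved
connect.close()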
That's everything in "how to save data scraped with Scrapy to MySQL". Thanks for reading! Hopefully you now have a solid grasp of the process and the content shared here proves useful. To learn more, follow the 億速云 industry news channel.