
Spiders & Scraping

Model of Scrapy Architecture

Basic project outline

Aim: Set up and run a web scraper using Scrapy.

Links with other projects

LAMP server - display results

Hardware

  • Raspberry Pi 3B+
  • USB cable
  • Cat 5/6 RJ45 network cable
  • 8 TB NAS drive
  • 5 TB NAS drive

Software build

Raspbian Stretch
Motioneye

Key Steps/Milestones



Future opportunities


Special thanks and credits



Recommended spider project tutorials

Architecture overview

This document describes the architecture of Scrapy and how its components interact.

Overview

The following diagram shows an overview of the Scrapy architecture with its components and an outline of the data flow that takes place inside the system (shown by the red arrows). A brief description of the components is included below with links for more detailed information about them. The data flow is also described below.

Data flow

Scrapy architecture

The data flow in Scrapy is controlled by the execution engine, and goes like this:

  1. The Engine gets the initial Requests to crawl from the Spider.
  2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
  3. The Scheduler returns the next Requests to the Engine.
  4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
  5. Once the page finishes downloading the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
  6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
  7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
  8. The Engine sends processed items to Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
  9. The process repeats (from step 1) until there are no more requests from the Scheduler.

Components

Scrapy Engine

The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data Flow section above for more details.

Scheduler

The Scheduler receives requests from the engine and enqueues them, feeding them back to the engine later when the engine asks for them.

Downloader

The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.

Spiders

Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them or additional requests to follow. For more information see Spiders.
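
For example, a minimal spider (a sketch only; the site and CSS selectors are those used in the official Scrapy tutorial, not part of this project) yields items for the current page plus new Requests for the engine to follow:

    import scrapy


    class QuoteSpider(scrapy.Spider):
        """Minimal spider: parse a response, yield items and follow-up requests."""
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]  # Scrapy tutorial sandbox site

        def parse(self, response):
            # Extract items from the current page.
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
            # Follow pagination; each yielded Request goes back to the engine.
            next_page = response.css("li.next a::attr(href)").get()
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse)

It can be run without a full project using scrapy runspider quotes_spider.py -o quotes.csv (the filenames are arbitrary).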

Item Pipeline

The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.
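
As an illustration, a pipeline is just a class with a process_item() method; the class and field names below are made up for the example, and it would be switched on through the ITEM_PIPELINES setting.

    from scrapy.exceptions import DropItem


    class RequireTitlePipeline:
        """Drop any scraped item that is missing a 'title' field."""

        def process_item(self, item, spider):
            if not item.get("title"):
                raise DropItem(f"Missing title in {item!r}")
            return item

    # In settings.py (the module path and priority are examples only):
    # ITEM_PIPELINES = {"myproject.pipelines.RequireTitlePipeline": 300}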

Downloader middlewares

Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests when they pass from the Engine to the Downloader, and responses that pass from Downloader to the Engine.

Use a Downloader middleware if you need to do one of the following:

  • process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
  • change received response before passing it to a spider;
  • send a new Request instead of passing received response to a spider;
  • pass response to a spider without fetching a web page;
  • silently drop some requests.

For more information see Downloader Middleware.
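
As a sketch (the class name and header value are invented for the example), a downloader middleware implements process_request() and/or process_response() and is enabled through the DOWNLOADER_MIDDLEWARES setting:

    class CustomHeadersMiddleware:
        """Adjust requests on their way to the Downloader and inspect responses coming back."""

        def process_request(self, request, spider):
            # Returning None lets the request continue through the remaining middlewares.
            request.headers.setdefault("User-Agent", "scrapy-pi-project (educational)")
            return None

        def process_response(self, request, response, spider):
            if response.status != 200:
                spider.logger.info("Got status %s for %s", response.status, response.url)
            return response

    # In settings.py (module path and priority are examples only):
    # DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.CustomHeadersMiddleware": 543}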

Spider middlewares

Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests).

Use a Spider middleware if you need to:

  • post-process output of spider callbacks – change/add/remove requests or items;
  • post-process start_requests;
  • handle spider exceptions;
  • call errback instead of callback for some of the requests based on response content.

For more information see Spider Middleware.
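
For instance, a spider middleware can post-process everything a callback yields before it reaches the engine. The sketch below (hypothetical class and field names) passes Requests through untouched and drops items that lack a 'url' field; it would be enabled via the SPIDER_MIDDLEWARES setting.

    from scrapy import Request


    class DropItemsWithoutUrlMiddleware:
        """Filter the output of spider callbacks before the engine sees it."""

        def process_spider_output(self, response, result, spider):
            for element in result:
                # Requests pass straight through; items must carry a 'url' field.
                if isinstance(element, Request) or element.get("url"):
                    yield element
                else:
                    spider.logger.debug("Dropping item without url from %s", response.url)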

Event-driven networking

Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it is implemented using non-blocking (aka asynchronous) code for concurrency.

For more information about asynchronous programming and Twisted, see the Twisted project documentation.
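
In practice this means a crawl can also be launched from an ordinary Python script rather than the scrapy command; CrawlerProcess starts and stops the Twisted reactor for you. A minimal sketch (the spider and URL are placeholders):

    import scrapy
    from scrapy.crawler import CrawlerProcess


    class PingSpider(scrapy.Spider):
        """Tiny spider used only to demonstrate running a crawl from a script."""
        name = "ping"
        start_urls = ["https://example.com/"]  # placeholder URL

        def parse(self, response):
            yield {"url": response.url, "status": response.status}


    process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
    process.crawl(PingSpider)
    process.start()  # blocks here while the Twisted reactor runs the crawl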

Custom spiders


What is it trying to do?

Collect data

Save it on a NAS drive

1. Run the spider to send requests to crawl to the engine.

How? Python code, as sketched below.
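
A minimal sketch of this step, assuming a Wikipedia article as the target (the URL is only an example): the spider's start_requests() is what hands the initial Requests to the engine.

    import scrapy


    class WikiSectionSpider(scrapy.Spider):
        name = "wiki_sections"

        def start_requests(self):
            # The engine calls this and takes the yielded Requests for scheduling.
            urls = ["https://en.wikipedia.org/wiki/Web_scraping"]  # example page only
            for url in urls:
                yield scrapy.Request(url=url, callback=self.parse)

        def parse(self, response):
            self.logger.info("Fetched %s", response.url)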

2. The engine schedules the requests in the scheduler.

Python:
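
This step happens inside Scrapy itself, so no extra code is needed to enqueue requests; from Python you can only influence how the scheduler is fed, for example through settings and per-request priorities. A hedged sketch with arbitrary example values:

    import scrapy


    class PoliteSpider(scrapy.Spider):
        name = "polite"
        allowed_domains = ["example.com"]      # placeholder domain
        start_urls = ["https://example.com/"]  # placeholder URL

        # Settings that shape how queued requests are released to the downloader.
        custom_settings = {
            "CONCURRENT_REQUESTS": 4,  # how many requests may be in flight at once
            "DOWNLOAD_DELAY": 1.0,     # seconds to wait between requests
        }

        def parse(self, response):
            # Higher-priority requests are returned by the scheduler ahead of lower ones.
            for href in response.css("a::attr(href)").getall():
                yield response.follow(href, callback=self.parse, priority=10)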

 

 

Purpose of spider:

Simple: go to a wiki page, return the section headings and URL links, and save them to a CSV file.
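
A sketch of such a spider, assuming a Wikipedia-style page (the URL and CSS selectors are assumptions about the target page, not tested against it):

    import scrapy


    class WikiHeadingsSpider(scrapy.Spider):
        """Collect section headings and outgoing links from one wiki page."""
        name = "wiki_headings"
        start_urls = ["https://en.wikipedia.org/wiki/Web_scraping"]  # example target page

        def parse(self, response):
            # Section headings on MediaWiki pages are usually <h2>/<h3> elements.
            for heading in response.css("h2, h3"):
                text = " ".join(heading.css("::text").getall()).strip()
                if text:
                    yield {"type": "heading", "value": text, "page": response.url}

            # Outgoing links from the page.
            for href in response.css("a::attr(href)").getall():
                yield {"type": "link", "value": response.urljoin(href), "page": response.url}

Running it with scrapy runspider wiki_headings.py -o headings.csv writes each yielded dict as one row of the CSV file.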

 


The following is adapted from https://doc.scrapy.org/en/latest/intro/tutorial.html 

 


1. Create a spider with the configuration defined below, then run it to send requests to crawl.

Code:
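
A sketch adapted from that tutorial pattern, with the export configuration carried on the spider itself (the FEEDS setting needs Scrapy 2.1 or newer; the URL and the NAS path are assumptions):

    import scrapy


    class ConfiguredWikiSpider(scrapy.Spider):
        name = "configured_wiki"
        start_urls = ["https://en.wikipedia.org/wiki/Web_scraping"]  # example page

        # Configuration defined on the spider: polite crawling plus CSV export.
        custom_settings = {
            "ROBOTSTXT_OBEY": True,
            "DOWNLOAD_DELAY": 1.0,
            "FEEDS": {
                "/mnt/nas/scrapy/headings.csv": {"format": "csv"},  # assumed NAS mount point
            },
        }

        def parse(self, response):
            for heading in response.css("h2, h3"):
                text = " ".join(heading.css("::text").getall()).strip()
                if text:
                    yield {"heading": text, "url": response.url}

Running scrapy runspider configured_wiki.py (or scrapy crawl configured_wiki inside a project) kicks off step 1 of the data flow: the engine pulls these initial Requests from the spider.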

2. 

Scrapy

Aim

  • Use automated bots to gather, format, and distribute information.
  • Use output from intelligent automated machines as input for Scrapy to gather information.
  • Use information from Scrapy as an input into an intelligent machine.
  • Use improvement algorithms.
  • Get Scrapy to find improvement algorithms and order the data into a set that Scrapy can interpret.

Scrapy Schema

Functionality

Stage 1 I want it to

Hardware Requirements

  • Raspberry Pi 3B+
  • USB cable
  • Cat 5/6 RJ45 network cable
  • 8 TB NAS drive
  • 5 TB NAS drive

Software

Raspbian Stretch and Scrapy (which build is still to be decided).

Stage 1

Load the operating system and software; run generic spiders (see the sketch below).
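
One way to read "generic spiders" is Scrapy's CrawlSpider, which follows links by rule instead of hand-written callbacks; a sketch with placeholder domain and rules:

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor


    class GenericCrawler(CrawlSpider):
        """Follow every internal link up to DEPTH_LIMIT and record page titles."""
        name = "generic_crawler"
        allowed_domains = ["example.com"]        # placeholder domain
        start_urls = ["https://example.com/"]
        custom_settings = {"DEPTH_LIMIT": 2}

        rules = (
            # Follow links within the allowed domain and hand each page to parse_item().
            Rule(LinkExtractor(), callback="parse_item", follow=True),
        )

        def parse_item(self, response):
            yield {"url": response.url, "title": response.css("title::text").get()}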

Stage 2

Link to the LAMP server and get the results displayed there.

Stage 3

Links with other projects

LAMP Server

Future opportunities
