import arche
from arche import *

The only required parameter is source, which accepts various inputs; see the signature (?Arche) or the examples below.

Data Sources

Arche, together with the pandas API, can read data from various places and formats.

*.json as iterable

import json
with open("data/items_books_1.json") as f:
    raw_items = json.load(f)
a = Arche(source=raw_items)

*.jl.gz and pandas API

import pandas as pd

url = ""
df = pd.read_json(url, lines=True)

jsonlines and json are not memory efficient if the data contains nested objects. If other formats are not available, you can read compressed jsonlines in chunks.

chunks = pd.read_json(url, lines=True, chunksize=500)
dfs = [df for df in chunks]
df = pd.concat(dfs, sort=False)
df.shape
(1000, 5)
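As a self-contained illustration of chunked reading, the same pattern works with an in-memory buffer standing in for a real URL:

```python
import io

import pandas as pd

# Simulate a jsonlines source with an in-memory buffer (a stand-in for a URL).
jl = "\n".join('{"a": %d, "b": "x"}' % i for i in range(10))
chunks = pd.read_json(io.StringIO(jl), lines=True, chunksize=4)
df = pd.concat(chunks, sort=False)
print(df.shape)  # (10, 2)
```

Each chunk is a DataFrame of up to chunksize rows; concatenating them reconstructs the full dataset without ever holding the raw text and the parsed frame in memory at once.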

Uncompressed jsonlines files, however, need to be downloaded first.

raw_json = ""
chunks = pd.read_json(raw_json, lines=True, chunksize=500)
dfs = [df for df in chunks]
df = pd.concat(dfs, sort=False)
df.shape
(1000, 5)
a = Arche(source=df)
Pandas stores `NA` (missing) data differently, which might affect schema validation. If this matters to you, consider passing raw data in array-like types. For more details, see the pandas documentation on working with missing data.
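A minimal sketch of the difference, with hypothetical items; note how a field that is simply absent in the raw data becomes a float NaN inside a DataFrame:

```python
import pandas as pd

# Two hypothetical items; the second one lacks "price".
raw_items = [{"title": "A", "price": "£10.00"}, {"title": "B"}]
df = pd.DataFrame(raw_items)

# In the raw data, "price" is simply missing from the second item; in the
# DataFrame it is NaN (a float), which a {"type": "string"} schema check
# may treat differently from an absent key.
print(df["price"].tolist())
```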

Scrapy Cloud keys

You can access data from a job at Scrapy Cloud using the job key.

Note: To access Scrapy Cloud data, you need to set your Scrapinghub API key in the SH_APIKEY environment variable.
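For example, the variable can be set from Python before creating the Arche instance (the value below is a placeholder, not a real key):

```python
import os

# Placeholder value; substitute your real Scrapinghub API key.
os.environ["SH_APIKEY"] = "your-api-key"
```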

a = Arche(source="381798/1/3")

To get a full report of the data, Arche provides the report_all() method.

a.report_all()
This method runs a determined set of rules. Which rules are executed depends on the input parameters: for example, if we have both source and target, then comparison rules are executed too. Some rules are not part of report_all(); see Rules for more information. Validation can be improved by adding a JSON schema, so let's infer one from the data we already have.

JSON Schema

{'$schema': '',
 'additionalProperties': False,
 'definitions': {'url': {'pattern': '^https?://(www\\.)?[a-z0-9.-]*\\.[a-z]{2,}([^<>%\\x20\\x00-\\x1f\\x7F]|%[0-9a-fA-F]{2})*$'}},
 'properties': {'category': {'type': 'string'},
                'description': {'type': 'string'},
                'price': {'type': 'string'},
                'title': {'type': 'string'}},
 'required': ['category', 'description', 'price', 'title'],
 'type': 'object'}
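To make the semantics concrete, here is a hand-rolled sketch of what the required and type keywords enforce (an illustration only; Arche itself uses a full JSON Schema validator, and the item below is hypothetical):

```python
schema = {
    "required": ["category", "description", "price", "title"],
    "properties": {
        "category": {"type": "string"},
        "description": {"type": "string"},
        "price": {"type": "string"},
        "title": {"type": "string"},
    },
}

def check(item, schema):
    """Toy validator: report missing required fields and non-string values."""
    errors = []
    for field in schema["required"]:
        if field not in item:
            errors.append(f"missing required field: {field}")
    for field, rules in schema["properties"].items():
        if field in item and rules["type"] == "string" and not isinstance(item[field], str):
            errors.append(f"{field} is not a string")
    return errors

# A malformed item: two fields missing, price is a number instead of a string.
print(check({"category": "Travel", "price": 45.17}, schema))
```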

By itself a basic schema is not very helpful, but you can update it.

title                                              price   category         description
It's Only the Himalayas                            £45.17  Travel           “Wherever you go, whatever you do, just . . . ...
Libertarianism for Beginners                       £51.33  Politics         Libertarianism isn't about winning elections; ...
Mesaerion: The Best Science Fiction Stories 18...  £37.59  Science Fiction  Andrew Barger, award-winning author and engine...
Olio                                               £23.88  Poetry           Part fact, part fiction, Tyehimba Jess's much ...
Our Band Could Be Your Life: Scenes from the A...  £57.25  Music            This is the never-before-told story of the mus...

Looks like price can be checked with a regex. Let's also add a category tag, which helps to see the distribution of categorical data, and a unique tag on title to ensure there are no duplicates.

a.schema = {
    "$schema": "",
    "definitions": {
        "float": {
            "pattern": "^-?[0-9]+\\.[0-9]{2}$"
        },
        "url": {
            "pattern": "^https?://(www\\.)?[a-z0-9.-]*\\.[a-z]{2,}([^<>%\\x20\\x00-\\x1f\\x7F]|%[0-9a-fA-F]{2})*$"
        }
    },
    "additionalProperties": False,
    "type": "object",
    "properties": {
        "category": {"type": "string", "tag": ["category"]},
        "price": {"type": "string", "pattern": "^£\\d{2}\\.\\d{2}$"},
        "_type": {"type": "string"},
        "description": {"type": "string"},
        "title": {"type": "string", "tag": ["unique"]},
        "_key": {"type": "string"}
    },
    "required": [
        "category",
        "description",
        "price",
        "title"
    ]
}
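The price pattern can be sanity-checked with Python's re module before running the full report:

```python
import re

# The same pattern as in the schema above: a pound sign, exactly two digits,
# a dot, and two more digits.
price_pattern = r"^£\d{2}\.\d{2}$"
print(bool(re.match(price_pattern, "£45.17")))  # True
print(bool(re.match(price_pattern, "£5.17")))   # False: only one digit before the dot
```

Note that this strict form rejects prices outside £10.00–£99.99; loosen the quantifiers (e.g. `\d{1,4}`) if your data allows a wider range.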

Or, if your job is really big, you can use an almost 100x faster backend.


We already got something! Let’s execute the whole thing again to see how category tag works.


Accessing Results Data

a.report.results.keys()
dict_keys(['Job Outcome', 'Job Errors', 'Garbage Symbols', 'Fields Coverage', 'Categories', 'JSON Schema Validation', 'Tags', 'Compare Price Was And Now', 'Duplicates', 'Coverage For Scraped Categories'])

a.report.results.get("Coverage For Scraped Categories").stats
[Novels                  1
 Erotica                 1
 Suspense                1
 Short Stories           1
 Adult Fiction           1
 Cultural                1
 Academic                1
 Paranormal              1
 Crime                   1
 Parenting               1
 Historical              2
 Contemporary            3
 Christian               3
 Politics                3
 Health                  4
 Biography               5
 Self Help               5
 Sports and Games        5
 New Adult               6
 Christian Fiction       6
 Spirituality            6
 Religion                7
 Psychology              7
 Art                     8
 Autobiography           9
 Humor                  10
 Travel                 11
 Thriller               11
 Philosophy             11
 Business               12
 Music                  13
 Science                14
 Science Fiction        16
 Womens Fiction         17
 Horror                 17
 History                18
 Poetry                 19
 Classics               19
 Historical Fiction     26
 Childrens              29
 Food and Drink         30
 Mystery                32
 Romance                35
 Fantasy                48
 Young Adult            54
 Fiction                65
 Add a comment          67
 Sequential Art         75
 Nonfiction            110
 Default               152
 Name: category, dtype: int64]
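The stats above are a pandas Series of ascending category counts; the same shape of output can be produced with value_counts (a tiny hypothetical sample here, not the real scraped data):

```python
import pandas as pd

# A tiny hypothetical sample of scraped categories.
df = pd.DataFrame({"category": ["Poetry", "Travel", "Poetry", "Music"]})
counts = df["category"].value_counts(ascending=True)
print(counts)
```

This is handy for drilling into a single rule's result, e.g. spotting suspicious buckets such as "Add a comment" or an oversized "Default" category in the output above.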