Presentation on exploring JSON data using PostgreSQL


Incremental Schema Discovery

or

JSON Wrangling

by

@jasonaowen

https://jasonaowen.github.io/pdxpug-incremental-schema
https://github.com/jasonaowen/pdxpug-incremental-schema

Query Plan

  • Dataset introduction
  • Loading data into Postgres
  • Poking at JSON objects
  • Correcting data types
  • Incremental normalization
  • Dealing with JSON arrays
  • Comparing with Google BigQuery

Dataset Introduction

https://developer.github.com/v3/

GitHub Archive

https://www.githubarchive.org/

wget + cron

Data format

Gzipped, Line-Delimited JSON

February 18th, 5pm Pacific time: http://data.githubarchive.org/2016-02-19-01.json.gz

Loading into Postgres

Python is pretty cool

https://github.com/jasonaowen/document-to-postgres

Expected format

CREATE TABLE raw_events (
  id SERIAL PRIMARY KEY,
  event JSONB NOT NULL
);
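For illustration, this is the shape of row the loader ends up producing; the JSON value here is a stand-in, not real archive data:

INSERT INTO raw_events (event)
VALUES ('{"type": "PushEvent", "public": true}'::jsonb);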

Not lossless

JSON allows the escape sequence \u0000 (an escaped NUL byte), but if you try to look at a JSON value containing one:

unsupported Unicode escape sequence
DETAIL:  \u0000 cannot be converted to text.
CONTEXT:  JSON data, line 1: ...yload": {"action": "created", "comment": {"body":...

JSONB, too

The jsonb type also rejects \u0000 outright, because it cannot be represented in PostgreSQL's text type.

http://www.postgresql.org/docs/9.5/static/datatype-json.html

Need to filter it

zcat 2016-02-19-01.json.gz | grep '\\u0000' | wc -l

The double backslash matters.

Other approaches are valid here: you could halt the import, or substitute a placeholder using sed.

Loading

zcat 2016-02-19-01.json.gz |
  grep -v '\\u0000' |
  python json-to-postgres.py owenja pdxpug raw_events event

Size comparison

SELECT nspname || '.' || relname AS "relation",
       pg_size_pretty(pg_relation_size(C.oid)) AS "size"
  FROM pg_class C
    LEFT JOIN pg_namespace N
      ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC;

https://wiki.postgresql.org/wiki/Disk_Usage

Will someone write this down?

Keep a copy for comparison

CREATE TABLE events (
  id INTEGER PRIMARY KEY
    REFERENCES raw_events(id),
  event JSONB NOT NULL
);

INSERT INTO events (id, event)
SELECT id, event
FROM raw_events;

What's in a JSON?

Scalars:

  • Boolean
  • Number
  • String
  • null

Containers:

  • Array
  • Object
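A quick way to see these types from SQL (illustrative; we'll lean on jsonb_typeof below):

SELECT jsonb_typeof('true'::jsonb),  -- boolean
       jsonb_typeof('1'::jsonb),     -- number
       jsonb_typeof('"hi"'::jsonb),  -- string
       jsonb_typeof('null'::jsonb),  -- null
       jsonb_typeof('[]'::jsonb),    -- array
       jsonb_typeof('{}'::jsonb);    -- object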

What's in an object?

{
  "created_at": "2011-09-06T17:26:27Z",
  "id": 12345
}
Keys and values. Keys are always strings; values can be any JSON type. Keys are not guaranteed to be unique!
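The -> operator pulls a value out by key, still as jsonb (illustrative, using the object above):

SELECT ('{"created_at": "2011-09-06T17:26:27Z", "id": 12345}'::jsonb) -> 'id';
-- 12345, with jsonb_typeof = 'number'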

What's in our database?

SELECT jsonb_typeof(event),
       COUNT(*)
FROM events
GROUP BY jsonb_typeof(event);

We'll be doing that a lot

CREATE FUNCTION type_count(in table_name text, in column_name text)
  RETURNS TABLE (json_type text, count bigint) AS
  $func$
    BEGIN
      RETURN QUERY EXECUTE format(
        'SELECT jsonb_typeof("%s"),
                COUNT(*)
         FROM "%s"
         GROUP BY jsonb_typeof("%s")',
      column_name, table_name, column_name);
    END
  $func$ LANGUAGE plpgsql STABLE;

select * from type_count('events', 'event');

What are those objects?

SELECT jsonb_object_keys(event),
       COUNT(*)
FROM events
GROUP BY jsonb_object_keys(event);

We'll be doing that a lot, too

CREATE FUNCTION key_count(in table_name text, in column_name text)
  RETURNS TABLE (key text, count bigint) AS
  $func$
    BEGIN
      RETURN QUERY EXECUTE format(
        'SELECT jsonb_object_keys("%s"),
                COUNT(*)
         FROM "%s"
         GROUP BY jsonb_object_keys("%s")',
      column_name, table_name, column_name);
    END
  $func$ LANGUAGE plpgsql STABLE;

select * from key_count('events', 'event');

Getting data out of a JSON object

Now that we know what's in those objects, let's get them into columns.

So much DDL

ALTER TABLE events
ADD COLUMN actor JSONB,
ADD COLUMN created_at JSONB,
ADD COLUMN github_id JSONB,
ADD COLUMN org JSONB,
ADD COLUMN payload JSONB,
ADD COLUMN public JSONB,
ADD COLUMN repo JSONB,
ADD COLUMN type JSONB;
UPDATE events
SET actor = event->'actor',
    created_at = event->'created_at',
    github_id = event->'id',
    org = event->'org',
    payload = event->'payload',
    public = event->'public',
    repo = event->'repo',
    type = event->'type';

We might do that a few times, too

CREATE FUNCTION explode_json_column(
  in table_name text,
  in column_name text
) RETURNS void AS
  $func$
    DECLARE
      record RECORD;
    BEGIN
      FOR record IN SELECT key, count
                    FROM key_count(table_name, column_name) LOOP
        EXECUTE format('ALTER TABLE "%s" ADD COLUMN "%s" JSONB',
                       table_name, record.key);
        EXECUTE format('UPDATE "%s" SET "%s" = "%s"->''%s''',
                       table_name, record.key, column_name, record.key);
      END LOOP;
    END
  $func$ LANGUAGE plpgsql;

Correcting scalar data types

PostgreSQL won't automatically or implicitly convert between the JSON[B] types and other scalar types.
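For a concrete (illustrative) look at what that means, even an explicit cast fails in this version:

SELECT ('42'::jsonb)::INTEGER;
ERROR:  cannot cast type jsonb to integer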

TEXT is the great equalizer

SELECT * FROM type_count('events', 'created_at');
SELECT created_at FROM events LIMIT 5;

ALTER TABLE events
ALTER COLUMN created_at
  TYPE TIMESTAMP
  USING created_at::TEXT::TIMESTAMP;

And again

SELECT * FROM type_count('events', 'public');
SELECT public FROM events LIMIT 5;

ALTER TABLE events
ALTER COLUMN public
  TYPE BOOLEAN
  USING public::TEXT::BOOLEAN;
The public field in this dataset is always true; you could probably just drop it.

And again...

SELECT * FROM type_count('events', 'type');
SELECT type FROM events LIMIT 5;

ALTER TABLE events
ALTER COLUMN type
  TYPE TEXT
  USING type::TEXT::TEXT;

What're those quotes doing there?

SELECT type FROM events LIMIT 5;

UPDATE events
SET type = TRIM(BOTH '"' FROM type);
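An aside: the ->> operator returns values as text with the quotes already stripped, and #>> '{}' does the same for a jsonb column holding a bare scalar, so either can avoid the TRIM step (illustrative):

SELECT event->>'type' FROM events LIMIT 5;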

Maybe let's automate

CREATE FUNCTION set_concrete_type(
  in table_name text,
  in column_name text,
  in type_name text
) RETURNS void AS
  $func$
  BEGIN
    EXECUTE format(
      'ALTER TABLE "%s" ALTER COLUMN "%s" TYPE %s
         USING TRIM(BOTH ''"'' FROM "%s"::TEXT)::%s',
      table_name, column_name, type_name, column_name, type_name);
  END
  $func$ LANGUAGE plpgsql;

Last scalar

SELECT * FROM type_count('events', 'github_id');
SELECT github_id FROM events LIMIT 5;

SELECT set_concrete_type('events', 'github_id', 'BIGINT');

Extracting a table

Let's normalize a little.

Actors gonna act

SELECT * FROM type_count('events', 'actor');

SELECT COUNT(DISTINCT actor) FROM events;

SELECT * FROM key_count('events', 'actor');

Give them their own place

CREATE TABLE actors (
  actor_id SERIAL PRIMARY KEY,
  payload JSONB UNIQUE NOT NULL
);

INSERT INTO actors (payload)
SELECT DISTINCT actor FROM events;

But keep them connected

ALTER TABLE events
  ADD COLUMN actor_id INTEGER
  REFERENCES actors (actor_id);

UPDATE events
  SET actor_id =
    (SELECT actor_id FROM actors WHERE actor = payload);
Without that UNIQUE constraint, this update would have taken far longer.
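Before the next step locks this in with NOT NULL, a quick sanity check (illustrative) that every event found its actor:

SELECT COUNT(*) FROM events WHERE actor_id IS NULL;
-- should be 0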

Finish the move

ALTER TABLE events
  ALTER COLUMN actor_id SET NOT NULL,
  DROP COLUMN actor;

Now we can start working on the actors table

SELECT explode_json_column('actors', 'payload');

SELECT * FROM type_count('actors', 'avatar_url');
SELECT set_concrete_type('actors', 'avatar_url', 'TEXT');

SELECT * FROM type_count('actors', 'id');
SELECT set_concrete_type('actors', 'id', 'BIGINT');
SELECT * FROM type_count('actors', 'login');
SELECT set_concrete_type('actors', 'login', 'TEXT');

SELECT * FROM type_count('actors', 'url');
SELECT set_concrete_type('actors', 'url', 'TEXT');

Clean up data

SELECT * FROM type_count('actors', 'gravatar_id');
SELECT DISTINCT gravatar_id FROM actors;

ALTER TABLE actors DROP COLUMN gravatar_id;

If it's always empty, do we still need it?

Finish up with actors

ALTER TABLE actors DROP COLUMN payload;

So mechanical!

Polymorphism

The payload type is a little different.

SELECT * FROM key_count('events', 'payload');

Varies with type

SELECT type,
       jsonb_object_keys(payload),
       COUNT(*)
FROM events
GROUP BY type, jsonb_object_keys(payload);
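To pick which type to tackle first, it also helps to see how the events are distributed (an illustrative query):

SELECT type, COUNT(*)
FROM events
GROUP BY type
ORDER BY COUNT(*) DESC;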

Let's do PushEvent

CREATE TABLE pushes (
  id SERIAL PRIMARY KEY,
  event_id INTEGER NOT NULL
    REFERENCES events(id),
  payload JSONB NOT NULL);

INSERT INTO pushes (event_id, payload)
SELECT id, payload FROM events WHERE type = 'PushEvent';

Note that the foreign key points the other way!

Most of these are mechanical

SELECT * FROM key_count('pushes', 'payload');
SELECT explode_json_column('pushes', 'payload');
SELECT set_concrete_type('pushes', 'before', 'TEXT');
SELECT set_concrete_type('pushes', 'distinct_size', 'INTEGER');
SELECT set_concrete_type('pushes', 'head', 'TEXT');
SELECT set_concrete_type('pushes', 'push_id', 'BIGINT');
SELECT set_concrete_type('pushes', 'ref', 'TEXT');
SELECT set_concrete_type('pushes', 'size', 'INTEGER');

But what's that commits column?

SELECT * FROM type_count('pushes', 'commits');
SELECT commits->0 FROM pushes WHERE id = 1;
SELECT jsonb_array_elements(commits) FROM pushes WHERE id = 1;

Sounds like a link table

CREATE TABLE push_commits (
  id SERIAL PRIMARY KEY,
  push_id INTEGER NOT NULL
    REFERENCES pushes(id),
  commit JSONB NOT NULL
);

INSERT INTO push_commits (push_id, commit)
SELECT id, jsonb_array_elements(commits) FROM pushes;
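A quick sanity check (illustrative): the link table should have one row per array element.

SELECT (SELECT SUM(jsonb_array_length(commits)) FROM pushes) AS array_elements,
       (SELECT COUNT(*) FROM push_commits) AS link_rows;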

Link target

CREATE TABLE commits (
  id SERIAL PRIMARY KEY,
  payload JSONB NOT NULL
);

INSERT INTO commits (payload)
SELECT DISTINCT commit FROM push_commits;

Large values are hard

ALTER TABLE push_commits
  ADD COLUMN commit_id INTEGER
    REFERENCES commits (id);

CREATE UNIQUE INDEX ON commits (payload);
That first index fails when a payload is too large to fit in a btree index entry, so hash the value and index that instead:
CREATE UNIQUE INDEX ON commits (MD5(payload::TEXT));
CREATE INDEX ON push_commits (MD5(commit::TEXT));
We don't need a unique index on push_commits, because the same commit can be in multiple pushes.
UPDATE push_commits SET commit_id =
  (SELECT id FROM commits WHERE MD5(commit::TEXT) = MD5(payload::TEXT));

ALTER TABLE push_commits
  ALTER COLUMN commit_id SET NOT NULL,
  DROP COLUMN commit,
  DROP COLUMN id,
  ADD PRIMARY KEY (push_id, commit_id);

Finish with commits

SELECT * FROM type_count('commits', 'payload');
SELECT * FROM key_count('commits', 'payload');
SELECT explode_json_column('commits', 'payload');
SELECT set_concrete_type('commits', 'distinct', 'BOOLEAN');
SELECT set_concrete_type('commits', 'message', 'TEXT');
SELECT set_concrete_type('commits', 'sha', 'TEXT');
SELECT set_concrete_type('commits', 'url', 'TEXT');
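With the types in place, the normalized tables join back together nicely (an illustrative query; the column names come from the explodes above):

SELECT p.ref, c.sha, LEFT(c.message, 40) AS message
FROM pushes p
JOIN push_commits pc ON pc.push_id = p.id
JOIN commits c ON c.id = pc.commit_id
LIMIT 5;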

Good break point

We could extract the authors table here.

SELECT * FROM type_count('commits', 'author');
SELECT * FROM key_count('commits', 'author');

But we won't, as it doesn't cover anything new.

Clean up

ALTER TABLE commits DROP COLUMN payload;
ALTER TABLE pushes DROP COLUMN payload,
  DROP COLUMN commits;
UPDATE events SET payload = NULL WHERE type = 'PushEvent';

Size comparison

SELECT nspname || '.' || relname AS "relation",
       pg_size_pretty(pg_relation_size(C.oid)) AS "size"
  FROM pg_class C
    LEFT JOIN pg_namespace N
      ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC;

https://wiki.postgresql.org/wiki/Disk_Usage

Google BigQuery

This is a lot of data.

Sizes: 2016-01-15-15

  • gzipped JSON: 128507688 bytes
  • JSONB: ~67 MB
  • somewhat normalized: ~193 MB

And that's just for an hour!

https://bigquery.cloud.google.com/table/githubarchive:day.yesterday

Thank you

@jasonaowen

https://jasonaowen.github.io/pdxpug-incremental-schema
https://github.com/jasonaowen/pdxpug-incremental-schema
