
PostgreSQL: Backup a large database to disk

I really like database dumps with complete column inserts. You can read them, browse them, search for specific data and even manipulate them. The simplest way to create such a readable backup is:

pg_dump --column-inserts --inserts --if-exists --clean --create -f FILENAME.sql DATABASENAME
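
With --column-inserts, every row ends up as a standalone INSERT statement with explicit column names, which is what makes the dump so easy to read and grep. A line in the dump might look roughly like this (table and column names are of course just placeholders):

INSERT INTO public.customers (id, name, created_at) VALUES (1, 'Alice', '2020-01-01');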

There is one drawback: the files are large compared to the actual information stored, and both import and export are rather slow. But there is a backup format that is both compressed and fast. It’s called the “directory format”:

pg_dump -Fd -f DIRECTORYNAME -d DATABASENAME

This creates a directory called DIRECTORYNAME and dumps the content of DATABASENAME in a compressed format.
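
Because the directory format writes one file per table, it is also the only format that supports parallel dumps via the -j/--jobs option. The job count below is just an example; pick something that matches your CPU and I/O capacity:

pg_dump -Fd -j 4 -f DIRECTORYNAME -d DATABASENAME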
Getting the data back into the database is done with pg_restore:

pg_restore -C -d DATABASENAME DIRECTORYNAME

The -C option creates the database before importing; the database given with -d is only used for the initial connection from which the CREATE DATABASE statement is issued.
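
pg_restore can parallelize the import as well (again, the job count is just an example):

pg_restore -j 4 -C -d DATABASENAME DIRECTORYNAME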


PostgreSQL: Copy a table structure and data

Once again a short note about a nifty PostgreSQL feature you might not be aware of. It’s quite simple to copy the structure of a table with:

CREATE TABLE table2 (LIKE table1 INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
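
If you want to carry over everything the source table defines (defaults, constraints, indexes, comments and so on), there is also a shorthand:

CREATE TABLE table2 (LIKE table1 INCLUDING ALL);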

Now you can copy data with or without some sort of filter:

INSERT INTO table2 SELECT * FROM table1 LIMIT 100;
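
Or with an actual filter instead of a LIMIT; the created_at column here is just a made-up example:

INSERT INTO table2 SELECT * FROM table1 WHERE created_at > now() - interval '7 days';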

Hope that helps.


PostgreSQL: Convert an hstore column to jsonb

I’m involved in a tech scouting project for which I designed a PostgreSQL database schema using an hstore column to store additional, a priori unspecified data with every record in that table. For some historical reason I decided to use an hstore column instead of one of the much more versatile JSON column types.

Historical priorities change, and so they did in that project. Now I wanted to use that column as JSONB. As always in SQL, columns are changed with an ALTER TABLE … ALTER COLUMN statement. In this case the conversion of the content is a bit more complex, but giving PostgreSQL a hint about what we expect does the job.

First, let’s create a table to play with and insert one row of data (the hstore extension has to be enabled in the database):

CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE hstore_data (data hstore);
INSERT INTO hstore_data (data) VALUES ('key1 => "value1", key2 => "value2"');

Let’s see if it looks OK:

SELECT * FROM hstore_data;

Yields:

                data                
------------------------------------
 "key1"=>"value1", "key2"=>"value2"
(1 row)

Fine. The next step is the magic column conversion:

ALTER TABLE hstore_data ALTER COLUMN data TYPE JSONB USING CAST(data AS JSONB);

Another look:

SELECT * FROM hstore_data;

And we get:

                 data                 
--------------------------------------
 {"key1": "value1", "key2": "value2"}
(1 row)
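
As a quick sanity check that the column now really behaves as JSONB, the usual JSON operators work on it (the key name is taken from the test data above):

SELECT data->>'key1' FROM hstore_data;

This returns value1 for our test row.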

Mission accomplished.