SQL Question

How to ignore duplicate keys during COPY FROM in PostgreSQL

I have to load a large amount of data from a file into a PostgreSQL table. I know COPY does not support options like IGNORE or REPLACE the way MySQL does. Almost all posts on the web about this suggest the same thing: dump the data into a temp table and then do an INSERT ... SELECT ... WHERE NOT EXISTS ....
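For reference, that commonly suggested pattern looks roughly like this (the file path and the names main_table, tmp_table, and PK_field are placeholders):

COPY tmp_table FROM 'full/file/name/here';

INSERT INTO main_table
SELECT *
FROM tmp_table t
WHERE NOT EXISTS (
    SELECT 1
    FROM main_table m
    WHERE m.PK_field = t.PK_field
);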

This does not help in one case: when the file data itself contains duplicate primary keys.
Does anybody have an idea how to handle this in PostgreSQL?

P.S. I am doing this from a Java program, if that helps.

Answer

Use the same approach you described, but DELETE (or group, or otherwise modify) the rows with duplicate primary keys in the temp table before loading into the main table.

Something like:

BEGIN;

-- Clone the structure of the target table; with ON COMMIT DROP the
-- temp table disappears when the transaction ends, so everything
-- must run inside a single transaction.
CREATE TEMP TABLE tmp_table 
ON COMMIT DROP
AS
SELECT * 
FROM main_table
WITH NO DATA;

COPY tmp_table FROM 'full/file/name/here';

-- DISTINCT ON keeps exactly one row per primary key; the ORDER BY
-- must start with the DISTINCT ON expression, and the remaining
-- columns decide which of the duplicate rows survives.
INSERT INTO main_table
SELECT DISTINCT ON (PK_field) *
FROM tmp_table
ORDER BY PK_field, some_fields;

COMMIT;
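If you would rather DELETE the duplicate rows in the temp table instead of filtering with DISTINCT ON, one common PostgreSQL sketch uses the system column ctid (PK_field is again a placeholder):

-- Delete each row for which a duplicate with a larger ctid exists,
-- keeping exactly one physical row per primary-key value.
DELETE FROM tmp_table a
USING tmp_table b
WHERE a.PK_field = b.PK_field
  AND a.ctid < b.ctid;

After that, a plain INSERT INTO main_table SELECT * FROM tmp_table; is enough.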

Details: CREATE TABLE AS, COPY, DISTINCT ON
