How can I import this SQL dump?

Posted on 2012-03-27
Last Modified: 2012-04-26
Hi folks.  I have a firewall device that dumps some SQL info that I would like to import into another PostgreSQL server.  Attached is what gets dumped, but I get errors when importing it.  Can anyone tell me what I need to change here to be able to import this into another database?


-- PostgreSQL database dump

SET client_encoding = 'UTF8';
SET check_function_bodies = false;
SET client_min_messages = warning;

SET search_path = public, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = false;

-- Name: usage_stat_raw_10272011; Type: TABLE; Schema: public; Owner: postgres; Tablespace: 

CREATE TABLE usage_stat_raw_10272011 (
    usage_stat_id bigserial NOT NULL,
    interval_start_time timestamp without time zone DEFAULT now(),
    interval_end_time timestamp without time zone DEFAULT now(),
    report_id bigint,
    username character varying(65),
    content_category_type integer,
    ip_protocol character varying(20),
    total_packet_count bigint DEFAULT 0,
    total_block_count bigint DEFAULT 0,
    total_byte_count bigint DEFAULT 0,
    total_upstream_byte_count bigint DEFAULT 0,
    use_time bigint DEFAULT 0,
    total_downstream_byte_count bigint DEFAULT 0,
    full_report_stat integer,
    reporting_group integer DEFAULT 0,
    total_upstream_packet_count bigint DEFAULT 0,
    total_downstream_packet_count bigint DEFAULT 0,
    content_category_type_name character varying(20),
    filtering_group_number integer,
    firstname character varying(40) DEFAULT ''::character varying,
    lastname character varying(40) DEFAULT ''::character varying,
    filtering_group_name character varying(65) DEFAULT ''::character varying,
    interval_seconds bigint DEFAULT 0,
    aggregate_bandwidth integer DEFAULT 1,
    "location" character varying(60)

ALTER TABLE public.usage_stat_raw_10272011 OWNER TO postgres;

-- Name: usage_stat_raw_10272011_usage_stat_id_seq; Type: SEQUENCE SET; Schema: public; Owner: postgres

SELECT pg_catalog.setval(pg_catalog.pg_get_serial_sequence('usage_stat_raw_10272011', 'usage_stat_id'), 1674380, true);

-- Data for Name: usage_stat_raw_10272011; Type: TABLE DATA; Schema: public; Owner: postgres

COPY usage_stat_raw_10272011 (usage_stat_id, interval_start_time, interval_end_time, report_id, username, content_category_type, ip_protocol, total_packet_count, total_block_count, total_byte_count, total_upstream_byte_count, use_time, total_downstream_byte_count, full_report_stat, reporting_group, total_upstream_packet_count, total_downstream_packet_count, content_category_type_name, filtering_group_number, firstname, lastname, filtering_group_name, interval_seconds, aggregate_bandwidth, "location") FROM stdin;
1	2011-10-27 02:10:36.123041	2011-10-27 02:10:36.123041	0	*	1		1192	0	229642	149469	0	80173	0	0	668	524	All	1			NoInternet	3602	0	
2	2011-10-27 02:10:36.123041	2011-10-27 02:10:36.123041	0	*	1		1603	0	398948	168716	0	230232	0	0	805	798	All	1			NoInternet	3602	0
\.

-- Name: pk_usage_stat_raw_10272011_usage_stat_id; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: 

ALTER TABLE ONLY usage_stat_raw_10272011
    ADD CONSTRAINT pk_usage_stat_raw_10272011_usage_stat_id PRIMARY KEY (usage_stat_id);

-- Name: ix_usage_stat_raw_10272011_cct_un; Type: INDEX; Schema: public; Owner: postgres; Tablespace: 

CREATE INDEX ix_usage_stat_raw_10272011_cct_un ON usage_stat_raw_10272011 USING btree (content_category_type, username);

-- Name: ix_usage_stat_raw_10272011_cct_un_protocol; Type: INDEX; Schema: public; Owner: postgres; Tablespace: 

CREATE INDEX ix_usage_stat_raw_10272011_cct_un_protocol ON usage_stat_raw_10272011 USING btree (username, content_category_type, ip_protocol);

-- Name: ix_usage_stat_raw_10272011_content_category_type; Type: INDEX; Schema: public; Owner: postgres; Tablespace: 

CREATE INDEX ix_usage_stat_raw_10272011_content_category_type ON usage_stat_raw_10272011 USING btree (content_category_type);

-- Name: ix_usage_stat_raw_10272011_q1; Type: INDEX; Schema: public; Owner: postgres; Tablespace: 

CREATE INDEX ix_usage_stat_raw_10272011_q1 ON usage_stat_raw_10272011 USING btree (interval_start_time, full_report_stat, report_id, username, ip_protocol, content_category_type, total_byte_count);

-- Name: ix_usage_stat_raw_10272011_q2; Type: INDEX; Schema: public; Owner: postgres; Tablespace: 

CREATE INDEX ix_usage_stat_raw_10272011_q2 ON usage_stat_raw_10272011 USING btree (full_report_stat, report_id, username, ip_protocol, content_category_type, total_byte_count);

-- Name: ix_usage_stat_raw_10272011_usage_stat_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: 

CREATE INDEX ix_usage_stat_raw_10272011_usage_stat_id ON usage_stat_raw_10272011 USING btree (usage_stat_id);

-- PostgreSQL database dump complete


Question by:linuxrox

Expert Comment

ID: 37774424
What is the error you are getting?

Author Comment

ID: 37776718
I was using the program "Navicat for PostgreSQL" to import this file and apply it to the database, but it just sits there and loops forever and nothing happens.  I guess I need to know how to properly import information like this into a PostgreSQL database server.

How can I do that via the console?

Accepted Solution

johanntagle earned 500 total points
ID: 37780240
The simplest way is "psql dbname < dumpfile".
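For reference, a minimal console restore might look like the following sketch (the database name `usage_stats` and file name `dump.sql` are examples, not from the original post):

```shell
# Create the target database first, if it does not exist yet
createdb -U postgres usage_stats

# Feed the dump to psql; errors are reported with line numbers,
# which helps spot problems such as an unterminated CREATE TABLE
psql -U postgres usage_stats < dump.sql

# Or stop at the first error instead of continuing past it
psql -U postgres -v ON_ERROR_STOP=1 -f dump.sql usage_stats
```

Running psql with `-f` (rather than shell redirection) makes its error messages include the file name, and `ON_ERROR_STOP=1` aborts on the first failure, which makes a broken dump much easier to debug than a GUI import that silently loops.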


Author Comment

ID: 37781630
Ahh, I see.  I'll try that and get back.
