

PostgreSQL

PostgreSQL is an object-relational database management system with an emphasis on extensibility and standards compliance. As a database server, its primary function is to store data securely and to return it at the request of other software applications. PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant and transactional using multiversion concurrency control (MVCC), and handles complex SQL queries using many indexing methods that are not available in other databases. It is cross-platform and runs on many operating systems including Linux, FreeBSD, OS X, Solaris, and Microsoft Windows.


What is the location of the PostgreSQL binary and library directories in a Docker container running SUSE Linux 12 SP2?
We have a working PostgreSQL installation, and I have access to the database using pgAdmin and DBeaver.
But we don't know who performed this installation, and I need the locations of the PostgreSQL binary and library directories in order to run pg_ctl and psql.
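Since pgAdmin and DBeaver can already connect, the server itself can report where its files live; a sketch you can run from any SQL client (some of these settings may require superuser privileges to view):

```sql
-- Where the cluster's data files and config live:
SHOW data_directory;   -- may require superuser
SHOW config_file;

-- The server version tells you which versioned package path to look under:
SELECT version();
```

The binaries (pg_ctl, psql) usually sit in a bin/ directory near the reported paths — on SUSE-based images commonly under /usr/lib/postgresql or /usr/bin — and running `find / -name pg_ctl 2>/dev/null` from a shell inside the container will confirm the exact location.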

Hi All,

We are on PostgreSQL 9.6 and have two tables.
The first table looks like this:

   TB_ID    Col_id    Col_nm     Col_Desc
      1        1      fin_cd1    finance Code 1
      1        2      fin_cd2    finance Code 2
      1        3      fin_cd3    finance Code 3

The second table looks like this:

   TB_ID    Code_Value
      1     FA|BR|CM


I need to display the result as follows:

    TB_ID    fin_cd1    fin_cd2    fin_cd3
       1     FA         BR         CM

The first table is joined to the second on TB_ID, and the number of rows in the first table equals the number of pipe-delimited values in the second table.

Please help.
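One way, assuming the two tables are named col_defs and code_vals (placeholder names): join the Nth metadata row to the Nth pipe-delimited value with split_part, then pivot with conditional aggregation (the crosstab function from the tablefunc extension is an alternative):

```sql
-- Row N of the first table maps to the Nth pipe-delimited value:
SELECT v.tb_id,
       MAX(CASE WHEN c.col_nm = 'fin_cd1'
                THEN split_part(v.code_value, '|', c.col_id) END) AS fin_cd1,
       MAX(CASE WHEN c.col_nm = 'fin_cd2'
                THEN split_part(v.code_value, '|', c.col_id) END) AS fin_cd2,
       MAX(CASE WHEN c.col_nm = 'fin_cd3'
                THEN split_part(v.code_value, '|', c.col_id) END) AS fin_cd3
FROM code_vals v
JOIN col_defs c ON c.tb_id = v.tb_id
GROUP BY v.tb_id;
```

This stays correct for any number of values as long as col_id matches the position in the pipe-delimited string; only the list of CASE branches is fixed.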
Hi,

  We are on PostgreSQL 9.6.1.
I have a table with an ID attribute, start_dt and end_dt.
I need to find the first start_dt of each chain of rows, where a chain means the prior row's end_dt is the same as the current row's start_dt.

I have the example data and the desired output below.
Can someone kindly help?

WITH AA
AS
(SELECT ID,START_DT,END_DT
  FROM 
  (
  VALUES
     (10,'2018-01-01','2018-01-31')
    ,(10,'2018-02-04','2018-03-28')
    ,(10,'2018-03-28','2018-05-20')
    ,(10,'2018-05-20',NULL)
    ,(20,'2018-07-01','2018-08-10')
    ,(20,'2018-08-01','2018-09-10')
  ) tmp(ID,START_DT,END_DT)
)

SELECT ID,start_dt,end_Dt
 FROM AA



Desired result:
 
 id        start_dt               end_dt
 10     2018-01-01        2018-01-31
 10     2018-02-04           null
 20     2018-07-01         2018-09-10
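A gaps-and-islands sketch against the CTE above: start a new group whenever the current start_dt differs from the previous row's end_dt, then take the first start_dt per group. (Table/column names are from the post; note that the desired row for ID 20 merges *overlapping* ranges, so the comparison may need to be start_dt <= previous end_dt rather than strict equality.)

```sql
SELECT id,
       MIN(start_dt) AS start_dt,
       -- an open-ended chain (NULL end_dt) should surface as NULL:
       CASE WHEN bool_or(end_dt IS NULL) THEN NULL ELSE MAX(end_dt) END AS end_dt
FROM (
    SELECT id, start_dt, end_dt,
           -- running count of chain breaks = group number
           SUM(CASE WHEN start_dt = prev_end THEN 0 ELSE 1 END)
               OVER (PARTITION BY id ORDER BY start_dt) AS grp
    FROM (
        SELECT id, start_dt, end_dt,
               LAG(end_dt) OVER (PARTITION BY id ORDER BY start_dt) AS prev_end
        FROM aa
    ) x
) y
GROUP BY id, grp
ORDER BY id, grp;
```

The sample data stores the dates as text; casting to date in the CTE makes the MIN/MAX and ordering behave properly.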
I need to modify this query to honor minmk and maxmk when mt = 'P' and either minmk or maxmk exists.
Not sure how to do that...

Currently, the query returns 125.00 for the first record and 2893.75 for the second. It needs to return 140.00 and 2815.00.

Note: There are only 'P' (percentage) and 'F' (flat) markup types.

SELECT
    -- when the markup is a 'P' (percentage) multiply     
	CASE WHEN (SELECT c0.mt 
               FROM p.markups c0 
               WHERE c0.id = 'M07' AND 
                     c0.pid = 'M34' AND 
                     c0.pt = '107') = 'P' THEN
		TO_CHAR(u.price + (u.price * (SELECT c1.mk 
		                                FROM p.markups c1 
		                                WHERE c1.id = 'M07' AND 
		                                      c1.pid = 'M34' AND 
		                                      c1.pt = '107')) , 'FM999999999.00')
	ELSE
	-- when the markup is an 'F' (flat), so add it
		TO_CHAR(u.price + (SELECT c2.mk 
		                    FROM p.markups c2 
		                    WHERE c2.id = 'M07' AND 
		                          c2.pid = 'M34' AND 
		                          c2.pt = '107') , 'FM999999999.00')
		                          
	END as mkprice	
FROM u.unet u
WHERE u.id = 'M34' AND 
      u.pt = '107'


If I have this data in `p.markups`:
"id" , "pid", "pt" , "mt", "mk", "minmk", "maxmk", "exc"
"M07", "M34", "107", "P" , 0.25, 40     , 500    , 0


and this data in u.unet:
"id",  "pt" , "price"
"M34", "107",  100.00
"M34", "107", 2315.00

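One way, assuming minmk and maxmk are absolute floor/ceiling amounts for the computed percentage markup (an interpretation that reproduces 140.00 and 2815.00 for the sample data): clamp with GREATEST/LEAST. A sketch of the 'P' branch only; the flat 'F' branch stays as in the original query:

```sql
-- price 100.00:  100 * 0.25 = 25  -> raised to minmk 40  -> 140.00
-- price 2315.00: 2315 * 0.25 = 578.75 -> capped at maxmk 500 -> 2815.00
SELECT TO_CHAR(
         u.price + GREATEST(c.minmk, LEAST(c.maxmk, u.price * c.mk)),
         'FM999999999.00') AS mkprice
FROM u.unet u
JOIN p.markups c
  ON c.id = 'M07' AND c.pid = 'M34' AND c.pt = '107'
WHERE u.id = 'M34'
  AND u.pt = '107'
  AND c.mt = 'P';
```

If either bound can be NULL, COALESCE it to the raw markup first, e.g. LEAST(COALESCE(c.maxmk, u.price * c.mk), u.price * c.mk), so a missing bound simply doesn't constrain the value.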

I have been given a PostgreSQL database export (a .txt file).

It contains the CREATE TABLE statements, INSERT statements, etc. for the whole database.

Does anyone know of a tool that will import the file directly into an MSSQL database? Importing it into a local PostgreSQL instance and then into SQL Server is painfully slow, and I need to run this several times over the next few weeks.
I have a C# application and I am trying to write to a table in my PostgreSQL database
through a stored procedure (function).

I am at the point where I need to execute the stored procedure, having obtained
the values I need to write from a Windows Forms UI.

The problem is that when I hit the line that should execute the procedure,
I get the error:

System.Data.OleDb.OleDbException
  HResult=0x80040E21
  Message=No error message available, result code: DB_E_ERRORSOCCURRED(0x80040E21).
  Source=System.Data

With no error message available as displayed, how can I resolve this issue?

Attached is a Word document showing the error screenshot, my stored procedure and the stack trace.

I will be grateful for any help resolving this issue.

O.A. Oluwole
ERROR-0x80040E21---29-09-2018.docx
I have a C# application which needs to write into SQL Server and PostgreSQL databases.

I use the Dapper ORM.

I have been able to specify my connection string for SQL Server and it works.

Now I am trying to specify the equivalent for PostgreSQL and I am getting the error:

"OLE DB Provider not Specified in the Connection String"

I have tried the following 3 providers:

providerName="Npgsql2" />
providerName="Npgsql" />
providerName="SQLOLEDB" />

with the same error message.

What should my provider string be for a C# application connecting to a
PostgreSQL database using Dapper?

Please note that I have Dapper and Npgsql installed in my application via NuGet.

Attached please find the error screen from the debugger and my PostgreSQL connection string.

I will be grateful for any assistance.

O.A. Oluwole
OLE-DB-PROVIDER--24-09-2018.docx
I'm using a jump box for my developers to access a PostgreSQL database. Leadership has limited access to 2 users per jump box. That model is very difficult to scale when all the developers need is secure access (this is on the Azure cloud, by the way). I was thinking about opening up port 5433, as recommended by this article, to install pgAdmin. Does anyone have any pros or cons to this course of action?
I have a C# application that needs to connect to sqlserver and postgresql

I have been able to connect it to sqlserver and it works and the connection string is shown below

<add name="Tournaments" connectionString="Server=CHRISTINESPC;Database=Tournaments;Trusted_Connection=True;" providerName="System.Data.SqlClient"/>

I have tried a number of options for PostgreSQL without success.

Can anybody kindly help specify the equivalent PostgreSQL connection string?

Notes:

CHRISTINESPC is my server name as displayed in SQL Server Management Studio, though I am open to using an IP address.
Tournaments is my database name in both the SQL Server and PostgreSQL databases.

Thanks

Oluwole
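For reference, with the Npgsql ADO.NET provider registered, a hedged equivalent of the SQL Server entry above might look like the following. Npgsql has no Trusted_Connection mode, so a PostgreSQL role and password are assumed (the password value is a placeholder):

```xml
<add name="TournamentsPg"
     connectionString="Host=CHRISTINESPC;Port=5432;Database=Tournaments;Username=postgres;Password=your_password"
     providerName="Npgsql" />
```

Host/Port/Database/Username/Password are the standard Npgsql connection-string keywords; 5432 is PostgreSQL's default port.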
I'm trying to build a PostgreSQL/PostGIS database using the Entire country files dataset, but I'm getting a "missing data" error.

I'm wondering if the COPY command is affected by diacritics, or if I've not set the database up properly.
I created a new database with UTF8 encoding.

I built the table schema based on the given format (but using type text for NM_MODIFY_DATE rather than varchar; having these as dates didn't make a difference).
The PostgreSQL docs state that fields can be NULL by default, so:

create table my_table(
id SERIAL PRIMARY KEY,
test1 text,           -- can be NULL (the default)
test2 text NOT NULL   -- has to have a value
);



In theory, every column apart from id does not have to contain a value.

I used a large-text viewer to open the uncompressed countries.txt file and copied the top 5 rows into a test file.

Using PostgreSQL COPY, this test file imported correctly, so I know my schema is correct:
copy my_table(List of columns ) from '\\Path\\To\\TestFile.txt' WITH delimiter E'\t' csv HEADER;




When I tried to ingest the larger countries.txt (2.9 GB) file, I got the error "missing data" for column xyz at line 12345 (the last column with data in that row, NM_MODIFY_DATE).

Using the large-text viewer again, I copied that line with its preceding and following lines into my test file and tried the copy again, but got the same error.

I opened the test file in …

Hi Experts,

     The Docker container is not picking up recent changes to the source code, so I have to rebuild the Docker image every time to see the latest changes.
Please find below the contents of the Dockerfile

FROM ubuntu:16.04

MAINTAINER *****

RUN apt-get update -y
RUN apt-get install -y software-properties-common python-software-properties curl
RUN add-apt-repository -y ppa:fkrull/deadsnakes

RUN apt-get update -y && apt-get install -y curl
RUN apt-get update -y && apt-get install -y \
	git \
	python3.6 \
	python3.6-dev \
	nginx \
	sqlite3 \
	nodejs \
	build-essential \
	libmagickwand-dev \
	cron \
	nginx

RUN rm -f /usr/bin/python3
RUN ln -s /usr/bin/python3.6 /usr/bin/python3
RUN curl https://bootstrap.pypa.io/get-pip.py | python3

WORKDIR /home/trove
COPY . .

COPY build/docker/uwsgi_params .
COPY build/docker/uwsgi.ini .
RUN pip3 install --no-cache-dir uwsgi
RUN pip3 install --no-cache-dir -r requirements.txt

COPY build/docker/start /usr/bin/
COPY build/docker/crontab /etc/cron.d/harvest-cron
RUN chmod 0644 /etc/cron.d/harvest-cron
RUN touch /var/log/harvest.log

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
COPY build/docker/nginx-app.conf /etc/nginx/sites-available/default
COPY build/docker/start /usr/bin/
RUN mkdir /var/log/harvest/
RUN python3 manage.py collectstatic --noinput

WORKDIR /home/trove/
RUN chmod 755 /home/trove
RUN chown -R www-data:www-data /home/trove

EXPOSE 80
CMD ["start"]



please find below the contents of the crontab

SHELL=/bin/bash
* * * * * root ( source /tmp/environment.sh && /usr/bin/python3 /home/trove/run.py $(cat /tmp/method) ) >> /dev/null 2>/var/log/harvest/ts_errors.log

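One likely cause: `COPY . .` copies the source into the image at build time, so later edits on the host never reach a running container. For development, a common pattern is to bind-mount the live working tree over the baked-in copy (a sketch; the image name `trove-image` is an assumption):

```shell
# Run the image with the current directory mounted over /home/trove;
# edits on the host are then visible inside the container immediately.
docker run -v "$(pwd)":/home/trove -p 80:80 trove-image
```

The processes inside (uwsgi, nginx, cron) may still need a reload or restart to pick up the changed files.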

I am planning to design a Java application accessing a PostgreSQL database on a Windows server. This Java application is expected to be used by 50-100 users simultaneously.

1. What are the prerequisite system requirements for setting up the PostgreSQL database and Java on the Windows server?

2. When implementing in production environments, is any licensing needed for the PostgreSQL database?
Hi Experts,

I want to restore a Postgres database.

I had taken a backup with pg_dump. The command I used to take the backup is:

pg_dump -U postgres -h 127.0.0.1 -p 5432 ts > ts.tar
password:

It created the file ts.tar.

The steps I tried to restore it are:

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/database_backup# pg_restore --host localhost --port 5432 --username "postgres" --dbname "ts" --role "ts" --no-password --verbose "./ts.tar"
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/database_backup# pg_restore --host localhost --port 5432 --username "postgres" --dbname "ts" --role "ts" --no-password < "./ts.tar"
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/database_backup# pg_restore --host 127.0.0.1 --port 5432 --username "postgres" --dbname "ts" --role "ts" --no-password < "./ts.tar"
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/database_backup# pg_restore --host 127.0.0.1 --port 5432 --username "postgres" --dbname "ts" --role "ts"  < "./ts.tar"
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/database_backup# pg_restore --host 127.0.0.1 --port 5432 --username "ts" --dbname "ts" 

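The error message is literal: `pg_dump dbname > file` without `-F c` or `-F t` produces a plain-SQL script, and pg_restore only reads custom/tar archive formats. A plain dump is replayed with psql instead (a sketch):

```shell
# Restore a plain-text dump (the .tar extension here is only a file name):
psql -U postgres -h 127.0.0.1 -p 5432 -d ts -f ts.tar

# Alternatively, re-dump in custom format so pg_restore works next time:
# pg_dump -U postgres -h 127.0.0.1 -p 5432 -F c -f ts.dump ts
```

The target database must already exist (or create it first with createdb) since a plain dump made without --create does not create it.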

I have a set of stored procedures I need to create in PostgreSQL.

My database name is TOURNAMENTS.

Where within pgAdmin do I create these stored procedures?

I have attached the pgAdmin screen.
Stored-Procedure-Creation-18-08-2018.PNG
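In pgAdmin, open the Query Tool on the TOURNAMENTS database (Tools > Query Tool) and run the CREATE statements there; the results then appear in the tree under Schemas > public > Functions. Note that before PostgreSQL 11 (which adds CREATE PROCEDURE), "stored procedures" are written as functions. A minimal, purely hypothetical example:

```sql
-- Hypothetical function; the table and names are placeholders:
CREATE OR REPLACE FUNCTION add_score(p_player text, p_points integer)
RETURNS void AS $$
BEGIN
    INSERT INTO scores(player, points) VALUES (p_player, p_points);
END;
$$ LANGUAGE plpgsql;

-- Called with:
-- SELECT add_score('Alice', 10);
```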
Hi,
   We are on Postgres 9.6
I have a table Customer that has these fields:

ID          Rel_Id
10            12, 23, 34

Then there is this Relation table, that has this structure

Rel_id           Rel_name
12                  Mike
23                  Dave
34                  Jim

I want to show the output as:
ID           Rel_ID                       Rel_Name
10       12,23,34                      Mike, Dave, Jim

Can someone please help?
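One way, assuming Rel_Id is stored as comma-separated text: split it into an array, join each element to the Relation table, and re-aggregate the names in the original order (a sketch; table names are taken from the post):

```sql
SELECT c.id,
       c.rel_id,
       string_agg(r.rel_name, ', ' ORDER BY t.ord) AS rel_name
FROM customer c
CROSS JOIN LATERAL unnest(string_to_array(replace(c.rel_id, ' ', ''), ','))
     WITH ORDINALITY AS t(rid, ord)
JOIN relation r ON r.rel_id::text = t.rid
GROUP BY c.id, c.rel_id;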
I have a website where the number of visitors is increasing, and it is facing poor response times. I am not a server specialist, so I want to expose my issue here to receive some feedback. The website has the following characteristics:

* Developed with ASP.NET Web Forms (I will be implementing a new version with ASP.NET MVC soon).
* It uses a PostgreSQL 9.1 database.
* It is 100% dynamic, which means that every page is formed at run-time getting information from the DB.
* Hosted on a cloud server with 2 cores and 4 GB of RAM.
* Windows Server 2012 on drive C:
* PostgreSQL DB on another "hard disk" as drive D:
* The average number of daily visitors is 2,500

Watching the Task Manager, I notice there are many PostgreSQL processes which push CPU usage to 100%. Another process shown is the "IIS Worker Process", but I have not seen it become as critical as the PostgreSQL processes. So it seems that the main issue has to do with database access. When CPU usage is high, the website response is extremely slow and it is also very difficult to establish a remote desktop connection.

I have thought about upgrading the cloud server to, for example, 4 cores and 8 GB of RAM, but firstly I want to get feedback from experts here so I can take the best approach. I am not a server expert and, for this reason, I do not want to take steps blindly.

Respectfully,
Jorge Maldonado
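Before resizing the server, it may be worth identifying which statements are burning the CPU. On 9.1 the activity view still uses the old column names (procpid and current_query rather than the later pid and query); a sketch to run as a superuser:

```sql
-- Longest-running active statements on PostgreSQL 9.1:
SELECT procpid,
       now() - query_start AS runtime,
       current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY runtime DESC;
```

Setting log_min_duration_statement in postgresql.conf to log slow queries over time is a complementary approach; missing indexes on a fully dynamic site are a common culprit.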
Is it possible to create a master/slave setup between PostgreSQL 9.5 (as master) and 10 (as slave) on Ubuntu 16.04 and 18.04?
I am running master/slave replication between two 9.5 servers, but I can't start Postgres 10 as a slave.
The error message is about connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432",
and it's because the /var/lib/postgresql/10/main folder is on version 9.5 or .. .
Hi
I'm trying to import some data into my_table using something like this:

create table my_table(
id SERIAL PRIMARY KEY,
title text,
my_date date,
-- plus other columns of types text, date and integer  
another_date date,
etc
);
COPY my_table (title,my_date,....)
FROM 'path/to/input.csv' WITH DELIMITER '|' CSV HEADER;   



the COPY is failing with

ERROR: date/time out of range: "10/07/2018"
SQL state: 2208
Hint: Perhaps you need a different "datestyle" setting.
Context: COPY my_table line 2 , column another_date:   "10/07/2018"


Is there a way of converting the date to "2018-07-10" on the way in? I can't guarantee every date is in this format, but from what I've seen they are.
If not, can I set the dates to accept a broader format, import the data, and then change it back?

Opening the file in OpenOffice Calc and setting the format of the column(s) is, first, a bad idea since it could introduce errors, and second, some of the cells don't convert (I tried a copy).


BTW:
I'm using PostgreSQL 8.4
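Yes — the DateStyle setting controls how COPY parses ambiguous dates, and it can be changed for just the session before the COPY (this assumes "10/07/2018" means 10 July; use 'ISO, MDY' if it is month-first):

```sql
SET datestyle TO 'ISO, DMY';  -- interpret 10/07/2018 as 10 July 2018

COPY my_table (title, my_date /* , ... */)
FROM 'path/to/input.csv' WITH DELIMITER '|' CSV HEADER;
```

If the file genuinely mixes formats, the safer route is to COPY into a staging table whose date columns are text, then convert with to_date(col, 'DD/MM/YYYY') (or per-row logic) into the real table.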
Hi,
Any advice on the following issue when accessing PostgreSQL pgAdmin?

Hi
I need to run the following update query on a PostgreSQL/PostGIS database:

UPDATE geo_table
SET geom = ST_ConvexHull(
     ST_GeomFromText('MULTIPOINT(co_ord_string)', 4326)
)
WHERE
id = 1234;



co_ord_string is a column name. If I manually copy the row's value into the query above, it runs; otherwise I get:

parse error – invalid geometry HINT: "MULTIPOINT(" <-- parse error at position 11 within geometry

I'm pretty sure this is because the single quotes around 'MULTIPOINT(co_ord_string)' make co_ord_string a literal string.

The value of co_ord_string is a series of long/lat points separated by commas. These don't form perfect polygons as they may contain interior points, but using ST_ConvexHull a polygon is created.

Could someone help with the syntax?
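Right — inside the quoted literal, co_ord_string is just text. Building the WKT by concatenating the column's value into the string avoids that (a sketch):

```sql
-- Concatenate the column value into the WKT instead of quoting its name:
UPDATE geo_table
SET geom = ST_ConvexHull(
        ST_GeomFromText('MULTIPOINT(' || co_ord_string || ')', 4326)
    )
WHERE id = 1234;
```

This assumes every co_ord_string value is well-formed point text; a malformed row will make the whole UPDATE fail.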
Hi,

  We are on a Postgres 9.6 Linux database.
We have a requirement to create a function that takes cc_case_ID as a parameter.
The requirement is to delete all the records from all the tables in a schema where cc_case_ID is equal to the parameter value passed.

Can someone kindly help with the solution?
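A PL/pgSQL sketch: look up every table in the schema that has a cc_case_id column via information_schema, then delete dynamically. The schema name 'myschema' and the integer parameter type are assumptions:

```sql
CREATE OR REPLACE FUNCTION delete_case(p_case_id integer)
RETURNS void AS $$
DECLARE
    t record;
BEGIN
    FOR t IN
        SELECT c.table_schema, c.table_name
        FROM information_schema.columns c
        JOIN information_schema.tables tb
          ON tb.table_schema = c.table_schema AND tb.table_name = c.table_name
        WHERE c.table_schema = 'myschema'
          AND c.column_name  = 'cc_case_id'
          AND tb.table_type  = 'BASE TABLE'   -- skip views
    LOOP
        EXECUTE format('DELETE FROM %I.%I WHERE cc_case_id = $1',
                       t.table_schema, t.table_name)
        USING p_case_id;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

If the tables have foreign keys on cc_case_id, deletion order matters; deleting child tables first (or using ON DELETE CASCADE constraints) avoids constraint violations.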
Hi

I have 2 PostgreSQL/PostGIS databases. I need to update a table in one DB with data from a table in the second.

geo_table in DB1:

id,country_code,country_name,long,lat,geom  # point data



boundary.table in DB2

id,country_code,country_name,geom  # polygons




If both tables were in the same database, I could run an update query something like (UNTESTED):

UPDATE geo_table g
SET country_code = b.country_code,
    country_name = b.country_name
FROM boundary b
WHERE ST_Intersects(g.geom, b.geom)




How can I update the columns country_code and country_name in geo_table with the corresponding values in the boundary table where the two geometries intersect?

If necessary I could copy the boundary table into DB1, but this would mean maintaining duplicate data.
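With the postgres_fdw extension (PostgreSQL 9.3+), DB2's boundary table can be exposed inside DB1 as a foreign table, so no duplicate data has to be maintained. A hedged sketch; the server, database, user and password values are placeholders, and PostGIS must be installed in DB1 so the geometry type exists on both sides:

```sql
-- In DB1: make DB2's boundary table visible as a foreign table
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER db2_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'DB2');

CREATE USER MAPPING FOR CURRENT_USER SERVER db2_srv
    OPTIONS (user 'postgres', password 'your_password');

IMPORT FOREIGN SCHEMA public LIMIT TO (boundary)
    FROM SERVER db2_srv INTO public;

-- The spatial update then runs locally in DB1:
UPDATE geo_table g
SET country_code = b.country_code,
    country_name = b.country_name
FROM boundary b
WHERE ST_Intersects(g.geom, b.geom);
```

dblink is an older alternative, but the foreign-table approach lets the update read as if both tables were local.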
I have a Windows server where CPU utilization is constantly at 50%, mostly made up of the PostgreSQL server service. Even if I reboot the server, as soon as it is up the CPU usage is at 50%+.

It does not matter whether people are logged in and using the application that uses the PostgreSQL server. Also, that application makes very light use of it.

Here is a screenshot, and another below from a CPU monitor external to the server. The CPU monitor covers the last 3 hours, which was night time with no one on the server or using the application.

I have the PostgreSQL query below in an API function. I am passing the $id and $ic variables. The $ic variable can be an array of values or a single value.
    WITH cte as (
        SELECT partnerslistori as a FROM sales.members WHERE id = '".$id."'
    )
    SELECT 
        u.id as actualid,
    	(SELECT m.company || ' (' || m.id ||')' FROM support.members m WHERE m.id = u.id) AS actualcompany,
    	u.itemname,
    	DATE_PART('day', CURRENT_TIMESTAMP - u.datein::timestamp) AS daysinstock, 
    	TRIM(u.grade)::character varying as condition, 
    	u.vstockno AS stock, 
    	u.hol AS ic, 
    	CASE WHEN u.rprice > 0 THEN 
    		u.rprice 
    	ELSE 
    		NULL 
    	END AS price, 
    	u.quantity, 
    	u.location
    FROM public.net u 
    WHERE u.holl in (".$ic.")
      AND visibledate <= now() 
      AND u.id = any(regexp_split_to_array('".$id.",'||(select a from cte), ',')); 



Now, I need to aggregate the results for a single id: concatenate the vstockno and location values and sum the quantity values into a single record.

For instance, if I return CSV using the SQL above, I get this:
"G06","WAP (G06)","CALIPER",370,"C","CAE17366","536-01036",1.00,1,"CY31B05"
"G06","WAP (G06)","CALIPER",770,"C","CAD16821","536-01036",1.00,1,"CALIPERCO"
"G21","WNB (G21)","CALIPER",257,"C","NBI17724","536-01036",1.00,1,"R3415"
"G21","WNB (G21)","CALIPER",128,"C","NBA18095","536-01036",1.00,1,"R2804"
"K17","GAAAP (K17)","CALIPER",270,"C","7I1581","536-01036",,1,"B5-A04"
"K17","GAAAP (K17)","CALIPER",267,"C","7I1594","536-01036",,1,"B4-B04"
"K17","GAAAP (K17)","CALIPER",78,"C","8C1210","536-01036",,1,"[UNKNOWN]"
"N40","WAC (N40)","CALIPER",147,"C","MNA18002","536-01036",1.00,1,"[UNKNOWN]"
"N49","YUPIP (N49)","CALIPER",77,"K","DS180429","536-01036",6.00,1,"UPULL-E"
"V02","CBAP (V02)","CALIPER",1408,"A","CKA1301","536-01036",40.00,1,"[UNKNOWN]"
"V02","CBAP (V02)","CALIPER",350,"A","CPA1257","536-01036",12.00,1,"BIN.360.A"
"M16","BAP (M16)","CALIPER",196,"A","8A067","536-01036",,1,"LOT-23"
"M23","BA (M23)","CALIPER",115,"A","8A065","536-01036",15.00,1,"36AU"
"M21","BUPI (M21)","CALIPER",115,"A","8A065","536-01036",15.00,1,"36AU"
"M22","BSS(M22)","CALIPER",115,"A","8A065","536-01036",15.00,1,"36AU"


I want it to return this instead; the stock and location columns are concatenated and the quantity is summed when there is more than one result for a particular id:
"G06","WAP (G06)","CALIPER",370,"C","CAE17366, CAD16821","536-01036",1.00,2,"CY31B05, CALIPERCO"
"G21","WNB (G21)","CALIPER",257,"C","NBI17724, NBA18095","536-01036",1.00,2,"R3415, R2804"
"K17","GAAAP (K17)","CALIPER",270,"C","7I1581, 7I1594, 8C1210","536-01036",,3,"B5-A04, B4-B04, [UNKNOWN]"
"N40","WAC (N40)","CALIPER",147,"C","MNA18002","536-01036",1.00,1,"[UNKNOWN]"
"N49","YUPIP (N49)","CALIPER",77,"K","DS180429","536-01036",6.00,1,"UPULL-E"
"V02","CBAP (V02)","CALIPER",1408,"A","CKA1301, CPA1257","536-01036",40.00,2,"[UNKNOWN], BIN.360.A"
"M16","BAP (M16)","CALIPER",196,"A","8A067","536-01036",,1,"LOT-23"
"M23","BA (M23)","CALIPER",115,"A","8A065","536-01036",15.00,1,"36AU"
"M21","BUPI (M21)","CALIPER",115,"A","8A065","536-01036",15.00,1,"36AU"
"M22","BSS(M22)","CALIPER",115,"A","8A065","536-01036",15.00,1,"36AU"



Can this be done in a single SQL statement?
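Yes — grouping by id (plus the columns that are constant per id) with string_agg and SUM produces the collapsed rows. A sketch of just the aggregation; the original WHERE filters and subqueries carry over unchanged, and per-row columns such as daysinstock would be wrapped in MIN or MAX as appropriate:

```sql
-- Collapse multiple stock rows per id into one aggregated row:
SELECT u.id                           AS actualid,
       u.itemname,
       string_agg(u.vstockno, ', ')   AS stock,
       u.hol                          AS ic,
       MAX(u.rprice)                  AS price,
       SUM(u.quantity)                AS quantity,
       string_agg(u.location, ', ')   AS location
FROM public.net u
WHERE u.holl IN ('536-01036')         -- plus the original id/date filters
GROUP BY u.id, u.itemname, u.hol;
```

An ORDER BY inside string_agg (e.g. string_agg(u.vstockno, ', ' ORDER BY u.datein)) keeps the concatenated stock and location lists in matching, deterministic order.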
My shop currently uses both Oracle 11g single-instance and Oracle 12c RAC databases. A contingent of managers, co-workers and peers are very interested in pursuing a PostgreSQL solution. Can you point out some of the pros/cons of both database vendors and their solutions? By the way, my application has requirements for 24/7 operation, high availability, load balancing, and the ability to perform point-in-time recovery (PITR) when and if required. The database also needs to be hosted in a Linux environment.
