PostgreSQL

PostgreSQL is an object-relational database management system with an emphasis on extensibility and standards compliance. As a database server, its primary function is to store data securely and to return it on request to other software applications. PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant and transactional using multiversion concurrency control (MVCC), and handles complex SQL queries using indexing methods that are not available in many other databases. It is cross-platform and runs on many operating systems, including Linux, FreeBSD, OS X, Solaris, and Microsoft Windows.


Hello there!

I made a SQL Fiddle thingy here:
https://www.db-fiddle.com/f/5UxJHyAaLTuTQEzCxnQf7W/1

But here is the DDL:
CREATE TABLE Table1 (
  	 MyID int
  	,MyDate date
);

CREATE TABLE Table2 (
  	 MyID int
  	,MyDate date
  	,Item varchar(20)
);

INSERT INTO Table1 (MyID, MyDate) VALUES 
(1, '2020-01-01'),
(2, '2020-01-02'),
(3, '2020-01-03'),
(4, '2020-01-03');

INSERT INTO Table2 (MyID, MyDate, Item) VALUES 
(1, '2020-01-01', 'Widget 0'),
(1, '2020-01-01', 'Widget 1'),
(1, '2020-01-01', 'Widget 1'),
(2, '2020-01-02', 'Widget 1'),
(3, '2020-01-03', 'Widget 0'),
(3, '2020-01-03', 'Widget 10'),
(3, '2020-01-03', 'Widget 2'),
(3, '2020-01-03', 'Widget 7'),
(4, '2020-01-03', ''),
(4, '2020-01-03', 'Widget 87');



What I'm trying to do is this:
Joining Table1 and Table2 on MyID and MyDate, generate SQL that returns MyID, MyDate, and the distinct Item values from Table2 for each MyID and MyDate as columns (Item1 - Item#).
The count of distinct Item values for each MyID and MyDate in Table2 is unknown, so the number of Item# columns should be generated dynamically based on the maximum count of distinct values for any MyID and MyDate.

For example - using the sample data the row for MyID=3 and MyDate=2020-01-03 would look like this:
 MyID | MyDate     | Item1    | Item2    | Item3    | Item4
------+------------+----------+----------+----------+-----------
    3 | 2020-01-03 | Widget 0 | Widget 2 | Widget 7 | Widget 10

I did this manually using LISTAGG(Item, ', ') AS All_Items and then breaking the values back out with multiple SPLIT_PART(All_Items, ',', #) calls, but I'm thinking there's a much more graceful way to do this (e.g., PIVOT) and I'm too dense to work it out.  :P

Thanks in advance for any help!
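A hedged sketch of the aggregate-then-split route in stock PostgreSQL, using the tables from the DDL above. The four ItemN columns are hard-coded because a SQL result set cannot grow columns at run time; a truly dynamic column list needs dynamic SQL (EXECUTE inside a PL/pgSQL function), and tablefunc's crosstab() likewise wants a fixed output column list:

-- string_agg(DISTINCT ... ORDER BY ...) collapses each group's distinct items
-- into one string; split_part() then breaks that string back out per column.
SELECT t1.MyID,
       t1.MyDate,
       split_part(x.all_items, ',', 1) AS Item1,
       split_part(x.all_items, ',', 2) AS Item2,
       split_part(x.all_items, ',', 3) AS Item3,
       split_part(x.all_items, ',', 4) AS Item4
FROM Table1 t1
JOIN (SELECT MyID,
             MyDate,
             string_agg(DISTINCT Item, ',' ORDER BY Item) AS all_items
      FROM Table2
      GROUP BY MyID, MyDate) x
  ON x.MyID = t1.MyID AND x.MyDate = t1.MyDate;

split_part() returns an empty string for missing positions, so groups with fewer distinct items come back padded with blanks rather than erroring.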
Hi, I am getting this error when connecting in Python:
/home/python/anaconda3/bin/python3.7 /home/python/Documents/intro-to-database/app.py

Traceback (most recent call last):
  File "/home/python/Documents/intro-to-database/app.py", line 6, in <module>
    my_user.save_to_db()
  File "/home/python/Documents/intro-to-database/user.py", line 18, in save_to_db
    (self.email, self.first_name, self.last_name))
  File "/home/python/Documents/intro-to-database/database.py", line 22, in __exit__
    self.connection_pool.putconn(self.connection)
AttributeError: 'connectionFromPool' object has no attribute 'connection_pool'

from psycopg2 import pool
 
connection_pool = pool.SimpleConnectionPool(1,
                                            1,
                                            dbname="learning",
                                            user="postgres",
                                            password="postgres",
                                            host="localhost",
                                            port="5432")
 
 
class connectionFromPool:
    def __init__(self):
        self.connection = None
 
    def __enter__(self):
        self.connection = connection_pool.getconn()
        return self.connection
 
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.connection.commit()
        # Fix for the AttributeError in the traceback above: the pool is the
        # module-level 'connection_pool'; this class never defines an attribute
        # of that name, so the call must not go through self.
        connection_pool.putconn(self.connection)

===
from database import connectionFromPool
 
 
class User:
    def


Hi, I am trying to bulk-insert data into a PostgreSQL 11 table with a C# .NET Core console app.

I have a StringBuilder that prepares the INSERT statement for 20,000 records.
It then tries to run the following code:

using (NpgsqlConnection con = new NpgsqlConnection(DbConnectionString))
{
	con.Open();
	NpgsqlCommand cmd;
	NpgsqlTransaction transactSql = con.BeginTransaction();

	cmd = new NpgsqlCommand(sb.ToString(), con, transactSql)
	{
		CommandType = CommandType.Text
	};

	cmd.ExecuteNonQuery();
	// Note: without an explicit Commit(), the transaction is rolled back when
	// the connection is disposed, even if the command succeeds.
	transactSql.Commit();
}




However I get the following error

08P01: invalid message format

   at Npgsql.NpgsqlConnector.<>c__DisplayClass160_0.<<DoReadMessage>g__ReadMessageLong|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at Npgsql.NpgsqlConnector.<>c__DisplayClass160_0.<<DoReadMessage>g__ReadMessageLong|0>d.MoveNext() in C:\projects\npgsql\src\Npgsql\NpgsqlConnector.cs:line 973
--- End of stack trace from previous location where exception was thrown ---
   at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming) in C:\projects\npgsql\src\Npgsql\NpgsqlDataReader.cs:line 416
   at Npgsql.NpgsqlDataReader.NextResult() in C:\projects\npgsql\src\Npgsql\NpgsqlDataReader.cs:line 298
   at Npgsql.NpgsqlCommand.ExecuteReaderAsync(CommandBehavior behavior, Boolean async, CancellationToken cancellationToken) in C:\projects\npgsql\src\Npgsql\NpgsqlCommand.cs:line 1178
   at 

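Not a diagnosis of the 08P01 itself (which generally indicates a malformed or oversized protocol message), but for 20,000-row loads the usual suggestion is PostgreSQL's COPY path rather than one giant INSERT string; Npgsql exposes it on the connection as BeginTextImport/BeginBinaryImport. The server-side SQL equivalent, with a hypothetical table and file path, is a single statement:

-- COPY streams rows instead of parsing one huge SQL statement; the file must
-- be readable by the server process (table, columns, and path are hypothetical).
COPY my_table (col_a, col_b) FROM '/tmp/bulk_rows.csv' WITH (FORMAT csv);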

Hi,

I have heard that PostgreSQL has problems with failover to a remote site in terms of HA and DR; is that right?

More broadly, how do we handle HA, DR, backup/restore, database upgrades, and database-level and SQL-level debugging in PostgreSQL?
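Failover is not automatic out of the box, but streaming replication plus a promote is the stock building block; external tools are usually layered on top for automatic failover, and backups are commonly pg_dump/pg_restore or pg_basebackup. A minimal sketch with hypothetical host and role names:

# Run on the standby host to clone the primary and configure it as a hot
# standby; the -R flag writes the recovery settings automatically.
pg_basebackup -h primary.example.com -U replicator -D /var/lib/postgresql/data -R
# Manual failover later is a single command on the standby:
pg_ctl promote -D /var/lib/postgresql/data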
How can we find recently changed partition tables and other tables, so that we can ANALYZE only those tables, in PostgreSQL 9.4/Greenplum 6.0?

Example: if we want to ANALYZE tables that were loaded/updated 3 days ago, etc.
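A hedged starting point on the PostgreSQL 9.4 side: pg_stat_user_tables keeps per-table counters, and 9.4 added n_mod_since_analyze, the number of rows changed since the table was last analyzed. Greenplum's statistics views can differ, so verify there:

-- Tables with changes since their last ANALYZE are the candidates:
SELECT schemaname, relname, n_mod_since_analyze, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE n_mod_since_analyze > 0
ORDER BY n_mod_since_analyze DESC;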
I have a time-series table with the following columns:

  id BIGINT  -- PRIMARY KEY,
  imei_number BIGINT NOT NULL,
  gps_datetime datetime  NOT NULL,
  event_id INT,
  speed INT,
  latitude decimal(10,6),
  longitude decimal(10,6),
  raw_data varchar NULL



There are currently 4 billion rows in this one table, growing by 1 billion a month.
I need to do a couple of things

1) I would like to partition/split the data onto different drives, perhaps the most recent 3 months on the current drive and anything older than 3 months on a different drive.
2) I would like to compress the older data (older than 3 months, for example).

On the compression point, all the queries I run are virtually the same. For example, these are typical:

WHERE imei_number = 1234
AND gps_datetime BETWEEN 'date1' AND 'date2'
AND event_id IN (1, 5, 14) -- this predicate is sometimes omitted



My clustered index is on the following (in this order):
imei_number
gps_datetime
id



I also have a non clustered index on
imei_number
event_id
id



and a non-clustered index (unique primary key) on
Id




I don't have much experience with compression, but I've been trying out Postgres with TimescaleDB, which has a column-store compression option where you define a segment-by key. I used imei_number as the SEGMENT BY key (so everything is laid out on disk by imei_number first), and the compression is outstanding (180x), with query performance the same as the normal row store.

On my SQL Server I've run some stats on page/row compression, and the savings are not groundbreaking (about 20-30%, based on a stored procedure that estimates them).
Therefore I'd like to try columnstore compression, but I'm not sure whether it has something similar to SEGMENT BY.
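For reference, the TimescaleDB side of what is described above looks roughly like this in 2.x syntax (the hypertable name is hypothetical); I can't speak to an exact SQL Server columnstore equivalent of SEGMENT BY:

-- Group compressed data on disk by device, ordered by time within each group,
-- and compress everything older than three months:
ALTER TABLE gps_positions SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'imei_number',
    timescaledb.compress_orderby   = 'gps_datetime DESC'
);
SELECT add_compression_policy('gps_positions', INTERVAL '3 months');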
Hi Fellows,

I wonder whether we can use PG-Strom 2.0 as an extension on Postgres-XL data nodes.

Thank you
Hello Experts

We use an application with PostgreSQL 8.4 on Debian 6.0.5, but the application is much too slow.
We have tuned the database, but it is still slow.

postgresql.conf :

data_directory = '/var/lib/postgresql/8.4/main'                              
hba_file = '/etc/postgresql/8.4/main/pg_hba.conf'                              
ident_file = '/etc/postgresql/8.4/main/pg_ident.conf'                                                      
external_pid_file = '/var/run/postgresql/8.4-main.pid'                                                
listen_addresses = '*'                              
port = 5432      
max_connections = 80                    
unix_socket_directory = '/var/run/postgresql'                                                      
ssl = true            
shared_buffers = 7680MB
work_mem = 192MB
maintenance_work_mem = 1GB
wal_buffers = 8MB
checkpoint_segments = 16
checkpoint_completion_target = 0.9
effective_cache_size = 20GB
default_statistics_target = 50      
constraint_exclusion = on
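# NOTE: the two debug4 settings below generate an enormous amount of log and
# client message traffic for every statement and are themselves a likely cause
# of slowness; 'notice' or 'warning' are the usual production values.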
client_min_messages = debug4
log_min_messages = debug4
log_error_verbosity = verbose
log_line_prefix = '%t '
datestyle = 'iso, dmy'
lc_messages = 'fr_FR.UTF-8'
lc_monetary = 'fr_FR.UTF-8'
lc_numeric = 'fr_FR.UTF-8'
lc_time = 'fr_FR.UTF-8'
default_text_search_config = 'pg_catalog.french'

host info and tuning:

32G RAM
20 Vcpu
kernel.shmmax=17179869184
kernel.shmall=4194304

Regards
Hi, I am not able to start pgAdmin 4 in my browser. I did install it:
pgAdmin 4 - Application Initialisation
======================================

Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.29) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Hi,

When a PostgreSQL (version 9.4) function's output is UNIONed with another SELECT query on a Greenplum 6.0 database, we get an error. Please find below a simple reproducible example with output.
Could you please advise a workaround to overcome this error? It seems to be a product limitation, but we are looking for any workaround that achieves the same output.

--Test  UDF

CREATE OR REPLACE FUNCTION public.get_country()
 RETURNS TABLE (
  country_id integer,
  country character varying(50)
  )
AS $$
BEGIN
  DROP TABLE IF EXISTS public.country;
  CREATE TABLE public.country (
    country_id integer,
    country character varying(50)
  );

  INSERT INTO public.country (country_id, country)
  SELECT 111, 'INDIA'
  UNION ALL
  SELECT 222, 'CANADA'
  UNION ALL
  SELECT 333, 'USA';

  RETURN QUERY
  SELECT c.country_id, c.country
  FROM public.country c;
END; $$
LANGUAGE plpgsql;


--Execute UDF
select * from public.get_country();

country_id|country|
----------|-------|
       222|CANADA |
       111|INDIA  |
       333|USA    |

--Create another table to test union
create table public.country_d2 as select * from public.country;

insert into public.country_d2
(  country_id,
  country)
 select 444,'JAPAN';

select * from public.country_d2

country_id|country|
----------|-------|
       222|CANADA |
       444|JAPAN  |
       333|USA    |
       111|INDIA  |


       
--Union test

with…
Team,
I'm curious what the above title is as far as a command to run via postgres.exe

Any help here would be great!
Thank you!
Aaron.
Hi,

How do we suppress milliseconds in a "timestamp without time zone" (or timestamp) field in a Postgres/Greenplum database?
For instance, if you have a function that inserts data into a table field of type timestamp, we want to make sure we only ever store yyyy-mm-dd hh:mi:ss, with no milliseconds, across the board. Could someone please advise?
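Two stock options, sketched below: declare the column with precision 0 so the server discards fractional seconds on every write (note that timestamp(0) rounds to the nearest second rather than truncating), or truncate explicitly with date_trunc():

-- Option 1: a typmod on the column; every value stored is rounded to seconds.
CREATE TABLE ts_demo (created_at timestamp(0) without time zone);
INSERT INTO ts_demo VALUES (now());

-- Option 2: explicit truncation at insert time inside the function.
SELECT date_trunc('second', now()::timestamp);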
Postgres database corrupted

We have a corrupted Postgres cluster running with DRBD; we have repaired the XFS file system.

But VACUUM is failing:
 sd_produccion=# vacuum FULL ANALYZE VERBOSE xxxx.transaccion ;
INFO:  vacuuming «xxxx.transaccion»
ERROR:  invalid page in block 1416180 of relation base/16429/118805272

We cannot access some tables, and REINDEX also errors.
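A last-resort sketch, assuming a file-level backup of the data directory has been taken first: the superuser-only zero_damaged_pages setting lets VACUUM overwrite unreadable pages with zeroes, which permanently discards the rows on them:

SET zero_damaged_pages = on;   -- superuser only; rows on bad pages are lost
VACUUM FULL VERBOSE xxxx.transaccion;
SET zero_damaged_pages = off;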
How can we set a default trim of trailing spaces on string/character data in PostgreSQL?

If you have the string 'abc   ', it should become 'abc'. We don't want to apply trim functions manually in every query. Please advise.
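There is no server-wide auto-trim setting; one hedged workaround is a BEFORE trigger per table, sketched here with hypothetical table and column names:

-- Trim trailing spaces on the way in, so queries never have to.
CREATE OR REPLACE FUNCTION trim_trailing() RETURNS trigger AS $$
BEGIN
    NEW.name := rtrim(NEW.name);   -- repeat for each text column to trim
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trim_before_write
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE trim_trailing();

Note that char(n) columns are space-padded by definition, so this only helps for varchar/text.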
How do I move data from SonarQube (7.4) with Postgres on a standalone server to another Ubuntu server running SonarQube with Postgres on Docker?
What I did:
I installed SonarQube 7.7-community and Postgres 11.3 on Docker,
then took a dump of the old SonarQube database and restored it on the new Postgres.
It does not work.
Hi,
I have the following problem to be solved.
There are two tables
1) applications
2) docs
each application has a set of docs.

Mark all applications with status = 'skipped' to status = NULL, and in the corresponding docs associated with each such application set the attr column to NULL.

I initially did it like this:
1) Fetch all the applications with status = 'skipped', then for each application:

update applications set status = null
   update docs set attr = null

looping over each application id. So, for example, if there were 100 applications I would be making 200 queries.
Normally I expect 300-400 applications, but it could go up to 10K.

I am not sure, but on the surface this does seem like a bad strategy if I have to make 10K x 2 DB calls.


2)
Since we have the list of all application ids:
update docs set attr = null where application_id in (...);
   update applications set status = null where id in (...);

So in this case, no matter how many applications there are, it needs only 2 queries.
Adding the initial query to fetch all the applications, that is a total of exactly 3 queries no matter how many applications there are.

Here I am not sure about the performance implications of the IN clause, but it does seem faster on the surface due to the minimal number of queries.

Please comment on whether this approach is better.

Also, is there any way to achieve the same in fewer than 3 queries, or a faster way? (See the sketch below.)

Thanks
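One way to collapse approach 2 into a single statement is a data-modifying CTE, sketched here assuming docs.application_id references applications.id; it also removes the need to fetch the id list first:

-- The CTE updates applications and hands its ids to the docs update,
-- all in one statement and one snapshot.
WITH updated AS (
    UPDATE applications
    SET    status = NULL
    WHERE  status = 'skipped'
    RETURNING id
)
UPDATE docs
SET    attr = NULL
WHERE  application_id IN (SELECT id FROM updated);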
I am trying to run SonarQube 7.7 on Docker with a PostgreSQL database.
I am running Artifactory on PostgreSQL using the default port 5432.
When I try SonarQube and change the port to 5430 in my docker-compose file, the web server won't start.
How can I use the same Docker PostgreSQL for both Artifactory and SonarQube?
I mean, how do I tell SonarQube, from the compose file, to use the existing container (of course I logged in to Postgres and created a database for Sonar)?
I need a simple and direct method to rename the admin account for PostgreSQL 9.3. I am using the pgAdmin application but can't get the password changed. I wonder if there is an alternate way to get this done, maybe the SQL Shell or something else in the pgAdmin suite. I recently lost access to the admin account for this PostgreSQL instance. I have been reading up on ways to do it, but none of them have worked out for me yet.
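For reference, once connected to psql as a superuser (if the password is lost, the usual route is to temporarily set local connections to 'trust' in pg_hba.conf and reload), the rename and the password change are one statement each; the new name below is hypothetical:

-- You cannot rename the role you are currently logged in as, so connect as a
-- different superuser first. Renaming also clears an MD5 password, hence the
-- password reset afterwards.
ALTER ROLE postgres RENAME TO new_admin;
ALTER ROLE new_admin WITH PASSWORD 'new-password';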
Hi,

we are getting a syntax error when a RETURNS TABLE column name matches a column of any other table used in the function. If we need to use one of the RETURNS TABLE column names to insert data into another table that has a column of the same name, it seems it is not allowed. Any workaround for this? Please find the code below.


CREATE OR REPLACE FUNCTION public.get_country9() 
 RETURNS TABLE (
  country_id integer,
  country character varying(50),
  last_update timestamp without time zone
  )
AS $BODY$

BEGIN
 RETURN QUERY 
 SELECT
 c.country_id,
 c.country,
 c.last_update
 FROM
 public.country c;
  
insert into public.test_insert
(  city,
  country_id)
 select 'LONDON',222;

 insert into public.test_insert
(  city,
  country_id)
 select 'TORONTO',111;
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY INVOKER;




ERROR:

ERROR: 42601: syntax error at or near "$1"

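One hedged workaround is to rename the RETURNS TABLE columns so they can no longer collide with real column names in the body; callers still see whatever names the column list declares:

CREATE OR REPLACE FUNCTION public.get_country9()
 RETURNS TABLE (
  r_country_id integer,
  r_country character varying(50),
  r_last_update timestamp without time zone
  )
AS $BODY$
BEGIN
 RETURN QUERY
 SELECT c.country_id, c.country, c.last_update
 FROM public.country c;

 -- No clash now: country_id here can only mean the table's column.
 INSERT INTO public.test_insert (city, country_id)
 SELECT 'LONDON', 222;

 INSERT INTO public.test_insert (city, country_id)
 SELECT 'TORONTO', 111;
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY INVOKER;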

Is there any way to create a master-slave setup for PostgreSQL when the slave is a Docker installation?
Hi,

  We are on PostgreSQL 9.7.
There is a table "Table-1" with the following fields.
ID   Name    DOB        SSN           Start_dt    End_dt
1    Kevin   2/3/2001   xxx-xx-2354   1/1/2018    12/31/9999


Whenever there is an update, let's say to the DOB, the current row gets end-dated and a new row gets inserted, like below:
ID   Name    DOB        SSN           Start_dt    End_dt
1    Kevin   2/3/2001   xxx-xx-2354   1/1/2018    3/17/2019
2    Kevin   2/3/2000   xxx-xx-2354   3/17/2019   12/31/9999



My question is about a trigger to capture the old and new values of the DOB, which is a mix of UPDATE and INSERT.
First off, how can we tell whether it is a change to an existing entry or a brand-new insert in this CDC Type-2 scenario?

Please help
Thank you!
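A minimal sketch of the distinguishing mechanism: in a PL/pgSQL trigger, TG_OP reports which operation fired it, and OLD/NEW carry the before and after rows. The audit table and trigger target below are hypothetical:

CREATE OR REPLACE FUNCTION capture_dob_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        -- brand-new version row: only NEW exists
        INSERT INTO dob_audit (row_id, old_dob, new_dob, changed_at)
        VALUES (NEW.id, NULL, NEW.dob, now());
    ELSIF TG_OP = 'UPDATE' AND NEW.dob IS DISTINCT FROM OLD.dob THEN
        -- change to an existing row: OLD and NEW are both available
        INSERT INTO dob_audit (row_id, old_dob, new_dob, changed_at)
        VALUES (OLD.id, OLD.dob, NEW.dob, now());
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_capture_dob
AFTER INSERT OR UPDATE ON table_1
FOR EACH ROW EXECUTE PROCEDURE capture_dob_change();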
I need someone who understands shell scripting in a PostgreSQL environment to look at this.
I am trying to get the pgBackRest (ver. 2.10) application to send notifications by email when it encounters errors.
The error reports (either from a log file or from the command 'pgbackrest info') should be included in the notification.
The emails are to be sent from a RHEL server.
The cron task currently in use sends empty notification emails even when there are no errors.
Steps to install PostgreSQL 6.9 in Oracle VirtualBox on a Windows 10 computer.
Does anyone know of a good tool to handle PostgreSQL differential backup without replication? Currently I am using the native WAL procedure, but I am still investigating. There are some good Linux tools, but I could not find any Windows apps that don't blow my one-person developer budget.
I am running WAL backup and am looking for the correct string for the Postgres configuration file that will copy my last successful WAL backup files to another local directory.

The WAL backup is running fine minute by minute; I now need Postgres to push the finalized files to another directory. If I monitor the WAL directory myself and move files that are somehow still active, Postgres will crash.
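The built-in hook for this is WAL archiving in postgresql.conf: archive_command runs once per completed segment, so the server itself decides when a file is final and nothing still-active is ever touched. A sketch with a hypothetical destination directory (%p is the path of the segment, %f its file name; this mirrors the example in the documentation):

archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

Changing archive_mode requires a server restart; archive_command alone can be changed with a reload.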