PostgreSQL

PostgreSQL is an object-relational database management system with an emphasis on extensibility and standards compliance. As a database server, its primary function is to store data securely and to return it on request from other software applications. PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant and transactional using multiversion concurrency control (MVCC), and handles complex SQL queries using many indexing methods that are not available in other databases. It is cross-platform and runs on many operating systems, including Linux, FreeBSD, OS X, Solaris, and Microsoft Windows.


I am running a small subnet which has several servers performing their varied tasks, one of which is a PostgreSQL box. This small group has a management server which has two NICs - one on the primary net and the other on the "internal".

The two servers in question we will call "Jump" and "PG".

Jump has an IP address of 10.10.10.10 on the primary net and an address of 192.168.20.10 on the internal net.
PG has an IP address of 192.168.20.40.

My iptables rules are as follows:

    iptables -A INPUT -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 5432 -j ACCEPT

    iptables -A FORWARD -d 192.168.20.40/32 -p tcp -m tcp --dport 5432 -m comment --comment "Forward inbound PGAdmin traffic to postgres server" -j ACCEPT

     iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 5432 -j DNAT --to-destination 192.168.20.40:543

     iptables -t nat -A POSTROUTING -s 192.168.26.44 -o eth0 -j MASQUERADE

pgAdmin is configured to address the Jump server on port 5432, and when it attempts contact, the traffic arrives at the PG server and returns to my workstation, but with a hitch.
     Watching tcpdump on the Linux boxes and Wireshark on the Windows box, I can see the TCP stream initiate at my workstation, travel through the Jump box, and arrive at the PG box.  When the packet arrives at the PG box, the source is my workstation.
          IP 10.10.10.11.54564 > 192.168.20.40.5432: Flags [S]
     The PG box sends a reply to my workstation:
         …
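
For comparison, a minimal sketch of a symmetric port-forward for this layout (eth0 and the addresses are the ones from the post; treat the rules below as a sketch, not a drop-in fix):

    # Rewrite the destination of inbound PostgreSQL traffic to the internal host
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 5432 -j DNAT --to-destination 192.168.20.40:5432

    # Allow the rewritten packets to be forwarded on to the internal net
    iptables -A FORWARD -d 192.168.20.40/32 -p tcp --dport 5432 -j ACCEPT

    # Rewrite the source on the internal leg so PG answers the Jump box,
    # which then un-NATs the reply back to the workstation
    iptables -t nat -A POSTROUTING -d 192.168.20.40/32 -p tcp --dport 5432 -j SNAT --to-source 192.168.20.10

If the source is left untouched on the internal leg, PG sees the workstation's 10.10.10.x address and tries to answer it directly, which would match the asymmetric behaviour described above.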

Hi
My vCenter Server service on the VMware virtual appliance 5.5 keeps stopping. Upon investigating, it looks like the database has taken up all the disk space (99%).

I have followed this article, and it only reclaimed a couple of gigabytes; the disk is now showing 97%. See the screenshot attached.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2056448

Can anyone tell me how I can reclaim more disk space please?

Thanks
disk.PNG
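
For orientation, a hedged sketch of how to see where the space is going inside the embedded database and how space is reclaimed after purging rows; the table name in the VACUUM line is purely illustrative:

    -- List the ten largest tables by total size (including indexes and TOAST)
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;

    -- After deleting old rows, a plain VACUUM only marks the space as reusable;
    -- VACUUM FULL rewrites the table and returns the space to the operating system
    VACUUM FULL VERBOSE some_big_table;

Note that VACUUM FULL takes an exclusive lock and needs enough free space for the rewritten copy, so it is normally run with the vCenter services stopped.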
We support a system where custom fields are stored as hstore fields, like this:

select udf_data from inventory



""LocOver"=>"", "freight"=>"0", "3M Codes"=>"", "V_Partno"=>"110SP-60", "brokerage"=>"0", "Dentsply C"=>"""

Here is the data for one record. LocOver, freight, 3M Codes, V_Partno, brokerage, and Dentsply C are the field names in the system.

I want to update the V_Partno key in the udf_data column with the value from the inventory.part_no column.

So the update statement should be like this:

update customers
set udf_data = misc1
where udf_data ?| '{cust_type}'

But hstore has its own update syntax, like this:

update customers set udf_data = udf_data || '"cust_type"=>"test"'::hstore

Which does not work. Can anyone help?
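
For what it's worth, a hedged sketch of a statement that sets one hstore key from another column on the same row, using the inventory table and names from the post (the WHERE clause restricting it to rows that already carry the key is an assumption):

    UPDATE inventory
    SET udf_data = udf_data || hstore('V_Partno', part_no)
    WHERE udf_data ? 'V_Partno';

hstore('V_Partno', part_no) builds a one-key hstore from the column value for each row, and || merges it into the existing data, overwriting that key only.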
I want to convert my PostgreSQL 7.4 SQL_ASCII database to a PostgreSQL 9.4 UTF8 database. How can I do it? Please help.
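
A rough sketch of the usual dump-and-reload path. SQL_ASCII means the stored bytes were never validated, so you have to know (or guess) their real encoding; LATIN1 below is only an assumption, and the database names are placeholders:

    # Dump the old 7.4 database; the bytes come out exactly as stored
    pg_dump olddb > olddb.sql

    # Convert the dump to UTF-8, assuming the stored bytes are really Latin-1
    iconv -f LATIN1 -t UTF-8 olddb.sql > olddb_utf8.sql

    # Create the target database as UTF8 on the 9.4 server and load the converted dump
    createdb -E UTF8 -T template0 newdb
    psql -d newdb -f olddb_utf8.sql

If the data is a mix of encodings, iconv stops at the first byte sequence it cannot map, which at least tells you where to look.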
Greetings,

I have a 700 GB Postgres database which has a read-only standby replica, and I wish to migrate and upgrade to a new server with the latest Postgres version. What would be the ideal migration path for this situation? It's in a Linux environment.
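
For a database this size the choice is usually between a logical dump/reload and pg_upgrade on the new host; a hedged sketch of both, with host names, paths and version numbers as placeholders (the standby has to be rebuilt or resynchronised after either path):

    # Option 1: logical dump piped straight into the new cluster (simple, but slow at 700 GB)
    pg_dump -h oldhost mydb | psql -h newhost mydb

    # Option 2: copy or mount the old data directory on the new server, then run pg_upgrade
    pg_upgrade \
      -b /usr/lib/postgresql/OLD/bin  -B /usr/lib/postgresql/NEW/bin \
      -d /var/lib/postgresql/OLD/main -D /var/lib/postgresql/NEW/main \
      --link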
Simply put.. I connected to a free micro instance of a postgres DB I created on AWS RDS.

It worked fine. I set up a database and a couple tables on it.

Now that I'm trying to connect to it from my app.. it's just timing out.

AND.. I can no longer connect to it from PSequel or.. anything. It just keeps timing out.

I'm not even sure what more detailed questions to ask or what direction to poke in on this one.
Hi, I am writing after submitting my code for review. So: I have to convert an Oracle PL/SQL application into an equivalent PostgreSQL application. In addition to the tables, views, etc., I need to convert procedures and functions.
I find myself having to convert an Oracle PL/SQL procedure with 6 input parameters (the search filter) and 2 output parameters (one numeric for the count of extracted records, the other of cursor type for the resulting cursor).
Well, that cursor type does not exist as such in Postgres, so I wondered how to get around the problem with a minimum goal: the structure of the procedure/function should remain unaltered in its input and output parameters.
This is because the Java application must be able to call the procedure/function in both Oracle and Postgres without modifying the call itself.
Is this possible?
So far I have worked with the refcursor solution and with the TABLE solution (as the output data type).
With the first I could not get the result of the cursor, only the error "unnamed_portal_1", while with the TABLE solution I finally got a working result, but it forces me to modify the equivalent procedure running in Oracle.
Attached are the test environment and the code tested in the various versions.
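
For reference, a minimal sketch of the refcursor pattern with a hypothetical function and table (get_data and items are not from the attached code). A refcursor OUT parameter only carries a portal name, so the caller has to FETCH from that portal inside the same transaction, which is also why an auto-committing client only ever sees "unnamed_portal_1":

    CREATE OR REPLACE FUNCTION get_data(
        OUT on_total_count numeric,
        OUT oc_recordset refcursor)
    AS $$
    BEGIN
        SELECT count(*) INTO on_total_count FROM items;
        oc_recordset := 'items_portal';            -- name the portal explicitly
        OPEN oc_recordset FOR SELECT * FROM items;
    END;
    $$ LANGUAGE plpgsql;

    -- The caller must stay inside one transaction:
    BEGIN;
    SELECT * FROM get_data();      -- returns the count and the portal name
    FETCH ALL IN items_portal;     -- this is the actual result set
    COMMIT;

On the Java side, the PostgreSQL JDBC driver can read a refcursor column back as a ResultSet as long as autocommit is off, which keeps the two OUT parameters intact.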
The same slightly changed function returns only the count; I am having difficulty with the cursor result sets.
If I use function3 with a refcursor, I am back to "unnamed_portal_1". Can anyone find a way to extract the second cursor? The 2 OUT variables must remain, since the Java application must invoke both the Oracle function and the one converted to Postgres without changing the call.
function2.sql
function3.sql
I came up with this function, but I cannot return any data.
Testing the individual queries, there are 5 records that should be extracted.
Where am I going wrong?
The function MUST keep the input parameters that are highlighted and the two OUT parameters.
This is because it is a conversion of an equivalent Oracle function, which must be invoked by the same application whether it points to Oracle or to Postgres.
function1.sql
CREATE OR REPLACE FUNCTION pck_gestione_tracce_sistema.p_get_rotta_traccia_sistema6(
    IN in_id_traccia_sistema numeric,
    IN in_id_versione numeric,
    IN in_limite_inferiore numeric,
    IN in_limite_superiore numeric,
    IN id_limite_inferiore_data date,
    IN id_limite_superiore_data date,
    IN in_id_viaggio numeric,
    OUT on_out_total_count numeric,
    OUT oc_out_recordset character varying)
  RETURNS record AS
$BODY$
DECLARE
   LD_LIMITE_INFERIORE_DATA Date ;
   LD_LIMITE_SUPERIORE_DATA Date ;
   LK_PACKAGE_NAME varchar(1000);
   LK_FUNCTION_NAME varchar(1000);
   LN_DEBUG_INFO Integer;
 
   ref refcursor;
   ref1 refcursor;

Begin  

   
   LN_DEBUG_INFO := 1;
   LK_PACKAGE_NAME := 'PCK_GESTIONE_DATI';
   LK_FUNCTION_NAME := 'P_GET_ROTTA_TRACCIA_SISTEMA';
   LD_LIMITE_INFERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_INFERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');
   LD_LIMITE_SUPERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_SUPERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');

    If (LN_DEBUG_INFO = 1) Then
      PERFORM nereus.p_logging(IN_TIPO        => 'I'::character varying,
                IN_CODICE      => 'INFO'::character varying,
                IN_DESCRIZIONE => 'Input call'::character varying,
                IN_MODULO      => LK_PACKAGE_NAME::character varying, --LK_PACKAGE_NAME,
                IN_FUNZIONE    => LK_FUNCTION_NAME::character varying, --LK_FUNCTION_NAME,
               …

CREATE OR REPLACE FUNCTION pck_gestione_tracce_sistema.p_get_rotta_traccia_sistema10(
    IN in_id_traccia_sistema numeric,
    IN in_id_versione numeric,
    IN in_limite_inferiore numeric,
    IN in_limite_superiore numeric,
    IN id_limite_inferiore_data date,
    IN id_limite_superiore_data date,
    IN in_id_viaggio numeric,
    OUT on_out_total_count numeric,
    OUT oc_out_recordset character varying)
  RETURNS SETOF record AS
$BODY$
DECLARE
   LD_LIMITE_INFERIORE_DATA Date ;
   LD_LIMITE_SUPERIORE_DATA Date ;
   LK_PACKAGE_NAME varchar(1000);
   LK_FUNCTION_NAME varchar(1000);
   LN_DEBUG_INFO Integer;
 
   ---ref refcursor;
   ---ref1 refcursor;
   counter record;
   setdati record;

Begin  

   
   LN_DEBUG_INFO := 1;
   LK_PACKAGE_NAME := 'PCK_GESTIONE_DATI';
   LK_FUNCTION_NAME := 'P_GET_ROTTA_TRACCIA_SISTEMA';
   LD_LIMITE_INFERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_INFERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');
   LD_LIMITE_SUPERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_SUPERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');

    If (LN_DEBUG_INFO = 1) Then
      PERFORM nereus.p_logging(IN_TIPO        => 'I'::character varying,
                IN_CODICE      => 'INFO'::character varying,
                IN_DESCRIZIONE => 'Input call'::character varying,
                IN_MODULO      => LK_PACKAGE_NAME::character varying, --LK_PACKAGE_NAME,
                IN_FUNZIONE    => …
Greetings,

I have Postgres 9.0.22 on Debian 3.0 and am trying to restore a base online backup from a particular day to a new VM that has Postgres 9.0.23, and I am running into a lot of issues. It keeps asking for WAL files; I finally placed the WAL file it needs, and now it says it has an invalid length. Please help.
Today we have one Odoo instance with one front end and one Postgres DB. How can we scale it, given that we want to use it with 50k+ users? :)
I have a server which runs PostgreSQL and Django on Apache, and it hangs every couple of days. I understand that this would require PostgreSQL optimization, but there are so many parameters that I am not sure where to start. As of now, the server is hanging. Please let me know what kind of information you need, so that I can provide it and you can better assist me.

Also, I have an iptables firewall set up and am not sure where exactly the problem is. The website hangs about once a day, and when I restart Apache it works for another day.
Hello Experts,

Currently the Atlassian applications along with the Postgres databases reside on 1 VM.

I am looking for any recommendations for splitting up the applications/databases and platform options to integrate the Atlassian applications on (VM, Docker or AWS).

We have 100 users split between the US and UK.
Hello, I am using Ubuntu 16.04 and Python psycopg2 to insert about 2.1 million rows (5 columns) into a PostgreSQL table with a loop that reads a 120GB binary file.   I have an i7-7600K CPU with 16GB of RAM.   At around 280,000 rows my system runs out of memory and my computer locks up or the Python IDE crashes.   Could someone tell me how to fix this problem?   I tried turning off auto-commit and committing every 1000 inserts.   Any help would be greatly appreciated.    Thank you.

Below is a sample of my code

for entry in entries:
    # retrieve data here in the loop

    try:
        query = "INSERT INTO registryTable (pkt_num, timestamp, name, address, gender) VALUES (%s, %s, %s, %s, %s);"
        data = pkt_num, timestamp, name, address, gender
        cur.execute(query, data)
    except psycopg2.DatabaseError, e:
        print(e)
    i += 1
    if i % 1000 == 0:
        conn.commit()

David
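
As a point of comparison, a hedged sketch of the same loop using psycopg2.extras.execute_batch, which sends the rows in pages instead of one network round trip per row; conn is assumed to be an open psycopg2 connection, and read_batches() is a placeholder for whatever yields lists of row tuples from the binary file:

    from psycopg2.extras import execute_batch

    query = ("INSERT INTO registryTable (pkt_num, timestamp, name, address, gender) "
             "VALUES (%s, %s, %s, %s, %s)")

    with conn.cursor() as cur:
        for batch in read_batches(size=1000):            # e.g. lists of 1000 row tuples
            execute_batch(cur, query, batch, page_size=1000)
            conn.commit()                                # commit per batch, keep memory flat

If memory still climbs with this shape, the growth is more likely in the code that parses the 120GB file than in the inserts themselves, so it is worth watching the process size with the database calls temporarily disabled.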
Hello,

I have this SQL that is giving me a string index out of range exception:

SELECT month(from_unixtime(wo.createdtime/1000)) "Month",
week(from_unixtime(wo.createdtime/1000)) "Month/Week",
count(wo.workorderid) "Number of opened request",
count(case when std.ispending='0' THEN 1 ELSE NULL END) "Number of closed request" 
FROM WorkOrder wo 
LEFT JOIN WorkOrderStates wos ON wo.WORKORDERID=wos.WORKORDERID 
LEFT JOIN StatusDefinition std ON wos.STATUSID=std.STATUSID 
WHERE (wo.ISPARENT='1') and wo.CREATEDTIME >= <from_thisyear> 
AND wo.CREATEDTIME <= <to_thisyear> group by month(from_unixtime(wo.createdtime/1000)),week(from_unixtime(wo.createdtime/1000)) 
order by 1




Error
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
	at java.lang.String.substring(Unknown Source)
	at com.adventnet.servicedesk.reports.utils.ReportUtil.parseDateColumns(ReportUtil.java:2422)
	at com.adventnet.servicedesk.reports.utils.CoreReportDesigner.generateSQLReport(CoreReportDesigner.java:8151)
	at com.adventnet.servicedesk.reports.utils.CoreReportDesigner.generateSQLReport(CoreReportDesigner.java:8020)
	at com.adventnet.servicedesk.reports.action.ReportRequestHandler.runSQLQueryReport(ReportRequestHandler.java:1684)
	at com.adventnet.servicedesk.reports.action.ReportRequestHandler.handleRequest(ReportRequestHandler.java:2002)
	at com.adventnet.servicedesk.reports.action.CustomReportHandler.handleRequest(CustomReportHandler.java:195)
	at 


We are setting up an SFTP server with ProFTPD and a PostgreSQL back end.

The users can authenticate and upload and download fine.  However, we are running into some sort of timeout issue during upload/download transfers.  Our SFTP server is expected to handle large, gigabyte-sized file transfers.

Checking /var/log/messages, we see that while the client is uploading/downloading, an SSH2 session is closed and a new one opened at a consistent interval: around 45 seconds on one server and 30 seconds on another VM that we are testing on.  It is also followed by a signal 11 from ProFTPD.

Mar 27 22:52:38 ftsftpsvr02 proftpd[21446]:  - SSH2 session opened.
Mar 27 22:52:49 ftsftpsvr02 proftpd[21449]:  - SSH2 session opened.
Mar 27 22:53:27 ftsftpsvr02 proftpd[21449]:  - ProFTPD terminating (signal 11)
Mar 27 22:53:27 ftsftpsvr02 proftpd[21449]:  - SSH2 session closed.
Mar 27 22:53:31 ftsftpsvr02 proftpd[21453]: - SSH2 session opened.
Mar 27 22:53:39 ftsftpsvr02 proftpd[21446]:  - SSH2 session closed.
Mar 27 22:54:10 ftsftpsvr02 proftpd[21453]:  - ProFTPD terminating (signal 11)
Mar 27 22:54:10 ftsftpsvr02 proftpd[21453]:  - SSH2 session closed.
Mar 27 22:54:13 ftsftpsvr02 proftpd[21458]:  - SSH2 session opened.
Mar 27 22:55:42 ftsftpsvr02 proftpd[21458]:  - SSH2 session closed.

We checked both proftpd.conf and sshd_config and there isn't any timeout specified.
Greetings,

I have been tasked with upgrading a Postgres cluster (replication with a read-only standby node) from 9.0 to 9.4, 9.5 or 9.6. What's the best way to upgrade, and what advantages do the various versions offer over the current one? This is in a Linux environment. Please advise.

I am running Postgres 8.4 on CentOS 6.3.  I have configured the pg_hba.conf file to allow remote connections.  Everything works well except that about once a week the pg_hba.conf file gets reset to a default file -- my changes are completely lost.  When this happens we lose connectivity, of course.  I've made a backup of the file, so restoring it and restarting the Postgres service corrects the problem, but I can't figure out why the file is getting reset.

This is a brand new server.  I am the only admin.  I've configured everything on the server and have reviewed all Cron jobs to ensure nothing is running that could possibly be resetting the file.  Any ideas?
Hello Experts,

I would like to use Spark Streaming to identify in real time whether a record already exists in the database.

The data that we receive contains the customerId, and I would like to be able to determine whether the customerId we are receiving is already stored in a specific table in our database. If not, I will go ahead and store the record; if it is, I won't store it again.

The customerId is a value generated by an external application.
The data is received from Kafka 24/7.
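
If the uniqueness check can be pushed down to PostgreSQL itself (9.5 or later), the streaming job can simply attempt the insert and let the database discard duplicates; the table and column names below are assumptions:

    -- Requires a unique constraint or primary key on customer_id
    INSERT INTO customers (customer_id, payload)
    VALUES ('abc-123', '...')
    ON CONFLICT (customer_id) DO NOTHING;

That keeps the Spark side stateless; the alternative is to cache known customerIds in the streaming job and only query the table on cache misses.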
I'm stuck (says a novice with a bit of gifted code).

Error : ERROR:  syntax error at or near "1"
LINE 5:        INNER JOIN  LATERAL ( SELECT TOP 1
                                                ^

UPDATE ACC
SET ACC.ACC_EC = SITESmin.ACC_EC, 
    ACC.ACC_NC = SITESmin.ACC_NC
FROM    ACC
       INNER JOIN LATERAL ( SELECT TOP 1
                                *
                      FROM      SITES
                      ORDER BY  ( acc_ec - site_etg ) * ( acc_ec - site_etg ) + (acc_ncb - site_ntg ) * ( acc_ncb - site_ntg )
                    ) SITESmin;




MACOSX 10.12.3 postgres (PostgreSQL) 9.4.6

I thought LATERAL was OK according to: https://blog.heapanalytics.com/postgresqls-powerful-new-join-type-lateral/

Is it LIMIT and not TOP?
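
Yes, it is LIMIT: TOP is SQL Server syntax, and in PostgreSQL the row limit goes at the end of the subquery as LIMIT 1. PostgreSQL's UPDATE also rejects a table qualifier on the left-hand side of SET (SET acc_ec = ..., not SET ACC.ACC_EC = ...). As a hedged illustration of the LATERAL/LIMIT shape in a plain SELECT, using the names from the post:

    SELECT a.*, sitesmin.*
    FROM acc a
    CROSS JOIN LATERAL (
        SELECT *
        FROM sites s
        ORDER BY (a.acc_ec  - s.site_etg) * (a.acc_ec  - s.site_etg)
               + (a.acc_ncb - s.site_ntg) * (a.acc_ncb - s.site_ntg)
        LIMIT 1
    ) AS sitesmin;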
I am having trouble changing the type and data of certain columns that are currently stored in Postgres as text (and that contain blanks) to either integer or double precision. A simple ALTER ... USING ... statement won't work because there are blanks that haven't been changed to NULL.

Attached: a screenshot of the current sample data and NeedsToLookLike.PNG showing what it needs to look like.
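
A hedged sketch of the usual workaround: fold the blank-to-NULL conversion into the USING expression itself, so the cast never sees an empty string (table and column names are placeholders):

    -- NULLIF turns trimmed empty strings into NULL before the cast is applied
    ALTER TABLE my_table
        ALTER COLUMN my_col TYPE double precision
        USING NULLIF(trim(my_col), '')::double precision;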