PostgreSQL

2K Solutions · 3 Articles & Videos · 1K Contributors

PostgreSQL is an object-relational database management system with an emphasis on extensibility and standards compliance. As a database server, its primary function is to store data securely and to return it at the request of other software applications. PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant and transactional using multiversion concurrency control (MVCC), and handles complex SQL queries using indexing methods not available in many other databases. It is cross-platform and runs on many operating systems, including Linux, FreeBSD, OS X, Solaris, and Microsoft Windows.


I want to convert my PostgreSQL 7.4 SQL_ASCII database to a PostgreSQL 9.4 UTF8 database. How can I do it? Please help.
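One common path (a sketch only — host names, database names, and paths below are placeholders, and the intermediate encoding depends on what bytes your SQL_ASCII data actually contains) is a dump-and-restore into a freshly created UTF8 database:

```shell
# Dump from the old 7.4 cluster as a plain SQL dump, which travels best
# across major versions. Prefer the newer pg_dump binary if it can still
# talk to a 7.4 server; otherwise use the 7.4 pg_dump.
pg_dump -h oldhost -U postgres mydb > mydb.sql

# On the 9.4 server, create the target database with UTF8 encoding.
createdb -U postgres -E UTF8 -T template0 mydb_utf8

# Restore. SQL_ASCII stores raw bytes, so tell psql what those bytes
# really are (LATIN1 here is a guess — adjust to your data). Bytes that
# cannot be converted will raise errors instead of being stored silently;
# those rows may need cleanup first (e.g. with iconv).
PGCLIENTENCODING=LATIN1 psql -U postgres -d mydb_utf8 -f mydb.sql
```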
0

The query is below.
The attached file shows the existing worksheet with the columns this query outputs.
It also shows the desired worksheet, with some columns removed and the postcode column included twice: the extra postcode column should appear as the first column.
CREATE OR REPLACE VIEW ADDRESS_VIEW

AS

SELECT
AD.ADDRESS_DETAIL_PID as ADDRESS_DETAIL_PID,
AD.STREET_LOCALITY_PID as STREET_LOCALITY_PID,
AD.LOCALITY_PID as LOCALITY_PID,
AD.BUILDING_NAME as BUILDING_NAME,

AD.LOT_NUMBER_PREFIX as LOT_NUMBER_PREFIX,
AD.LOT_NUMBER as LOT_NUMBER,
AD.LOT_NUMBER_SUFFIX as LOT_NUMBER_SUFFIX,

FTA.NAME as FLAT_TYPE,
AD.FLAT_NUMBER_PREFIX as FLAT_NUMBER_PREFIX,
AD.FLAT_NUMBER as FLAT_NUMBER,
AD.FLAT_NUMBER_SUFFIX as FLAT_NUMBER_SUFFIX,

LTA.NAME as LEVEL_TYPE,
AD.LEVEL_NUMBER_PREFIX as LEVEL_NUMBER_PREFIX,
AD.LEVEL_NUMBER as LEVEL_NUMBER,
AD.LEVEL_NUMBER_SUFFIX as LEVEL_NUMBER_SUFFIX,

AD.NUMBER_FIRST_PREFIX as NUMBER_FIRST_PREFIX,
AD.NUMBER_FIRST as NUMBER_FIRST,
AD.NUMBER_FIRST_SUFFIX as NUMBER_FIRST_SUFFIX,
AD.NUMBER_LAST_PREFIX as NUMBER_LAST_PREFIX,
AD.NUMBER_LAST as NUMBER_LAST,
AD.NUMBER_LAST_SUFFIX as NUMBER_LAST_SUFFIX,

SL.STREET_NAME as STREET_NAME,
SL.STREET_CLASS_CODE as STREET_CLASS_CODE,
SCA.NAME as STREET_CLASS_TYPE,
SL.STREET_TYPE_CODE as STREET_TYPE_CODE,
SL.STREET_SUFFIX_CODE as STREET_SUFFIX_CODE,
SSA.NAME as STREET_SUFFIX_TYPE,

L.LOCALITY_NAME as LOCALITY_NAME,

ST.STATE_ABBREVIATION as …
0
Greetings,

I have a 700 GB Postgres database with a read-only standby replica, and I want to migrate and upgrade it to a new server running the latest Postgres version. What would be the ideal migration path for this situation? It's in a Linux environment.
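For a database this size, pg_upgrade in link mode is usually much faster than dump/restore. A sketch, assuming the data is first copied to the new server and that both old and new binaries are installed there (version numbers and paths below are examples):

```shell
# pg_upgrade needs both old and new binaries and data dirs on one machine.
/usr/lib/postgresql/10/bin/pg_upgrade \
  --old-datadir=/var/lib/postgresql/9.x/main \
  --new-datadir=/var/lib/postgresql/10/main \
  --old-bindir=/usr/lib/postgresql/9.x/bin \
  --new-bindir=/usr/lib/postgresql/10/bin \
  --link   # hard-link files instead of copying; much faster for 700 GB

# The standby cannot be upgraded in place: once the primary is upgraded,
# rebuild the replica from scratch, e.g. with pg_basebackup.
```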
0
Simply put: I connected to a free micro instance of a Postgres DB I created on AWS RDS.

It worked fine. I set up a database and a couple of tables on it.

Now that I'm trying to connect to it from my app, it just times out.

And I can no longer connect to it from PSequel or anything else; every connection times out.

I'm not even sure what more detailed questions to ask or what direction to poke in on this one.
0
Hi, I'm writing after sending my code for review. I have to convert an Oracle PL/SQL application into an equivalent PostgreSQL application. In addition to the tables and views, I need to convert procedures and functions.
I have to convert an Oracle PL/SQL procedure with 6 input parameters (the search filter) and 2 output parameters (one numeric for the count of extracted records, the other of type cursor for the result set).
Postgres does not have Oracle's cursor type as such, so I wondered how to work around the problem with one minimum requirement: the input and output parameters of the procedure/function must remain unchanged.
This is because the Java application must be able to call the procedure/function on both Oracle and Postgres without modifying the call itself.
Is it possible?
So far I have tried a refcursor solution and a TABLE solution (as the output data type).
With the first I could not get the cursor's result, only the error "unnamed_portal_1"; with the TABLE solution I finally got results, but it forces me to modify the equivalent procedure running in Oracle.
Attached are the test environment and the code tested in the various versions.
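For reference, a refcursor returned via an OUT parameter is only readable inside the transaction that opened it, and "unnamed portal" errors typically mean the portal was fetched outside that transaction or under an auto-generated name. Assigning the refcursor variable a name before OPEN gives a stable portal to FETCH. A minimal sketch (function, table, and cursor names are illustrative, not from the attached code):

```sql
CREATE OR REPLACE FUNCTION get_data(OUT on_count numeric, OUT oc_cur refcursor)
AS $$
BEGIN
  oc_cur := 'my_cursor';            -- give the portal a stable name
  SELECT count(*) INTO on_count FROM my_table;
  OPEN oc_cur FOR SELECT * FROM my_table;
END;
$$ LANGUAGE plpgsql;

-- The cursor lives only until COMMIT, so fetch in the same transaction:
BEGIN;
SELECT * FROM get_data();
FETCH ALL IN my_cursor;
COMMIT;
```

From JDBC the same applies: turn autocommit off, call the function, and read the refcursor OUT parameter as a ResultSet before committing.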
0
The same slightly changed function returns only the count; I have difficulty with the cursor result sets.
If I use function3 with a refcursor, I get "unnamed_portal_1" again. Can anyone find a way to extract the second cursor? The 2 OUT variables must remain, since the Java application must invoke both the Oracle function and the one converted to Postgres without changing the call.
function2.sql
function3.sql
0
I found this function, but I cannot get it to return any data.
Testing the individual queries, there are 5 records that should be extracted.
Where am I going wrong?
The function MUST keep the highlighted INPUT parameters and the two OUT parameters.
This is because it is the conversion of an equivalent Oracle function, which must be invoked by the same application whether it points to Oracle or to Postgres.
function1.sql
0
CREATE OR REPLACE FUNCTION pck_gestione_tracce_sistema.p_get_rotta_traccia_sistema6(
    IN in_id_traccia_sistema numeric,
    IN in_id_versione numeric,
    IN in_limite_inferiore numeric,
    IN in_limite_superiore numeric,
    IN id_limite_inferiore_data date,
    IN id_limite_superiore_data date,
    IN in_id_viaggio numeric,
    OUT on_out_total_count numeric,
    OUT oc_out_recordset character varying)
  RETURNS record AS
$BODY$
DECLARE
   LD_LIMITE_INFERIORE_DATA Date ;
   LD_LIMITE_SUPERIORE_DATA Date ;
   LK_PACKAGE_NAME varchar(1000);
   LK_FUNCTION_NAME varchar(1000);
   LN_DEBUG_INFO Integer;
 
   ref refcursor;
   ref1 refcursor;

Begin  

   
   LN_DEBUG_INFO := 1;
   LK_PACKAGE_NAME := 'PCK_GESTIONE_DATI';
   LK_FUNCTION_NAME := 'P_GET_ROTTA_TRACCIA_SISTEMA';
   LD_LIMITE_INFERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_INFERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');
   LD_LIMITE_SUPERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_SUPERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');

    If (LN_DEBUG_INFO = 1) Then
      PERFORM nereus.p_logging(IN_TIPO        => 'I'::character varying,
                IN_CODICE      => 'INFO'::character varying,
                IN_DESCRIZIONE => 'Input call'::character varying,
                IN_MODULO      => LK_PACKAGE_NAME::character varying, --LK_PACKAGE_NAME,
                IN_FUNZIONE    => LK_FUNCTION_NAME::character varying, --LK_FUNCTION_NAME,
               …
0
CREATE OR REPLACE FUNCTION pck_gestione_tracce_sistema.p_get_rotta_traccia_sistema10(
    IN in_id_traccia_sistema numeric,
    IN in_id_versione numeric,
    IN in_limite_inferiore numeric,
    IN in_limite_superiore numeric,
    IN id_limite_inferiore_data date,
    IN id_limite_superiore_data date,
    IN in_id_viaggio numeric,
    OUT on_out_total_count numeric,
    OUT oc_out_recordset character varying)
  RETURNS SETOF record AS
$BODY$
DECLARE
   LD_LIMITE_INFERIORE_DATA Date ;
   LD_LIMITE_SUPERIORE_DATA Date ;
   LK_PACKAGE_NAME varchar(1000);
   LK_FUNCTION_NAME varchar(1000);
   LN_DEBUG_INFO Integer;
 
   ---ref refcursor;
   ---ref1 refcursor;
   counter record;
   setdati record;

Begin  

   
   LN_DEBUG_INFO := 1;
   LK_PACKAGE_NAME := 'PCK_GESTIONE_DATI';
   LK_FUNCTION_NAME := 'P_GET_ROTTA_TRACCIA_SISTEMA';
   LD_LIMITE_INFERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_INFERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');
   LD_LIMITE_SUPERIORE_DATA  := TO_DATE(TO_CHAR(ID_LIMITE_SUPERIORE_DATA,'DD/MM/YYYY HH24:MI:SS'),'DD/MM/YYYY HH24:MI:SS');

    If (LN_DEBUG_INFO = 1) Then
      PERFORM nereus.p_logging(IN_TIPO        => 'I'::character varying,
                IN_CODICE      => 'INFO'::character varying,
                IN_DESCRIZIONE => 'Input call'::character varying,
                IN_MODULO      => LK_PACKAGE_NAME::character varying, --LK_PACKAGE_NAME,
                IN_FUNZIONE    => …
0
Greetings,

I have Postgres 9.0.22 on Debian 3.0 and I'm trying to restore an online base backup from a particular day to a new VM that has Postgres 9.0.23, and I'm running into a lot of issues. It keeps asking for WAL files; I finally placed the WAL file it needs, and now it says the file has an invalid length. Please help.
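For reference, a 9.0-era restore is driven by a recovery.conf in the new data directory (the path and target time below are placeholders). Every WAL segment from the start of the base backup through the recovery target must be available to restore_command; an "invalid length" complaint usually means a segment is truncated, still compressed, or belongs to a different backup/timeline:

```
# recovery.conf — PostgreSQL 9.0 syntax
restore_command = 'cp /path/to/wal_archive/%f %p'
recovery_target_time = '2017-03-01 12:00:00'
```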
0

Hi Team,

I have a Python script that inserts data into PostgreSQL from an XML file. However, since we updated Python to 3.6.1, it does not add any data to the PostgreSQL database and does not throw any error message on the command line.
Can you please suggest what goes wrong in the code below?
#!/usr/bin/python3

# import modules
import sys
import datetime
import xml.etree.ElementTree as ET

import psycopg2

# current date and time
now = datetime.datetime.now()
dt = now.strftime("%Y%m%dT%H%M%S")

# incremental variable
x = 0

# parse the XML file (note: the original mixed tabs and spaces here,
# which Python 3 rejects with TabError)
with open(sys.argv[1], 'rt', encoding="utf8") as f:
    tree = ET.parse(f)

# connection to the PostgreSQL database
try:
    conn = psycopg2.connect("dbname='####' user='postgres' password='XXXX'")
except psycopg2.OperationalError as e:
    print("Unable to connect to the database:", e)
    sys.exit(1)
cur = conn.cursor()

# access the xml tree element nodes
try:
    for node in tree.findall('.//tu'):
        src = node.find('tuv[1]/seg')
        tgt = node.find('tuv[2]/seg')

        # psycopg2 accepts str directly; encoding to bytes is unnecessary
        source = src.text
        target = tgt.text

        # query to database table (remember conn.commit() after the loop)
        query = "INSERT INTO MT_DB (SRC_LOCALE,SRC_DATA,TGT_LOCALE,TGT_DATA,DOMAIN,DATETIME,TYPE) VALUES (%s, %s, %s, %s, %s, %s, %s)"
        data = (sys.argv[2], source, sys.argv[3], target, sys.argv[4], str(dt), sys.argv[5])

        #for locale swap with data
        #data = 


0
Today we have one Odoo instance, with one front end and one Postgres DB. How do we scale it for 50k+ users? :)
0
I have a server which runs PostgreSQL and Django on Apache, and it hangs every couple of days. I understand it probably needs PostgreSQL tuning, but there are so many parameters that I'm not sure where to start. As of now, the server is hanging. Please let me know what information you need so that I can provide it and you can better assist me.

I also have an iptables firewall set up, so I'm not sure exactly where the problem is. The website hangs about once a day, and when I restart Apache it works for another day.
0
Hello Experts,

Currently the Atlassian applications along with the Postgres databases reside on 1 VM.

I am looking for any recommendations on splitting up the applications and databases, and on platform options to run the Atlassian applications on (VM, Docker, or AWS).

We have 100 users split between the US and UK.
0
Hello, I am using Ubuntu 16.04 and Python psycopg2 to insert about 2.1 million rows (5 columns) into a PostgreSQL table, in a loop that reads a 120 GB binary file. I have an i7-7600K CPU with 16 GB RAM. At around 280,000 rows my system runs out of memory and my computer locks up, or the Python IDE crashes. Could someone tell me how to fix this problem? I tried turning off auto-commit and committing every 1000 inserts. Any help would be greatly appreciated. Thank you.

Below is a sample of my code

for entry in entries:
    # retrieve data here in the loop

    try:
        query = "INSERT INTO registryTable (pkt_num, timestamp, name, address, gender) VALUES (%s, %s, %s, %s, %s);"
        data = (pkt_num, timestamp, name, address, gender)
        cur.execute(query, data)
    except psycopg2.DatabaseError as e:   # "as e" — the ", e" form is Python 2 only
        print(e)
    i += 1
    if i % 1000 == 0:
        conn.commit()

David
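One way to cut both memory and round trips is to stream rows in fixed-size batches with psycopg2's execute_batch (psycopg2 ≥ 2.7) instead of one execute() per row. A sketch — the table and column names mirror the question, the connection handling is assumed, and `rows` stands in for whatever the binary-file loop yields:

```python
import itertools

def chunked(iterable, size):
    """Yield lists of at most `size` items from any iterable, without
    materializing the whole input in memory."""
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, size))
        if not chunk:
            return
        yield chunk

def bulk_insert(conn, rows, batch_size=1000):
    """rows: iterable of (pkt_num, timestamp, name, address, gender) tuples."""
    from psycopg2.extras import execute_batch
    query = ("INSERT INTO registryTable "
             "(pkt_num, timestamp, name, address, gender) "
             "VALUES (%s, %s, %s, %s, %s)")
    with conn.cursor() as cur:
        for batch in chunked(rows, batch_size):
            execute_batch(cur, query, batch)
            conn.commit()   # commit per batch keeps transactions bounded
```

If memory still climbs, the leak is more likely in the reading loop itself (e.g. accumulating all parsed rows in a list) than in psycopg2; feeding `bulk_insert` a generator avoids that.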
0
Hello,

I have this SQL query that is raising a string-index-out-of-range exception:

SELECT month(from_unixtime(wo.createdtime/1000)) "Month",
week(from_unixtime(wo.createdtime/1000)) "Month/Week",
count(wo.workorderid) "Number of opened request",
count(case when std.ispending='0' THEN 1 ELSE NULL END) "Number of closed request" 
FROM WorkOrder wo 
LEFT JOIN WorkOrderStates wos ON wo.WORKORDERID=wos.WORKORDERID 
LEFT JOIN StatusDefinition std ON wos.STATUSID=std.STATUSID 
WHERE (wo.ISPARENT='1') and wo.CREATEDTIME >= <from_thisyear> 
AND wo.CREATEDTIME <= <to_thisyear> group by month(from_unixtime(wo.createdtime/1000)),week(from_unixtime(wo.createdtime/1000)) 
order by 1




Error
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
	at java.lang.String.substring(Unknown Source)
	at com.adventnet.servicedesk.reports.utils.ReportUtil.parseDateColumns(ReportUtil.java:2422)
	at com.adventnet.servicedesk.reports.utils.CoreReportDesigner.generateSQLReport(CoreReportDesigner.java:8151)
	at com.adventnet.servicedesk.reports.utils.CoreReportDesigner.generateSQLReport(CoreReportDesigner.java:8020)
	at com.adventnet.servicedesk.reports.action.ReportRequestHandler.runSQLQueryReport(ReportRequestHandler.java:1684)
	at com.adventnet.servicedesk.reports.action.ReportRequestHandler.handleRequest(ReportRequestHandler.java:2002)
	at com.adventnet.servicedesk.reports.action.CustomReportHandler.handleRequest(CustomReportHandler.java:195)
	at 


0
We are setting up an SFTP server with ProFTPD and a PostgreSQL back end.

Users can authenticate, upload, and download fine. However, we are running into some sort of timeout issue during transfers. Our SFTP server is expected to handle large, gigabyte-scale file transfers.

Checking /var/log/messages, we see that while the client is uploading/downloading, an SSH2 session is closed and reopened at a regular interval: around 45 seconds on one server and 30 seconds on another VM we are testing on. It is also followed by a signal 11 from ProFTPD.

Mar 27 22:52:38 ftsftpsvr02 proftpd[21446]:  - SSH2 session opened.
Mar 27 22:52:49 ftsftpsvr02 proftpd[21449]:  - SSH2 session opened.
Mar 27 22:53:27 ftsftpsvr02 proftpd[21449]:  - ProFTPD terminating (signal 11)
Mar 27 22:53:27 ftsftpsvr02 proftpd[21449]:  - SSH2 session closed.
Mar 27 22:53:31 ftsftpsvr02 proftpd[21453]: - SSH2 session opened.
Mar 27 22:53:39 ftsftpsvr02 proftpd[21446]:  - SSH2 session closed.
Mar 27 22:54:10 ftsftpsvr02 proftpd[21453]:  - ProFTPD terminating (signal 11)
Mar 27 22:54:10 ftsftpsvr02 proftpd[21453]:  - SSH2 session closed.
Mar 27 22:54:13 ftsftpsvr02 proftpd[21458]:  - SSH2 session opened.
Mar 27 22:55:42 ftsftpsvr02 proftpd[21458]:  - SSH2 session closed.

We checked both proftpd.conf and sshd_config and there isn't any timeout specified.
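For reference, these are ProFTPD's transfer-related timeout directives with their documented defaults; they apply even when nothing is set in proftpd.conf. That said, a signal 11 is a segfault in the proftpd child, not a timeout, so the directives below explain the session cycling at best, not the crash itself:

```
# proftpd.conf — values in seconds; defaults shown
TimeoutIdle       600
TimeoutNoTransfer 300
TimeoutStalled    3600
```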
0
Greetings,

I have been tasked with upgrading a Postgres cluster (replication with a read-only standby node) from 9.0 to 9.4, 9.5, or 9.6. What's the best option to upgrade, and what advantages do the various versions offer over the current one? This is in a Linux environment. Please advise.
0
I am running Postgres 8.4 on CentOS 6.3.  I have configured the pg_hba.conf file to allow remote connections.  Everything works well except about once a week the pg_hba.conf file is getting reset to a default file -- my changes are completely lost.  When this happens we lose connectivity, of course.  I've made a backup of the file so restoring and restarting the Postgres service corrects the problem, but I can't figure why the file is getting reset.

This is a brand new server.  I am the only admin.  I've configured everything on the server and have reviewed all Cron jobs to ensure nothing is running that could possibly be resetting the file.  Any ideas?
0

Hello Experts,

I would like to use spark streaming to identify in real time if a record already exists in the database.

The data that we receive contains the customerId and I would like to be able to determine if the customerId that we are receiving is already stored in a specific table in our database. If not I would go ahead and store the record, if yes I won't store it again.

The customerId is a value generated by an external application
The data is received from Kafka 24/7.
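If the table has a unique constraint on the customer ID, Postgres (9.5+) can make the check-then-insert atomic, which avoids a read-then-write race under a 24/7 Kafka stream. A sketch — the table and column names here are assumptions, not from the question:

```sql
-- Requires a unique index or constraint on customer_id
INSERT INTO customers (customer_id, payload)
VALUES ($1, $2)
ON CONFLICT (customer_id) DO NOTHING;
```

With this, the streaming job can simply attempt the insert for every record; duplicates are silently skipped server-side, with no separate existence query needed.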
0
I'm stuck (says the novice, working with a bit of gifted code).

Error : ERROR:  syntax error at or near "1"
LINE 5:        INNER JOIN  LATERAL ( SELECT TOP 1
                                                ^

UPDATE ACC
SET ACC.ACC_EC = SITESmin.ACC_EC, 
    ACC.ACC_NC = SITESmin.ACC_NC
FROM    ACC
       INNER JOIN LATERAL ( SELECT TOP 1
                                *
                      FROM      SITES
                      ORDER BY  ( acc_ec - site_etg ) * ( acc_ec - site_etg ) + (acc_ncb - site_ntg ) * ( acc_ncb - site_ntg )
                    ) SITESmin;




MACOSX 10.12.3 postgres (PostgreSQL) 9.4.6

I thought lateral was ok according to : https://blog.heapanalytics.com/postgresqls-powerful-new-join-type-lateral/

Should it be LIMIT rather than TOP?
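Yes: TOP is SQL Server syntax, and Postgres uses LIMIT. There are two further issues: the LATERAL subquery needs to correlate with a row of ACC (otherwise every row gets the same "nearest" site), and in Postgres an UPDATE cannot lateral-reference its own target table directly, so a self-join is needed. A sketch of one way to write it, assuming the intent is "copy each row's nearest site's coordinates" and that ACC has no primary key (hence the ctid trick; use the real key if one exists):

```sql
UPDATE ACC
SET ACC_EC = SITESmin.site_etg,
    ACC_NC = SITESmin.site_ntg
FROM (
    SELECT a.ctid AS acc_ctid, s.site_etg, s.site_ntg
    FROM ACC a
    CROSS JOIN LATERAL (
        SELECT site_etg, site_ntg
        FROM SITES
        ORDER BY (a.acc_ec - site_etg)^2 + (a.acc_ncb - site_ntg)^2
        LIMIT 1
    ) s
) SITESmin
WHERE ACC.ctid = SITESmin.acc_ctid;
```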
0
I am having trouble changing the structure and data of certain columns that are currently stored in Postgres as text (and that contain blanks) into either integer or double precision. A simple ALTER ... USING... statement won't work because the blanks haven't been changed to NULL.

The attached images show the current sample data and what it needs to look like (NeedsToLookLike.PNG).
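The usual trick is to fold the blanks into NULL inside the USING clause itself, so no separate UPDATE pass is needed. A sketch — table and column names are placeholders:

```sql
-- Empty (or whitespace-only) strings become NULL; everything else is cast
ALTER TABLE mytable
  ALTER COLUMN amount TYPE double precision
  USING NULLIF(trim(amount), '')::double precision;
```

The same pattern works for integer columns with `::integer` in place of `::double precision`.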
0
