Statistical Packages

Statistical packages are software titles, such as JMP and GNU Octave, and programming languages, such as MATLAB, R, and SAS, that are used to discover, explore, and analyze data and to suggest useful conclusions, either to learn something unexpected or to confirm a hypothesis. The field includes the design and analysis of techniques that give approximate but accurate solutions to hard problems in statistics, econometrics, time series, optimization, and 2D and 3D visualization. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names in different business, science, and social science domains.

So if I manually connect using sftp from a CentOS box to my ProFTPD server and issue a get command to grab a file, all's good.

If I do it in a script, it fails after getting the file (the next step would be to delete the file, which it never does).

It's driving me round the bend a bit, so any help would be greatly appreciated.

On the sftp client side, here is the log from the scripted version:
2018-03-15 12:34:35,029 [30711] <sftp:6>: received READ (5) SFTP request (request ID 11, channel ID 0)
2018-03-15 12:34:35,030 [30711] <sftp:7>: received request: READ 8fc9867310df242f 0 32768
2018-03-15 12:34:35,030 [30711] <sftp:8>: sending response: STATUS 1 'End of file' ('End of file' [-1])
2018-03-15 12:34:35,030 [30711] <ssh2:9>: sending CHANNEL_DATA (remote channel ID 0, 37 data bytes)
2018-03-15 12:34:35,030 [30711] <ssh2:19>: waiting for max of 600 secs while polling socket 1 using select(2)
2018-03-15 12:34:35,030 [30711] <ssh2:3>: sent SSH_MSG_CHANNEL_DATA (94) packet (80 bytes)
2018-03-15 12:34:35,031 [30711] <ssh2:11>: channel ID 0 remote window size currently at 2096633 bytes
2018-03-15 12:34:35,031 [30711] <ssh2:19>: waiting for max of 600 secs while polling socket 0 using select(2)
2018-03-15 12:34:35,031 [30711] <ssh2:20>: SSH2 packet len = 44 bytes
2018-03-15 12:34:35,031 [30711] <ssh2:20>: SSH2 packet padding len = 5 bytes
2018-03-15 12:34:35,031 [30711] <ssh2:20>: SSH2 packet payload len = 38 bytes
2018-03-15 12:34:35,031 [30711] <ssh2:19>: waiting for max of …

Hello All,

I hope someone can clarify the error I get in my STIDF data plot.
I have read through related questions, but no solution fixed my error.

I'm working on an STIDF object and I want to use stplot and spplot, but it seems spplot is not suitable for STIDF.

When I use stplot I always get this error:

    Error in `levels<-`(`*tmp*`, value = if (nl == nL) as.character(labels) else paste0(labels,  :
      factor level [2] is duplicated

Here's how my data in the STIDF data type looks:

    Lat       Long       sp.ID  time                 endTime              TimeIndex  Speed  Station_ID
    41.71268  -87.64341  1      2017-07-01 00:00:00  2017-07-01 18:00:00  1          86     2
    41.47268  -87.35281  2      2017-07-01 00:00:00  2017-07-01 18:00:00  1          35     5
    41.71268  -87.64341  3      2017-07-01 01:00:00  2017-07-01 18:01:00  2          43     2
    41.47268  -87.35281  4      2017-07-01 01:00:00  2017-07-01 18:01:00  2          55     5

I think it's related to my ID variable, but I have duplicated station IDs because I have an hourly reading for each location, so the ID is repeated in my dataset.

I tried this code, but I still get the error message:

    STIDF_jour$Station_ID <- factor(STIDF_jour$Station_ID, levels = rev(unique(STIDF_jour$Station_ID)), ordered = TRUE)

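For what it's worth, the "factor level [2] is duplicated" error is what R raises whenever a vector with repeats ends up as the levels= argument, and stplot() builds factor labels from the spatial features itself. A minimal sketch with made-up IDs (not a confirmed fix for STIDF) showing how make.unique() guarantees distinct labels:

```r
ids <- c(2, 5, 2, 5)                       # hourly repeats of the same stations
# factor(ids, levels = rev(ids))           # errors: "factor level [3] is duplicated"
labels <- make.unique(as.character(ids))   # "2" "5" "2.1" "5.1" -- all distinct
f <- factor(labels, levels = rev(unique(labels)), ordered = TRUE)
levels(f)
```
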
1) I have sales volumes which I can sum by customer and month and I have interest rates by month.
So just imagine months as rows, the avg monthly interest rates in column 1, monthly sales volume customer A in column 2, monthly sales volume customer B in column 3...

2) Typically I use Excel to analyze.

3) Goal: I'd like to find how interest rates impact our sales volume.

4) Thought & Concerns:

a) I don't think I should use monthly sales totals across all customers. I think I need to somehow factor in the individual customers, since an increase or decrease in a specific customer's sales may be attributable to a new or lost customer, or to a change in a customer's volume that is not related to interest rates. This is why I suggested the data columns described above.
b) I am concerned that this may be too much for Excel, since we have a few hundred customers with sales (columns) in the 3-year period I am reviewing.
c) There is likely a lag between the month of the interest rate change and the change in sales.
d) There is likely an extreme initial reaction to a rate change.
e) The interest rate impact could vary depending upon the amount of the change and/or where the change started, meaning a jump from 3.5% to 4% may have less impact than a rate increase from 4% to 4.5%.

Maybe this involves more than a multiple regression analysis? In either case, it would help if you could describe how to set up the data and how to use the appropriate analysis tool.

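Since this is an R community, here is a hedged sketch of how points a) and c) might be handled outside Excel: a regression with customer fixed effects and a one-month lagged rate. All column names and numbers below are invented for illustration.

```r
# Long-format toy data: one row per customer per month.
df <- data.frame(
  customer = rep(c("A", "B"), each = 6),
  month    = rep(1:6, 2),
  rate     = rep(c(3.5, 3.5, 4.0, 4.0, 4.5, 4.5), 2),
  sales    = c(100, 98, 90, 88, 80, 79, 50, 51, 45, 44, 40, 41)
)
# One-month lag of the rate within each customer (point c).
df$rate_lag1 <- ave(df$rate, df$customer, FUN = function(x) c(NA, head(x, -1)))
# Customer fixed effects absorb customer-specific sales levels (point a).
fit <- lm(sales ~ rate_lag1 + factor(customer), data = df)
summary(fit)
```

A longer lag structure (rate_lag2, rate_lag3, ...) would let you probe point c) more directly, and interacting the lagged rate with its starting level speaks to point e).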
Can you please help me with the query below?

SELECT  MAX(ABS(MKT_VALUE * r.rate))  AS 'MKV_USD',
        MAX(ABS(MKT_NOTION * r.rate)) AS 'NOTIONAL_USD',
        COUNT(*)                      AS 'NUM_OF_HOLDINGS',
        MAX(FI.MATURITY)              AS MATURITY
--      INTO #FI_Summary
FROM    iim_risk_point.dbo.FI_PORT_SEC_CHAR_LOAD FI,
        dbo.Account ACT,
        dbo.fx_rate r
WHERE   ASOF_DATE = '12/29/2017' AND
        FI.MATURITY < DATEADD(dd, 30, '12/29/2017') AND
        FI.PORTF_LIST = ACT.ID_ALADDIN AND
        ACT.STATUS = 'A' AND
        ACT.NME_GC_LVL1 IN ('Fixed Income', 'Liquidity') AND
        MKT_VALUE <> 0 AND
        FI.PORT_CURRENCY = r.curr_sold AND
        FI.ASOF_DATE =
GROUP BY
ORDER BY 1, 2

The DATEADD function is not giving correct results. What am I doing wrong?

So I have a dataset in which I have an account number and a "days past due" code with every observation. For every account number, as soon as the "days past due" column hits a code like "DLQ3", I want to remove the rest of the rows for that account (even if DLQ3 is the first observation for that account).

My dataset looks like :

Observation date    Account num    Days past due

2016-09             200056         DLQ1
2016-09             200048         DLQ2
2016-09             389490         NORM
2016-09             383984         DLQ3.....

So for account 383984, I want to remove all the rows after 2016-09, as it is now in default.

In short, I want to see when an account hits DLQ3 and, when it does, remove all the rows after that first DLQ3 observation.

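A hedged R sketch of this filter (column names invented), keeping each account's rows up to and including its first DLQ3 and dropping everything after it:

```r
df <- data.frame(
  obs_date = c("2016-09", "2016-10", "2016-11", "2016-09", "2016-10"),
  account  = c(383984, 383984, 383984, 200056, 200056),
  dpd      = c("DLQ3", "DLQ1", "NORM", "DLQ1", "DLQ2"),
  stringsAsFactors = FALSE
)
df <- df[order(df$account, df$obs_date), ]
# Running count of DLQ3 rows seen so far, within each account.
hits <- ave(df$dpd == "DLQ3", df$account, FUN = cumsum)
# Keep rows before any DLQ3, plus the first DLQ3 row itself.
keep <- hits == 0 | (df$dpd == "DLQ3" & hits == 1)
result <- df[keep, ]
```
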
I have 2 Java projects to do a replication: an RMIReplication and the Publisher. In RMIReplication I create an ArrayList of Subjects, and in the Publisher I need to access this ArrayList to do the attach and setState. How can I do that? I will put below the code of the 2 classes from the different projects.
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.util.ArrayList;

public class Replication {

    static ArrayList<Subject> theList;

    public static void main(String args[]) {
        theList = new ArrayList<Subject>();
        Registry r = null;
        Registry r1 = null;
        Registry r2 = null;
        try {
            r = LocateRegistry.createRegistry(2023);
            r1 = LocateRegistry.createRegistry(2024);
            r2 = LocateRegistry.createRegistry(2025);
        } catch (RemoteException a) {
            a.printStackTrace();  // don't swallow registry failures silently
        }
        //System.setSecurityManager(new RMISecurityManager());
        try {
            Subject list = new Event();
            Subject list1 = new Event();
            Subject list2 = new Event();
            Naming.rebind("//localhost:2023/Subject", (Remote) list);
            Naming.rebind("//localhost:2024/Subject1", (Remote) list1);
            Naming.rebind("//localhost:2025/Subject2", (Remote) list2);
            theList.add(list);


I have a data frame with some names (rows) and some positions I want to use, for instance the 35th row and the 145th row. How do I get the names of the rows at these positions? I uploaded a screenshot that may help. Thanks!

I tried something like

names1 <- row.names(which(size_96 < median(size_96, na.rm = T)))

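In case it helps, row names can be indexed by position directly; which() is only needed when selecting by a condition. A minimal sketch:

```r
df <- data.frame(x = 1:5, row.names = c("a", "b", "c", "d", "e"))
rownames(df)[c(2, 4)]           # names of the 2nd and 4th rows: "b" "d"
rownames(df)[which(df$x < 3)]   # names of rows meeting a condition: "a" "b"
```
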
Hi, I have one data frame (df1) with 20 observations (one for each year) and 597 variables (each one a stock). The values are a ratio called the book-to-market ratio. I need to build two portfolios for each year: the stocks with values below the median and the stocks with values above the median. The names of the stocks are the columns of df1. So I need to check whether each value in each row (each year) is below or above the median and identify each stock name (the columns in df1). Then I need to match these with the columns of another data frame (df2), which holds the return of each stock in each year (20x597). The end result would be a vector with 20 entries: the differences in average returns between the two portfolios. I hope that was clear enough; thanks for the answer, and I'm here for any clarification.
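A hedged sketch of the whole calculation with small invented matrices (5 stocks and 20 years instead of 597), splitting each year at its median and differencing the portfolio returns:

```r
set.seed(1)
df1 <- matrix(runif(20 * 5), nrow = 20,
              dimnames = list(NULL, paste0("stock", 1:5)))  # book-to-market
df2 <- matrix(rnorm(20 * 5), nrow = 20,
              dimnames = list(NULL, paste0("stock", 1:5)))  # returns
spread <- sapply(1:nrow(df1), function(y) {
  med  <- median(df1[y, ], na.rm = TRUE)
  high <- colnames(df1)[df1[y, ] >  med]   # above-median portfolio
  low  <- colnames(df1)[df1[y, ] <= med]   # at-or-below-median portfolio
  mean(df2[y, high]) - mean(df2[y, low])   # difference in average returns
})
length(spread)  # one entry per year
```

With real data the matching happens through the shared column names, so colnames(df1) and colnames(df2) must agree.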

I've logged into a Microsoft R Server using mrsdeploy::remoteLogin()

Test with session:

REMOTE> result <- system("gpg --yes --batch -r [e-mail] --passphrase=[youPassphrase] --armor --utf8-strings --decrypt youFile", intern = TRUE)

REMOTE> result
[1] 2

REMOTE> exit
>Logout from remote R session complete


Test without session:

result <- system("gpg --yes --batch -r [e-mail] --passphrase=[youPassphrase] --armor --utf8-strings --decrypt youFile", intern = TRUE)

gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created 2017-11-20 "name<e-mail>"

[1] "Esta es la frase\r"
[2] "que he encriptado\r"


I need this to work in the remote session, because that is the way it runs as a service.

Thanks for your reply.

Kind regards,
I have written the function below. It works, but it is slow. On my Windows 7 R installation, what should I do to get this function working with the parallel library? Or is there some other obvious performance improvement I could make?

I followed the answer, which led me to try vectorising, but the improvement is minimal. Given I have another 23 cores and 50 GB of RAM available, I suspect the biggest improvement would come from parallel processing, albeit tricky to do on my Windows OS with my newly learnt R skills.

# Build the encoding function

  encode <- function(dataframe, columnName, code_key){
    asc <- function(x) { strtoi(charToRaw(x), 16L) }
    chr <- function(n) { rawToChar(as.raw(n)) }
    encoded <- c()
    for (j in 1:length(dataframe[[columnName]])) {
      asc1 <- c()
      if ((j %% 1E4) == 0) {
        print(paste0(j, " of ", length(dataframe[[columnName]]), " records processed"))
      }
      for (i in 1:nchar(dataframe[[columnName]][j])) {
        asc1[i] <- chr(asc(substr(dataframe[[columnName]][j], i, i)) + i + code_key)
      }
      encoded[j] <- paste(asc1, collapse = '')
    }
    encName <- paste0(columnName, "_Encoded")
    dataframe[[encName]] <- encoded
    dataframe  # return the data frame with the new encoded column
  }

# Example data set to work the function on

  df1 <-$Species, 10000))
  colnames(df1) <- "Species"
  df1$Species <- 

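If parallelism is the goal, here is a hedged sketch using a PSOCK cluster (the parallel cluster type that works on Windows). Note encode_one below is a simplified, vectorised rewrite of the per-string work and assumes single-byte characters, so treat it as an illustration rather than a drop-in replacement:

```r
library(parallel)
# Shift each character's code point by its position plus the key.
encode_one <- function(s, code_key) {
  chars <- utf8ToInt(s)
  intToUtf8(chars + seq_along(chars) + code_key)
}
cl <- makeCluster(4)                 # e.g. 4 of the 24 available cores
x  <- rep("species", 1000)           # stand-in for dataframe[[columnName]]
res <- parSapply(cl, x, encode_one, code_key = 3, USE.NAMES = FALSE)
stopCluster(cl)
```

The vectorised utf8ToInt()/intToUtf8() pair already removes the inner character loop, which is often a bigger win than the parallelism itself.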


I recently procured Visual Studio 2017 Professional and am trying to get hands-on with R Tools.
I created a new R project and wrote a custom (user-defined) function. The function generates
4 sub-graphs, via par(mfrow=c(2,2)), in one main graph.
My function works well in the regular R software, version 3.4.
When I try the same function in R Tools in Visual Studio 2017, I get the error:
Error in : figure margins too large
What could be the problem? Any solutions for rectification?

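A possible workaround to try (a guess, not a confirmed diagnosis): the embedded plot pane in Visual Studio may simply be too small for a 2x2 layout, so opening a standalone device with explicit dimensions and modest margins can avoid the error:

```r
dev.new(width = 10, height = 8)            # standalone device, generous size
par(mfrow = c(2, 2), mar = c(4, 4, 2, 1))  # modest margins for each sub-plot
for (i in 1:4) plot(rnorm(50), main = paste("Panel", i))
dev.off()
```
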
I am building a 2-tier Microsoft PKI infrastructure.
I have 1 offline root CA and 2 issuing CAs running Windows Server 2012 R2. I want to have 1 active issuing CA and the 2nd CA as a standby in a disaster recovery site.
How should I configure the CDP and AIA locations? Do I need a shared location where both CAs can access the CRL information, or can I make the CDP and AIA locations local to the issuing CA and rely on a backup/restore if I need to activate the 2nd CA in DR?

Hello All Experts,
I am a student enthusiastic about learning "Data Analytics". Which is the best platform to learn on for free?
I want to learn 'Data Science (Statistics)' and 'SAS/R' from scratch.
Any videos? Any websites? Any blogs?


Satish Kumar G N
Hi All,
While using the REF keyword in my logical file, I get the compilation error "Record name same as name of file being created".

DDS of LF -

*************** Beginning of data *************************************
                R USEREF                                                
                  ACCLVL    R               REFFLD(ACCLEVELID ACCOUNT)  
                  ACCORG    R               REFFLD(ACTORGCOD  ACCOUNT)  
                  ACCNUM    R               REFFLD(ACCOUNTNUM ACCOUNT)  
****************** End of data ****************************************

May I know why that is?

The issue is that when I set a different one, it updates neither my textblock.Text nor my listbox.Items.

Help very appreciated:)

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;
using Windows.Services.Maps;
using Windows.Devices.Geolocation;

// The Blank Page item template is documented at

namespace New_World_Map
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        List<string> stringlist = new List<string>();

        public MainPage()
        {
            this.InitializeComponent();  // loads the XAML so the controls below exist

            this.RightTapped += MainPage_RightTapped;
            mapscontrol.CenterChanged += Mapscontrol_CenterChanged;
            listbox.DoubleTapped += Listbox_DoubleTapped;

            listview.Items.Add("Zoom In");
            listview.Items.Add("Zoom Out");
            listview.Items.Add("Navigate North");
            listview.Items.Add("Navigate South");

write.csv(df,file="~C:/Users/anitha/Documents/social_media analysis/socialmedia/tweets.csv",row.names=FALSE,append = TRUE)
Error in file(file, ifelse(append, "a", "w")) :
  cannot open the connection
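A hedged guess at the fix: R expands a leading "~" to the home directory, so "~C:/..." is not a valid Windows path, which matches the "cannot open the connection" error. A sketch:

```r
df <- data.frame(text = "example tweet")   # stand-in for the real df
path <- "C:/Users/anitha/Documents/social_media analysis/socialmedia/tweets.csv"
write.csv(df, file = path, row.names = FALSE)
# Note: write.csv() ignores append = TRUE (with a warning). To append, use:
# write.table(df, path, sep = ",", append = TRUE,
#             col.names = FALSE, row.names = FALSE)
```
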

I'm fairly new to R. I am doing some simple visualization in a Shiny app and trying to flip a bar chart downward using scale_y_reverse(). It works well when I run my code in the R console, but when I run it in Shiny it does not flip the bar chart. Below is my code in the server part:

output$trendbarPlot <- renderPlotly({
                              mydat <- mydatCopy %>% filter(Country ==input$Country)

attacksbarplot = ggplot(data=mydat,aes(x=as.factor(Year))) + geom_bar() + theme_bw(base_size=35) + xlab("") + ylab("") + theme(axis.text.x = element_blank(), axis.ticks=element_blank(),panel.grid.major=element_blank(),panel.grid.minor=element_blank(),panel.border=element_blank())  + scale_y_reverse()

attacksbarplotnol = ggplot(data=mydat,aes(x=as.factor(Year))) + geom_bar() + theme_bw(base_size=15) + xlab("") + ylab("") + theme(axis.text.x = element_blank(), axis.text.y = element_blank(), axis.ticks=element_blank(),panel.grid.major=element_blank(),panel.grid.minor=element_blank(),panel.border=element_blank()) +  scale_y_reverse()

The attached file shows the required flipped bar chart in Shiny.

Does anyone know how I can solve this issue?

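One thing worth checking (a guess, not a confirmed diagnosis): the chart goes through renderPlotly(), and the ggplot-to-plotly conversion does not always honour every scale. Rendering the same object with renderPlot() draws the ggplot directly; if the bars flip there, the conversion is the culprit:

```r
# server side -- same ggplot, drawn by ggplot2 instead of converted to plotly
output$trendbarPlot <- renderPlot({
  mydat <- mydatCopy %>% filter(Country == input$Country)
  ggplot(mydat, aes(x = as.factor(Year))) +
    geom_bar() +
    scale_y_reverse()
})
# ui side: use plotOutput("trendbarPlot") instead of plotlyOutput(...)
```
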
My data:

Gage_number  Latitude  Longitude  Date      Gage_1  Gage_2  Gage_3

1            35.02     -80.84     1/1/2002  0.23    0       0.7
2            35.03     -81.04     1/2/2002  0       0       0.2
3            35.06     -80.81     1/3/2002  3.2     2.1     0.1
This is just a subset of the data; I have around 50 gauge stations. I want to find the spatial autocorrelation of rainfall between my gauge stations, based on the distance between them. I have created my distance matrix, but I don't want to use any extra library in R; I want to do all the steps in a function.

loc <- read.table("rain_data.txt", header = TRUE, fill = TRUE)
gauge.dists <- as.matrix(dist(cbind(loc$Latitude, loc$Longitude)))  # distance matrix (note: Longitude, not Latitude twice)
Now, since the distance between gauges is not uniform, I want to use a certain bin size to decide the distance lags.

If the distance between gauge pair 1-2 is 1 meter, then assign a distance lag of 1, and so on: lag 1 = inter-gauge distance of 1 meter, lag 5 = inter-gauge distance of 5 meters. After creating that matrix I will find the autocorrelation between gauge pairs, so for lag 1 the inter-gauge distance is 1 and for lag 5 it is 5:

Gage pair  date      RainA  RainB        Gage pair  date      RainA  RainB

1-2        1/1/2002  0.23   0            1-3        1/1/2002  0.23   0.7
1-2        1/2/2002  0      0            1-3        1/2/2002  0      0.2
1-2        1/3/2002  3.2    2.1          1-3        1/3/2002  3.2    0.1
I have a hard time translating this into a loop or a function. Any ideas?

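A hedged sketch of the binning step in base R only (bin size and data invented): assign each gauge pair an integer lag from the distance matrix, then average the pairwise rainfall correlations within each lag:

```r
rain <- matrix(rnorm(30, mean = 1), nrow = 10, ncol = 3)  # 10 days x 3 gauges
d    <- as.matrix(dist(cbind(c(35.02, 35.03, 35.06),
                             c(-80.84, -81.04, -80.81))))
lag_of <- ceiling(d / 0.1)        # bin size 0.1 -> integer lag per pair
lag_cor <- function(lag) {
  pairs <- which(lag_of == lag & upper.tri(d), arr.ind = TRUE)
  if (nrow(pairs) == 0) return(NA)
  mean(sapply(seq_len(nrow(pairs)), function(k)
    cor(rain[, pairs[k, 1]], rain[, pairs[k, 2]])))
}
sapply(sort(unique(lag_of[upper.tri(d)])), lag_cor)  # one value per lag bin
```
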
I am a bit new to R, so I am not sure if this is possible or if it's more difficult than I am assuming.

Objective: I want to find the correlation between diagnosis codes. If patient #1 has condition X, what is the likelihood that he will at some point also have condition Y?

Here is what I have:
136,337 Unique patient IDs (74,527 Female, 61,810 Male)
34,442 Unique Diagnosis that exists in my population
7,777,728 Unique observations

So my 2 questions are:
1. How should I layout my Table for R?
Right now I have the table columns as :
ID, SEX, Diagnosis

2. What should my R script look like in order to create correlation coefficients between all my diagnosis codes?

FYI: Yes, I also have a timestamp per diagnosis code, but adding it now would just add more confusion to the confusion I already have.
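On question 2, a hedged sketch (tiny invented data): pivot the long (ID, Diagnosis) table into a patient-by-diagnosis 0/1 matrix, then correlate its columns. With 34,442 diagnoses the full correlation matrix would be enormous, so in practice you would restrict to the most frequent codes first.

```r
dx <- data.frame(ID = c(1, 1, 2, 2, 3, 4),
                 Diagnosis = c("X", "Y", "X", "Y", "Z", "Z"))
m <- table(dx$ID, dx$Diagnosis) > 0   # TRUE if the patient ever had the code
cor(m * 1)                            # pairwise phi coefficients between codes
```
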

I have an Excel file to which I want to add two new columns, then group and sum the new and other columns in RStudio and save the output. I'm not entirely sure how to do this.

Adding two new columns:
if Sec_flag is "Y", I want to add a new column called Sec_checked and put a 1 as the value;
if stu_status is "Ret", I want to add another new column, Stu_check, and put a 1 as the value.

Group & Sum
I would like to group the data by columns Year, Month, Stu_status, Point1, Point2 and Point3 and sum them by the values in stu_fee, stu_return_fee, student_count, Sec_checked and Stu_check.
Over time I will add new data points to my Excel file, so I would like to be able to add these in future and get new groupings.

I tried using plyr, but I don't know how to add the new columns and group & sum the data.
library(xlsx)   # for read.xlsx
library(plyr)   # for ddply

system("java -version")

mydata <- read.xlsx("stu_d_sample.xlsx", sheetName = "Sample")

groupColumns = c("year", "month", "Stu_status", "Point1", "Point2", "Point3")
dataColumns = c("stu_fee", "stu_return_fee", "student_count", "Sec_checked", "Stu_check")
res = ddply(mydata, groupColumns, function(x) colSums(x[dataColumns]))  # was "baseball", a leftover from the plyr examples

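A hedged sketch of both steps with invented sample data (column names taken from the description above):

```r
library(plyr)
mydata <- data.frame(
  Year = 2017, Month = c(1, 1, 2), Stu_status = c("Ret", "Ret", "New"),
  Point1 = 1, Point2 = 2, Point3 = 3, Sec_flag = c("Y", "N", "Y"),
  stu_fee = c(10, 20, 30), stu_return_fee = c(1, 2, 3), student_count = 1
)
# 1) add the two flag columns as 1/0
mydata$Sec_checked <- ifelse(mydata$Sec_flag == "Y", 1, 0)
mydata$Stu_check   <- ifelse(mydata$Stu_status == "Ret", 1, 0)
# 2) group and sum
groupColumns <- c("Year", "Month", "Stu_status", "Point1", "Point2", "Point3")
dataColumns  <- c("stu_fee", "stu_return_fee", "student_count",
                  "Sec_checked", "Stu_check")
res <- ddply(mydata, groupColumns, function(x) colSums(x[dataColumns]))
```

Future data points then only require extending groupColumns/dataColumns to get the new groupings.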

2 questions about regression in R.
  Question 1:
  Let's say I create a model that relates the unique words found in a corpus to the number of lines read. Notice that this model takes the logs of BOTH the outcome and the predictor.
  x <- lm( log(Words) ~ log(Lines) )
  Does that mean that exp(predict(x, list(Lines=100000))) will give me the number of words for a given number of lines? Or will it give me the LOG of the number of words for a given number of lines?
  Question 2:
  How do I invert this model so that I can input a number of words and get back a prediction for the number of lines required to obtain that quantity of words?

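A sketch addressing both questions with simulated data (all numbers invented):

```r
set.seed(1)
Lines <- 1:500
Words <- round(50 * Lines^0.8 * exp(rnorm(500, sd = 0.05)))
x <- lm(log(Words) ~ log(Lines))
# Question 1: predict() returns log(Words), so exp() recovers the word count.
exp(predict(x, list(Lines = 100000)))
# Question 2: invert by regressing the other way around.
inv <- lm(log(Lines) ~ log(Words))
exp(predict(inv, list(Words = 500000)))   # lines needed for 500,000 words
```

Strictly, the inverse regression is not the algebraic inverse of the first fit (regressing y on x and x on y give different lines); for a pure algebraic inversion, solve log(Lines) = (log(Words) - intercept) / slope using the first model's coefficients.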
Hello all,
I have a situation where a common value must be computed from the available data, but the data contain different summary statistics. For example:
consider apples in different boxes, where the average size of the apples is to be determined, and the available data consist of the mean size from one box, the standard deviation of size from another box, and the minimum and maximum sizes from other boxes. Is there any way a general value can be derived to represent the size of the apples?
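There is no exact answer without the raw data, but here is a rough sketch (all numbers invented) of one common approximation: convert each box's summaries into an estimated mean, then take a count-weighted average. The mid-range (min + max) / 2 is a crude mean estimate when only the range is known, while a standard deviation alone carries no location information, so that box cannot contribute a mean at all.

```r
mean_box1 <- 7.1                          # box 1 reports its mean directly
min_box2 <- 5.0; max_box2 <- 9.4          # box 2 reports only min and max
mean_box2 <- (min_box2 + max_box2) / 2    # mid-range estimate of its mean
n1 <- 30; n2 <- 25                        # assumed apple counts per box
weighted.mean(c(mean_box1, mean_box2), w = c(n1, n2))
```
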
