When processing thousands of IFS text files into DB tables, could there be a significant performance improvement if the process is changed from using two SBMJOB requests per IFS text file to using two program CALLs inside the MQ Get application's job to complete the processing of each file?

We have a process designed to handle SAP IDocs that were converted to "|"-delimited text files and placed in the IBM i IFS via SSH/sFTP sessions. MQ is used to send each new IFS text file's header information to the IBM i server. The MQ Get application currently gets one file header and submits a job that evaluates the header name and determines the correct program for that file. That job then submits another job to a single-threaded job queue; this last job actually moves the file from one IFS directory to another, then finally copies the data into a DB work table for the final process to generate the appropriate DB record(s). So two new jobs are submitted for each IDoc text file, with sequential processing almost ensured by the single-threaded job queue in the last step.

We ran a 200,000-IDoc benchmark, and I found that the 100th IDoc completed this process with 36 seconds of wait time in the single-threaded job queue. The 1,000th IDoc waited 6 minutes in the job queue, the 20,000th waited 49 minutes, and the 200,000th waited roughly 13 hours and 19 minutes.

I believe job queue (work management) overhead is the problem here, and that if we let the MQ Get application perform the complete process in its own job on the IBM i server we will see better throughput. What do you think?