• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 222

command in scripts to switch virtual consoles

I am trying to write a script that executes a program to process a file, but I don't know what command I can use to switch to another virtual console and execute the program there as well.

If this question isn't clear, please comment and I will try to get back to you as soon as I can (I check in 3 to 5 times a day).
Asked by: orz012999
1 Solution
 
jlevie Commented:
The big question is why you'd want to switch consoles. That's not a very efficient use of the machine's resources, and there's almost always a better way.
Explanation please?
 
orz012999 (Author) Commented:
I am trying to process work units for setiathome.  Each directory has a workload in it, and all you need to do to process the file is to run the program that is already in the directory.  

I am new to scripting, but is there a way that I could process all the units at one time? Or is it just going to divide the processing time for one unit across six, so it will be just as slow as doing the six one after another?
 
monas Commented:
orz,

      I believe that seti@home will grab as much processor time as other tasks leave available. So, in theory, you gain an advantage only during the moments when the client is fetching a new task (at that moment another process could use your spare CPU). But even then you lose some CPU to task management, and the overall result is unclear.

      Another reason to run several processes is if you have more than one CPU. But again, only if the seti@home client is not clever enough to find and use the extra CPU itself (I don't run it, so I don't know).

      And finally, you don't need to switch consoles to run several jobs. Unix has the concept of running jobs in the background. If you start a command as "command &", you will be able to type new commands while your command keeps running. If you want everything the command writes to go to a file, use "command > /path/to/file &".
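      For example, a minimal illustration (the name "./setiathome" here is just a placeholder for whatever your client binary is actually called):

user> ./setiathome > results.txt &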

      Good Luck!
 
jlevie Commented:
Yes, multiple jobs can be run from one terminal window. The process is called "running a job in the background" and is done by adding an "&" to the end of the command. In this case, since each task needs to be run in its own work directory, you could do something like:

user> cd work1
user> ./task &
user> cd ../work2
user> ./task &

Or you could use Unix's sub-shell facility. That feature allows you to execute an arbitrary process in its own copy of the shell, spawned from the current shell, like:

user> (cd work1; ./task) &
user> (cd work2; ./task) &

The subshell is invoked by wrapping the commands in (). You'll notice two things about those lines: I've executed two commands by separating them with ";", and I've told the subshell, not the main shell, to change to the work dir.

Okay, so far we've got the tasks running in the background, but if they emit anything like error messages or status info, the output is going to be all mixed together on the screen. We solve that by redirecting any output sent to stdout/stderr into a file. Using the subshell example, I'd do:

user> (cd work1; ./task >results 2>&1) &

I can look at what's in "results" ("more work1/results") at any time during the task's execution without affecting the program. Furthermore, if I want to watch (in real time) what's being added to the results file, I can do so by executing "tail -f work1/results".

All of this works equally well from a script file. Unix uses the same commands within a script file as you'd use on the command line. By default, a script file that doesn't say otherwise will be run by whatever shell it was invoked from, but you can force the system to use a particular shell (sh in this case) by having "#!/bin/sh" as the first line of the file. To be able to execute a script file as if it were a program, make the file executable ("chmod +x script"), allowing you to:

user> ./script
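
Putting the pieces together, here's a minimal sketch of what such a script could look like (the directory names work1 through work6 and the program name ./task are just placeholders borrowed from the examples above; adjust them to match the real names on your system):

#!/bin/sh
# Start each seti@home work unit in its own background subshell,
# sending everything it prints to a "results" file in that directory.
for dir in work1 work2 work3 work4 work5 work6
do
    (cd "$dir" && ./task > results 2>&1) &
done
# Optionally wait for all the background jobs to finish before exiting.
wait

Save it as, say, "runall", do "chmod +x runall", and start everything with "./runall".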

How useful this will be in the case of running seti@home is debatable. As monas pointed out, seti@home is designed to be run in the background as a low priority job using whatever "free cpu time" is available. "Free" in this case basically means when there isn't anything else to do. Since the task is cpu bound, running two copies will probably result in each getting roughly half of the available run time (the kernel will time-slice between the two jobs). Tasks that have even mixes of I/O and compute are better candidates for multiple simultaneous runs: when one task is waiting for I/O, the other gets a chance at the cpu.

The last consideration is memory. If the total memory usage of the two tasks and the OS is much greater than the amount of physical memory available, the system will have to start swapping the tasks in and out of memory. That is expensive and can make the run time for a pair of jobs significantly greater than it would have been to run the jobs back-to-back.
 
orz012999 (Author) Commented:
Thanks to both of you for your help, I appreciate it :-)
