Re: [modeller_usage] Question running script on cluster
From: Modeller Caretaker
Date: Tue, 01 Nov 2005 10:38:26 -0800
The original poster wrote:
I'm trying to do some modelling using 6 templates with very low
sequence identity (<30%), so I want to generate as many models as
possible. I was thinking 50 in the first instance (unless someone
recommends a lot more). In the interest of time, I have decided to
send the job to a cluster (parallel machines) I have access to, but
before doing so I wanted to clarify some things.
Am I correct in assuming I need to create 50 different script files,
each with a different STARTING_MODEL and ENDING_MODEL, and with a
different random number seed set in each script?
i.e. script 1: STARTING_MODEL = 1, ENDING_MODEL = 1
     script 2: STARTING_MODEL = 2, ENDING_MODEL = 2
     ...and so on
Yes, you could certainly do this.
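For reference, here is a minimal sketch of what such a per-run script
might look like using MODELLER's Python interface (starting_model and
ending_model are the Python equivalents of STARTING_MODEL and
ENDING_MODEL). The alignment file name, template codes and target code
below are placeholders for your own, and the seed scheme is just one
way of giving each run a different random number seed:

# run_model.py -- builds one model; run as: python run_model.py <run_number>
import sys
from modeller import environ
from modeller.automodel import automodel

run = int(sys.argv[1])                 # 1..50, one value per cluster job

# Give each run its own random seed (MODELLER seeds are negative integers).
env = environ(rand_seed=-1000 - run)

a = automodel(env,
              alnfile='alignment.ali',              # placeholder alignment file
              knowns=('tmpl1', 'tmpl2', 'tmpl3',
                      'tmpl4', 'tmpl5', 'tmpl6'),   # your six template codes
              sequence='target')                    # placeholder target code

# Build exactly one model in this job.
a.starting_model = run
a.ending_model = run
a.make()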
Also, if I do this, will all the models be comparable with each other?
I thought you could only compare models from the same run. Could
someone briefly explain why this is or isn't the case?
You can only compare models built from the same set of restraints, since
the energy function is a function of those restraints. But in your case
every run will use the same restraints (since your templates and
alignment are the same each time), so the models from all runs will be
comparable.
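Since all runs share the same restraints, one straightforward way to
compare them afterwards is to rank the models by the objective function
value that MODELLER writes into the header of each output PDB. A minimal
sketch, assuming per-run directories named run01, run02, ... and a
target code of 'target' (both placeholders for your own setup):

# rank_models.py -- rank all models by the MODELLER objective function
import glob

scores = []
for pdb in glob.glob('run*/target.B*.pdb'):   # placeholder directory/file pattern
    with open(pdb) as fh:
        for line in fh:
            if 'MODELLER OBJECTIVE FUNCTION' in line:
                scores.append((float(line.split()[-1]), pdb))
                break

# Lower values mean a better fit to the (shared) restraints.
for score, pdb in sorted(scores):
    print(score, pdb)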
One last question: should I be worried about the log files overwriting
each other, because I think they will all go to the same directory on
the cluster? Is there maybe some line I can add to each script to
direct the output into its own folder? (As you can probably tell from
that question, my background is not in computing!)
Not only the log files, but also the other outputs (schedule, initial
model, restraints) would overwrite each other. The easiest thing to do
is to run each job in a separate directory.
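If it helps, a small driver like the one below can create those
directories and copy the inputs into each one before you submit the
jobs. This is only a sketch with placeholder file names; the actual
submission command depends on your cluster's queueing system:

# setup_runs.py -- one directory per run so logs, restraints etc. don't collide
import os
import shutil

inputs = ['alignment.ali', 'run_model.py']   # plus your template PDB files

for run in range(1, 51):
    rundir = 'run%02d' % run
    os.makedirs(rundir)
    for f in inputs:
        shutil.copy(f, rundir)
    # Replace this print with your queueing system's submit command.
    print('cd %s && python run_model.py %d' % (rundir, run))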