ViewVC Help
View File | Revision Log | Show Annotations | Root Listing
root/cvsroot/COMP/CRAB/python/crab_help.py
Revision: 1.119
Committed: Thu Oct 1 22:10:34 2009 UTC (15 years, 7 months ago) by ewv
Content type: text/x-python
Branch: MAIN
Changes since 1.118: +1 -1 lines
Log Message:
Document default of first_run

File Contents

# User Rev Content
1 nsmirnov 1.1
2     ###########################################################################
3     #
4     # H E L P F U N C T I O N S
5     #
6     ###########################################################################
7    
8     import common
9    
10     import sys, os, string
11 spiga 1.34
12 nsmirnov 1.1 import tempfile
13    
14     ###########################################################################
15     def usage():
16 slacapra 1.43 print 'in usage()'
17 nsmirnov 1.1 usa_string = common.prog_name + """ [options]
18 slacapra 1.3
19     The most useful general options (use '-h' to get complete help):
20    
21 spiga 1.100 -create -- Create all the jobs.
22     -submit n -- Submit the first n available jobs. Default is all.
23 slacapra 1.102 -status -- check status of all jobs.
24 spiga 1.100 -getoutput|-get [range] -- get back the output of all jobs: if range is defined, only of selected jobs.
25     -extend -- Extend an existing task to run on new fileblocks if there.
26     -publish -- after the getouput, publish the data user in a local DBS instance.
27 ewv 1.104 -checkPublication [dbs_url datasetpath] -- checks if a dataset is published in a DBS.
28 spiga 1.100 -kill [range] -- kill submitted jobs.
29     -resubmit [range] -- resubmit killed/aborted/retrieved jobs.
30 ewv 1.118 -copyData [range [dest_se or dest_endpoint]] -- copy locally (in crab_working_dir/res dir) or on a remote SE your produced output,
31     already stored on remote SE.
32 spiga 1.100 -renewCredential -- renew credential on the server.
33     -clean -- gracefully cleanup the directory of a task.
34     -match|-testJdl [range] -- check if resources exist which are compatible with jdl.
35     -report -- print a short report about the task
36     -list [range] -- show technical job details.
37     -postMortem [range] -- provide a file with information useful for post-mortem analysis of the jobs.
38     -printId [range] -- print the job SID or Task Unique ID while using the server.
39     -createJdl [range] -- provide files with a complete Job Description (JDL).
40     -validateCfg [fname] -- parse the ParameterSet using the framework's Python API.
41     -continue|-c [dir] -- Apply command to task stored in [dir].
42     -h [format] -- Detailed help. Formats: man (default), tex, html, txt.
43     -cfg fname -- Configuration file name. Default is 'crab.cfg'.
44     -debug N -- set the verbosity level to N.
45     -v -- Print version and exit.
46 nsmirnov 1.1
47 slacapra 1.4 "range" has syntax "n,m,l-p" which correspond to [n,m,l,l+1,...,p-1,p] and all possible combination
48    
49 nsmirnov 1.1 Example:
50 slacapra 1.26 crab -create -submit 1
51 nsmirnov 1.1 """
52 slacapra 1.43 print usa_string
53 nsmirnov 1.1 sys.exit(2)
54    
55     ###########################################################################
56     def help(option='man'):
57     help_string = """
58     =pod
59    
60     =head1 NAME
61    
62     B<CRAB>: B<C>ms B<R>emote B<A>nalysis B<B>uilder
63    
64 slacapra 1.3 """+common.prog_name+""" version: """+common.prog_version_str+"""
65 nsmirnov 1.1
66 slacapra 1.19 This tool B<must> be used from an User Interface and the user is supposed to
67 fanzago 1.37 have a valid Grid certificate.
68 nsmirnov 1.1
69     =head1 SYNOPSIS
70    
71 slacapra 1.13 B<"""+common.prog_name+"""> [I<options>] [I<command>]
72 nsmirnov 1.1
73     =head1 DESCRIPTION
74    
75 ewv 1.52 CRAB is a Python program intended to simplify the process of creation and submission of CMS analysis jobs to the Grid environment .
76 nsmirnov 1.1
77 slacapra 1.3 Parameters for CRAB usage and configuration are provided by the user changing the configuration file B<crab.cfg>.
78 nsmirnov 1.1
79 spiga 1.48 CRAB generates scripts and additional data files for each job. The produced scripts are submitted directly to the Grid. CRAB makes use of BossLite to interface to the Grid scheduler, as well as for logging and bookkeeping.
80 nsmirnov 1.1
81 ewv 1.52 CRAB supports any CMSSW based executable, with any modules/libraries, including user provided ones, and deals with the output produced by the executable. CRAB provides an interface to CMS data discovery services (DBS and DLS), which are completely hidden to the final user. It also splits a task (such as analyzing a whole dataset) into smaller jobs, according to user requirements.
82 nsmirnov 1.1
83 slacapra 1.46 CRAB can be used in two ways: StandAlone and with a Server.
84     The StandAlone mode is suited for small task, of the order of O(100) jobs: it submits the jobs directly to the scheduler, and these jobs are under user responsibility.
85 ewv 1.52 In the Server mode, suited for larger tasks, the jobs are prepared locally and then passed to a dedicated CRAB server, which then interacts with the scheduler on behalf of the user, including additional services, such as automatic resubmission, status caching, output retrieval, and more.
86 slacapra 1.46 The CRAB commands are exactly the same in both cases.
87    
88 slacapra 1.13 CRAB web page is available at
89    
90 spiga 1.94 I<https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCrab>
91 slacapra 1.6
92 slacapra 1.19 =head1 HOW TO RUN CRAB FOR THE IMPATIENT USER
93    
94 ewv 1.52 Please, read all the way through in any case!
95 slacapra 1.19
96     Source B<crab.(c)sh> from the CRAB installation area, which have been setup either by you or by someone else for you.
97    
98 ewv 1.52 Modify the CRAB configuration file B<crab.cfg> according to your need: see below for a complete list. A template and commented B<crab.cfg> can be found on B<$CRABDIR/python/crab.cfg>
99 slacapra 1.19
100 ewv 1.44 ~>crab -create
101 slacapra 1.19 create all jobs (no submission!)
102    
103 spiga 1.25 ~>crab -submit 2 -continue [ui_working_dir]
104 slacapra 1.19 submit 2 jobs, the ones already created (-continue)
105    
106 slacapra 1.26 ~>crab -create -submit 2
107 slacapra 1.19 create _and_ submit 2 jobs
108    
109 spiga 1.25 ~>crab -status
110 slacapra 1.19 check the status of all jobs
111    
112 spiga 1.25 ~>crab -getoutput
113 slacapra 1.19 get back the output of all jobs
114    
115 ewv 1.44 ~>crab -publish
116     publish all user outputs in the DBS specified in the crab.cfg (dbs_url_for_publication) or written as argument of this option
117 fanzago 1.42
118 slacapra 1.20 =head1 RUNNING CMSSW WITH CRAB
119 nsmirnov 1.1
120 slacapra 1.3 =over 4
121    
122     =item B<A)>
123    
124 ewv 1.52 Develop your code in your CMSSW working area. Do anything which is needed to run interactively your executable, including the setup of run time environment (I<eval `scramv1 runtime -sh|csh`>), a suitable I<ParameterSet>, etc. It seems silly, but B<be extra sure that you actually did compile your code> I<scramv1 b>.
125 slacapra 1.3
126 ewv 1.44 =item B<B)>
127 slacapra 1.3
128 slacapra 1.20 Source B<crab.(c)sh> from the CRAB installation area, which have been setup either by you or by someone else for you. Modify the CRAB configuration file B<crab.cfg> according to your need: see below for a complete list.
129    
130     The most important parameters are the following (see below for complete description of each parameter):
131    
132     =item B<Mandatory!>
133    
134     =over 6
135    
136     =item B<[CMSSW]> section: datasetpath, pset, splitting parameters, output_file
137    
138     =item B<[USER]> section: output handling parameters, such as return_data, copy_data etc...
139    
140     =back
141    
142     =item B<Run it!>
143    
144 fanzago 1.37 You must have a valid voms-enabled Grid proxy. See CRAB web page for details.
145 slacapra 1.20
146     =back
147    
148 spiga 1.94 =head1 RUNNING MULTICRAB
149    
150 ewv 1.98 MultiCRAB is a CRAB extension to submit the same job to multiple datasets in one go.
151 spiga 1.94
152 ewv 1.98 The use case for multicrab is when you have your analysis code that you want to run on several datasets, typically some signals plus some backgrounds (for MC studies)
153 spiga 1.94 or on different streams/configuration/runs for real data taking. You want to run exactly the same code, and also the crab.cfg are different only for few keys:
154 ewv 1.98 for sure datasetpath but also other keys, such as eg total_number_of_events, in case you want to run on all signals but only a fraction of background, or anything else.
155 spiga 1.94 So far, you would have to create a set of crab.cfg, one for each dataset you want to access, and submit several instances of CRAB, saving the output to different locations.
156     Multicrab is meant to automatize this procedure.
157     In addition to the usual crab.cfg, there is a new configuration file called multicrab.cfg. The syntax is very similar to that of crab.cfg, namely
158     [SECTION] <crab.cfg Section>.Key=Value
159    
160     Please note that it is mandatory to add explicitly the crab.cfg [SECTION] in front of [KEY].
161     The role of multicrab.cfg is to apply modification to the template crab.cfg, some which are common to all tasks, and some which are task specific.
162    
163     =head2 So there are two sections:
164    
165     =over 2
166    
167 ewv 1.98 =item B<[COMMON]>
168 spiga 1.94
169     section: which applies to all task, and which is fully equivalent to modify directly the template crab.cfg
170    
171 ewv 1.98 =item B<[DATASET]>
172 spiga 1.94
173 ewv 1.98 section: there could be an arbitrary number of sections, one for each dataset you want to run. The names are free (but COMMON and MULTICRAB), and they will be used as ui_working_dir for the task as well as an appendix to the user_remote_dir in case of output copy to remote SE. So, the task corresponding to section, say [SIGNAL] will be placed in directory SIGNAL, and the output will be put on /SIGNAL/, so SIGNAL will be added as last subdir in the user_remote_dir.
174 spiga 1.94
175     =back
176    
177     For further details please visit
178    
179     I<https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideMultiCrab>
180    
181 slacapra 1.19 =head1 HOW TO RUN ON CONDOR-G
182    
183     The B<Condor-G> mode for B<CRAB> is a special submission mode next to the standard Resource Broker submission. It is designed to submit jobs directly to a site and not using the Resource Broker.
184    
185 ewv 1.52 Due to the nature of B<Condor-G> submission, the B<Condor-G> mode is restricted to OSG sites within the CMS Grid, currently the 7 US T2: Florida(ufl.edu), Nebraska(unl.edu), San Diego(ucsd.edu), Purdue(purdue.edu), Wisconsin(wisc.edu), Caltech(ultralight.org), MIT(mit.edu).
186 slacapra 1.19
187     =head2 B<Requirements:>
188    
189     =over 2
190    
191     =item installed and running local Condor scheduler
192    
193     (either installed by the local Sysadmin or self-installed using the VDT user interface: http://www.uscms.org/SoftwareComputing/UserComputing/Tutorials/vdt.html)
194    
195     =item locally available LCG or OSG UI installation
196    
197 ewv 1.44 for authentication via Grid certificate proxies ("voms-proxy-init -voms cms" should result in valid proxy)
198 slacapra 1.19
199 spiga 1.96 =item set the environment variable GRID_WL_LOCATION to the edg directory of the local LCG or OSG UI installation
200 slacapra 1.19
201     =back
202    
203     =head2 B<What the Condor-G mode can do:>
204    
205     =over 2
206    
207 ewv 1.52 =item submission directly to multiple OSG sites,
208 slacapra 1.19
209 ewv 1.52 the requested dataset must be published correctly by the site in the local and global services.
210     Previous restrictions on submitting only to a single site have been removed. SE and CE whitelisting
211     and blacklisting work as in the other modes.
212 slacapra 1.19
213     =back
214    
215     =head2 B<What the Condor-G mode cannot do:>
216    
217     =over 2
218    
219     =item submit jobs if no condor scheduler is running on the submission machine
220    
221     =item submit jobs if the local condor installation does not provide Condor-G capabilities
222    
223 ewv 1.52 =item submit jobs to an LCG site
224 slacapra 1.19
225 fanzago 1.37 =item support Grid certificate proxy renewal via the myproxy service
226 slacapra 1.19
227     =back
228    
229     =head2 B<CRAB configuration for Condor-G mode:>
230    
231 ewv 1.52 The CRAB configuration for the Condor-G mode only requires one change in crab.cfg:
232 nsmirnov 1.1
233 slacapra 1.19 =over 2
234 slacapra 1.3
235 slacapra 1.19 =item select condor_g Scheduler:
236 slacapra 1.4
237 slacapra 1.19 scheduler = condor_g
238 slacapra 1.4
239 slacapra 1.19 =back
240 slacapra 1.4
241 ewv 1.52 =head1 COMMANDS
242 slacapra 1.4
243     =over 4
244    
245 slacapra 1.26 =item B<-create>
246 slacapra 1.4
247 slacapra 1.26 Create the jobs: from version 1_3_0 it is only possible to create all jobs.
248 ewv 1.52 The maximum number of jobs depends on dataset and splitting directives. This set of identical jobs accessing the same dataset are defined as a task.
249 slacapra 1.4 This command create a directory with default name is I<crab_0_date_time> (can be changed via ui_working_dir parameter, see below). Inside this directory it is placed whatever is needed to submit your jobs. Also the output of your jobs (once finished) will be place there (see after). Do not cancel by hand this directory: rather use -clean (see).
250     See also I<-continue>.
251    
252 slacapra 1.46 =item B<-submit [range]>
253 slacapra 1.4
254 ewv 1.98 Submit n jobs: 'n' is either a positive integer or 'all' or a [range]. The default is all.
255     If 'n' is passed as an argument, the first 'n' suitable jobs will be submitted. Please note that this is behaviour is different from other commands, where -command N means act the command to the job N, and not to the first N jobs. If a [range] is passed, the selected jobs will be submitted.
256     This option may be used in conjunction with -create (to create and submit immediately) or with -continue (which is assumed by default) to submit previously created jobs. Failure to do so will stop CRAB and generate an error message. See also I<-continue>.
257 slacapra 1.4
258     =item B<-continue [dir] | -c [dir]>
259    
260 ewv 1.98 Apply the action on the task stored in directory [dir]. If the task directory is the standard one (crab_0_date_time), the most recent in time is assumed. Any other directory must be specified.
261     Basically all commands (except -create) need -continue, so it is automatically assumed. Of course, the standard task directory is used in this case.
262 slacapra 1.4
263 slacapra 1.102 =item B<-status [v|verbose]>
264 nsmirnov 1.1
265 slacapra 1.102 Check the status of the jobs, in all states. With the server, the full status, including application and wrapper exit codes, is available as soon as the jobs end. In StandAlone mode it is necessary to retrieve (-get) the job output first. With B<v|verbose> some more information is displayed.
266 nsmirnov 1.1
267 slacapra 1.20 =item B<-getoutput|-get [range]>
268 nsmirnov 1.1
269 slacapra 1.102 Retrieve the output declared by the user via the output sandbox. By default the output will be put in task working dir under I<res> subdirectory. This can be changed via config parameters. B<Be extra sure that you have enough free space>. From version 2_3_x, the available free space is checked in advance. See I<range> below for syntax.
270 nsmirnov 1.1
271 spiga 1.100 =item B<-publish>
272 fanzago 1.42
273 ewv 1.98 Publish user output in a local DBS instance after the retrieval of output. By default publish uses the dbs_url_for_publication specified in the crab.cfg file, otherwise you can supply it as an argument of this option.
274 fanzago 1.117 Warnings about publication:
275    
276     CRAB publishes only EDM files (in the FJR they are written in the tag <File>)
277    
278     By default the publication of files containing 0 events is desabled. If you want to enable it you have to set the parameter [USER].publish_zero_event=1 in crab.cfg.
279    
280     CRAB publishes in the same USER dataset more EDM files if they are produced by a job and written in the tag <File> of FJR.
281    
282     It is not possible for the user to select only one file to publish, nor to publish two files in two different USER datasets.
283    
284 fanzago 1.42
285 fanzago 1.97 =item B<-checkPublication [-USER.dbs_url_for_publication=dbs_url -USER.dataset_to_check=datasetpath -debug]>
286    
287 ewv 1.98 Check if a dataset is published in a DBS. This option is automaticaly called at the end of the publication step, but it can be also used as a standalone command. By default it reads the parameters (USER.dbs_url_for_publication and USER.dataset_to_check) in your crab.cfg. You can overwrite the defaults in crab.cfg by passing these parameters as option. Using the -debug option, you will get detailed info about the files of published blocks.
288 fanzago 1.97
289 slacapra 1.4 =item B<-resubmit [range]>
290 nsmirnov 1.1
291 fanzago 1.37 Resubmit jobs which have been previously submitted and have been either I<killed> or are I<aborted>. See I<range> below for syntax.
292 nsmirnov 1.1
293 spiga 1.60 =item B<-extend>
294    
295 ewv 1.64 Create new jobs for an existing task, checking if new blocks are available for the given dataset.
296 spiga 1.60
297 slacapra 1.4 =item B<-kill [range]>
298 nsmirnov 1.1
299 slacapra 1.4 Kill (cancel) jobs which have been submitted to the scheduler. A range B<must> be used in all cases, no default value is set.
300 nsmirnov 1.1
301 fanzago 1.115 =item B<-copyData [range -dest_se=the official SE name or -dest_endpoint=the complete endpoint of the remote SE]>
302 slacapra 1.58
303 ewv 1.118 Option that can be used only if your output have been previously copied by CRAB on a remote SE.
304 fanzago 1.115 By default the copyData copies your output from the remote SE locally on the current CRAB working directory (under res). Otherwise you can copy the output from the remote SE to another one, specifying either -dest_se=<the remote SE official name> or -dest_endpoint=<the complete endpoint of remote SE>. If dest_se is used, CRAB finds the correct path where the output can be stored.
305    
306     Example: crab -copyData --> output copied to crab_working_dir/res directory
307     crab -copyData -dest_se=T2_IT_Legnaro --> output copied to the legnaro SE, directory discovered by CRAB
308 ewv 1.118 crab -copyData -dest_endpoint=srm://<se_name>:8443/xxx/yyyy/zzzz --> output copied to the se <se_name> under
309     /xxx/yyyy/zzzz directory.
310 slacapra 1.58
311 spiga 1.80 =item B<-renewCredential >
312 mcinquil 1.59
313 spiga 1.80 If using the server modality, this command allows to delegate a valid credential (proxy/token) to the server associated with the task.
314 mcinquil 1.59
315 spiga 1.85 =item B<-match|-testJdl [range]>
316 nsmirnov 1.1
317 fanzago 1.71 Check if the job can find compatible resources. It is equivalent of doing I<edg-job-list-match> on edg.
318 nsmirnov 1.1
319 slacapra 1.20 =item B<-printId [range]>
320    
321 slacapra 1.82 Just print the job identifier, which can be the SID (Grid job identifier) of the job(s) or the taskId if you are using CRAB with the server or local scheduler Id. If [range] is "full", the the SID of all the jobs are printed, also in the case of submission with server.
322 slacapra 1.20
323 spiga 1.53 =item B<-printJdl [range]>
324    
325 ewv 1.64 Collect the full Job Description in a file located under share directory. The file base name is File- .
326 spiga 1.53
327 slacapra 1.4 =item B<-postMortem [range]>
328 nsmirnov 1.1
329 slacapra 1.46 Try to collect more information of the job from the scheduler point of view.
330 nsmirnov 1.1
331 slacapra 1.13 =item B<-list [range]>
332    
333 ewv 1.52 Dump technical information about jobs: for developers only.
334 slacapra 1.13
335 slacapra 1.89 =item B<-report>
336    
337     Print a short report about the task, namely the total number of events and files processed/requested/available, the name of the datasetpath, a summary of the status of the jobs, the list of runs and lumi sections, and so on. In principle it should contain all the info needed for analysis. Work in progress.
338    
339 slacapra 1.4 =item B<-clean [dir]>
340 nsmirnov 1.1
341 slacapra 1.26 Clean up (i.e. erase) the task working directory after a check whether there are still running jobs. In case, you are notified and asked to kill them or retrieve their output. B<Warning> this will possibly delete also the output produced by the task (if any)!
342 nsmirnov 1.1
343 calloni 1.114 =item B<-cleanCache>
344 calloni 1.110
345 ewv 1.112 Clean up (i.e. erase) the SiteDb, WMS and CrabServer caches in your submitting directory
346 calloni 1.110
347 slacapra 1.4 =item B<-help [format] | -h [format]>
348 nsmirnov 1.1
349 slacapra 1.4 This help. It can be produced in three different I<format>: I<man> (default), I<tex> and I<html>.
350 nsmirnov 1.1
351 slacapra 1.4 =item B<-v>
352 nsmirnov 1.1
353 slacapra 1.4 Print the version and exit.
354 nsmirnov 1.1
355 slacapra 1.4 =item B<range>
356 nsmirnov 1.1
357 slacapra 1.13 The range to be used in many of the above commands has the following syntax. It is a comma separated list of jobs ranges, each of which may be a job number, or a job range of the form first-last.
358 slacapra 1.4 Example: 1,3-5,8 = {1,3,4,5,8}
359 nsmirnov 1.1
360 ewv 1.44 =back
361 slacapra 1.6
362 slacapra 1.4 =head1 OPTION
363 nsmirnov 1.1
364 slacapra 1.6 =over 4
365    
366 slacapra 1.4 =item B<-cfg [file]>
367 nsmirnov 1.1
368 slacapra 1.4 Configuration file name. Default is B<crab.cfg>.
369 nsmirnov 1.1
370 slacapra 1.4 =item B<-debug [level]>
371 nsmirnov 1.1
372 slacapra 1.13 Set the debug level: high number for high verbosity.
373 nsmirnov 1.1
374 ewv 1.44 =back
375 slacapra 1.6
376 slacapra 1.5 =head1 CONFIGURATION PARAMETERS
377    
378 spiga 1.25 All the parameter describe in this section can be defined in the CRAB configuration file. The configuration file has different sections: [CRAB], [USER], etc. Each parameter must be defined in its proper section. An alternative way to pass a config parameter to CRAB is via command line interface; the syntax is: crab -SECTION.key value . For example I<crab -USER.outputdir MyDirWithFullPath> .
379 slacapra 1.5 The parameters passed to CRAB at the creation step are stored, so they cannot be changed by changing the original crab.cfg . On the other hand the task is protected from any accidental change. If you want to change any parameters, this require the creation of a new task.
380 slacapra 1.6 Mandatory parameters are flagged with a *.
381 slacapra 1.5
382     B<[CRAB]>
383 slacapra 1.6
384 slacapra 1.13 =over 4
385 slacapra 1.5
386 slacapra 1.6 =item B<jobtype *>
387 slacapra 1.5
388 slacapra 1.26 The type of the job to be executed: I<cmssw> jobtypes are supported
389 slacapra 1.6
390     =item B<scheduler *>
391    
392 ewv 1.52 The scheduler to be used: I<glitecoll> is the more efficient grid scheduler and should be used. Other choice are I<glite>, same as I<glitecoll> but without bulk submission (and so slower) or I<condor_g> (see specific paragraph) or I<edg> which is the former Grid scheduler, which will be dismissed in some future
393     From version 210, also local scheduler are supported, for the time being only at CERN. I<LSF> is the standard CERN local scheduler or I<CAF> which is LSF dedicated to CERN Analysis Facilities.
394 slacapra 1.5
395 slacapra 1.81 =item B<use_server>
396    
397     To use the server for job handling (recommended) 0=no (default), 1=true. The server to be used will be found automatically from a list of available ones: it can also be specified explicitly by using I<server_name> (see below)
398    
399 mcinquil 1.35 =item B<server_name>
400    
401 slacapra 1.81 To use the CRAB-server support it is needed to fill this key with server name as <Server_DOMAIN> (e.g. cnaf,fnal). If this is set, I<use_server> is set to true automatically.
402     If I<server_name=None> crab works in standalone way, same as using I<use_server=0> and no I<server_name>.
403 spiga 1.48 The server available to users can be found from CRAB web page.
404 mcinquil 1.35
405 slacapra 1.5 =back
406    
407 slacapra 1.20 B<[CMSSW]>
408    
409     =over 4
410    
411 slacapra 1.22 =item B<datasetpath *>
412 slacapra 1.20
413 ewv 1.108 The path of the processed or analysis dataset as defined in DBS. It comes with the format I</PrimaryDataset/DataTier/Process[/OptionalADS]>. If no input is needed I<None> must be specified. When running on an analysis dataset, the job splitting must be specified by luminosity block rather than event. Analysis datasets are only treated accurately on a lumi-by-lumi level with CMSSW 3_1_x and later.
414 spiga 1.90
415 afanfani 1.50 =item B<runselection *>
416 ewv 1.52
417 ewv 1.108 Within a dataset you can restrict to run on a specific run number or run number range. For example runselection=XYZ or runselection=XYZ1-XYZ2 .
418 afanfani 1.50
419 spiga 1.57 =item B<use_parent *>
420    
421 ewv 1.108 Within a dataset you can ask to run over the related parent files too. E.g., this will give you access to the RAW data while running over a RECO sample. Setting use_parent=1 CRAB determines the parent files from DBS and will add secondaryFileNames = cms.untracked.vstring( <LIST of parent FIles> ) to the pool source section of your parameter set.
422 spiga 1.57
423 slacapra 1.22 =item B<pset *>
424 slacapra 1.20
425 ewv 1.112 The python ParameterSet to be used.
426 slacapra 1.20
427 ewv 1.111 =item B<pycfg_params *>
428    
429     These parameters are passed to the python config file, as explained in https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideAboutPythonConfigFile#Passing_Command_Line_Arguments_T
430    
431 slacapra 1.26 =item I<Of the following three parameter exactly two must be used, otherwise CRAB will complain.>
432 slacapra 1.20
433 slacapra 1.22 =item B<total_number_of_events *>
434    
435 ewv 1.108 The number of events to be processed. To access all available events, use I<-1>. Of course, the latter option is not viable in case of no input. In this case, the total number of events will be used to split the task in jobs, together with I<events_per_job>.
436 slacapra 1.22
437 slacapra 1.26 =item B<events_per_job*>
438 slacapra 1.22
439 ewv 1.108 The number of events to be accessed by each job. Since a job cannot cross the boundary of a fileblock it might be that the actual number of events per job is not exactly what you asked for. It can be used also with no input.
440    
441     =item B<total_number_of_lumis *>
442    
443     The number of luminosity blocks to be processed. This option is only valid when using analysis datasets. Since a job cannot access less than a whole file, it may be that the actual number of lumis per job is more than you asked for. Two of I<total_number_of_lumis>, I<lumis_per_job>, and I<number_of_jobs> must be supplied to run on an analysis dataset.
444    
445     =item B<lumis_per_job*>
446    
447     The number of luminosity blocks to be accessed by each job. This option is only valid when using analysis datasets. Since a job cannot access less than a whole file, it may be that the actual number of lumis per job is more than you asked for.
448 slacapra 1.22
449     =item B<number_of_jobs *>
450    
451 ewv 1.108 Define the number of jobs to be run for the task. The number of event for each job is computed taking into account the total number of events required as well as the granularity of EventCollections. Can be used also with No input.
452 slacapra 1.22
453 spiga 1.90 =item B<split_by_run *>
454    
455 ewv 1.108 To activate the split run based (each job will access a different run) use I<split_by_run>=1. You can also define I<number_of_jobs> and/or I<runselection>. NOTE: the Run Based combined with Event Based split is not yet available.
456 spiga 1.90
457 slacapra 1.22 =item B<output_file *>
458    
459 ewv 1.108 The output files produced by your application (comma separated list). From CRAB 2_2_2 onward, if TFileService is defined in user Pset, the corresponding output file is automatically added to the list of output files. User can avoid this by setting B<skip_TFileService_output> = 1 (default is 0 == file included). The Edm output produced via PoolOutputModule can be automatically added by setting B<get_edm_output> = 1 (default is 0 == no). B<warning> it is not allowed to have a PoolOutputSource and not save it somewhere, since it is a waste of resource on the WN. In case you really want to do that, and if you really know what you are doing (hint: you dont!) you can user I<ignore_edm_output=1>.
460 slacapra 1.61
461     =item B<skip_TFileService_output>
462    
463     Force CRAB to skip the inclusion of file produced by TFileService to list of output files. Default is I<0>, namely the file is included.
464 slacapra 1.20
465 slacapra 1.63 =item B<get_edm_output>
466    
467     Force CRAB to add the EDM output file, as defined in PSET in PoolOutputModule (if any) to be added to the list of output files. Default is 0 (== no inclusion)
468    
469 ewv 1.47 =item B<increment_seeds>
470    
471     Specifies a comma separated list of seeds to increment from job to job. The initial value is taken
472     from the CMSSW config file. I<increment_seeds=sourceSeed,g4SimHits> will set sourceSeed=11,12,13 and g4SimHits=21,22,23 on
473     subsequent jobs if the values of the two seeds are 10 and 20 in the CMSSW config file.
474    
475     See also I<preserve_seeds>. Seeds not listed in I<increment_seeds> or I<preserve_seeds> are randomly set for each job.
476    
477     =item B<preserve_seeds>
478    
479 ewv 1.78 Specifies a comma separated list of seeds to which CRAB will not change from their values in the user
480 ewv 1.47 CMSSW config file. I<preserve_seeds=sourceSeed,g4SimHits> will leave the Pythia and GEANT seeds the same for every job.
481    
482     See also I<increment_seeds>. Seeds not listed in I<increment_seeds> or I<preserve_seeds> are randomly set for each job.
483    
484 slacapra 1.30 =item B<first_run>
485    
486 ewv 1.119 Relevant only for Monte Carlo production for which it defaults to 1. The first job will generate events with this run number, subsequent jobs will
487 ewv 1.118 increment the run number. Failing to set this number means CMSSW will not be able to read multiple such files as they
488     will all have the same run and event numbers. This check in CMSSW can be bypassed by setting
489     I<process.source.duplicateCheckMode = cms.untracked.string('noDuplicateCheck')> in the input source, should you need to
490     read files produced without setting first_run.
491 slacapra 1.30
492 ewv 1.78 =item B<generator>
493 ewv 1.79
494     Name of the generator your MC job is using. Some generators require CRAB to skip events, others do not.
495 ewv 1.104 Possible values are pythia (default), comphep, lhe, and madgraph. This will skip events in your generator input file.
496 ewv 1.78
497 slacapra 1.31 =item B<executable>
498 slacapra 1.30
499 slacapra 1.31 The name of the executable to be run on remote WN. The default is cmsrun. The executable is either to be found on the release area of the WN, or has been built on user working area on the UI and is (automatically) shipped to WN. If you want to run a script (which might internally call I<cmsrun>, use B<USER.script_exe> instead.
500 slacapra 1.30
501     =item I<DBS and DLS parameters:>
502    
503 slacapra 1.26 =item B<dbs_url>
504 slacapra 1.6
505 slacapra 1.40 The URL of the DBS query page. For expert only.
506 slacapra 1.13
507 spiga 1.84 =item B<show_prod>
508    
509 ewv 1.98 To enable CRAB to show data hosted on Tier1s sites specify I<show_prod> = 1. By default those data are masked.
510 spiga 1.86
511 spiga 1.116 =item B<subscribed>
512    
513     By setting the flag I<subscribed> = 1 only the replicas that are subscribed to its site are considered.The default is to return all replicas. The intended use of this flag is to avoid sending jobs to sites based on data that is being moved or deleted (and thus not subscribed).
514    
515 spiga 1.86 =item B<no_block_boundary>
516    
517 ewv 1.98 To remove fileblock boundaries in job splitting specify I<no_block_boundary> = 1.
518 spiga 1.84
519 slacapra 1.13 =back
520    
521     B<[USER]>
522    
523     =over 4
524    
525 slacapra 1.6 =item B<additional_input_files>
526    
527 spiga 1.67 Any additional input file you want to ship to WN: comma separated list. IMPORTANT NOTE: they will be placed in the WN working dir, and not in ${CMS_SEARCH_PATH}. Specific files required by CMSSW application must be placed in the local data directory, which will be automatically shipped by CRAB itself. You do not need to specify the I<ParameterSet> you are using, which will be included automatically. Wildcards are allowed.
528 slacapra 1.6
529 slacapra 1.31 =item B<script_exe>
530    
531 ewv 1.112 A user script that will be run on WN (instead of default cmsrun). It is up to the user to setup properly the script itself to run on WN enviroment. CRAB guarantees that the CMSSW environment is setup (e.g. scram is in the path) and that the modified pset.py will be placed in the working directory, with name CMSSW.py . The user must ensure that a job report named crab_fjr.xml will be written. This can be guaranteed by passing the arguments "-j crab_fjr.xml" to cmsRun in the script. The script itself will be added automatically to the input sandbox so user MUST NOT add it within the B<USER.additional_input_files>.
532 slacapra 1.31
533 spiga 1.105 =item B<script_arguments>
534    
535     Any arguments you want to pass to the B<USER.script_exe>: comma separated list.
536    
537 slacapra 1.6 =item B<ui_working_dir>
538    
539 ewv 1.52 Name of the working directory for the current task. By default, a name I<crab_0_(date)_(time)> will be used. If this card is set, any CRAB command which require I<-continue> need to specify also the name of the working directory. A special syntax is also possible, to reuse the name of the dataset provided before: I<ui_working_dir : %(dataset)s> . In this case, if e.g. the dataset is SingleMuon, the ui_working_dir will be set to SingleMuon as well.
540 slacapra 1.6
541 mcinquil 1.35 =item B<thresholdLevel>
542    
543     This has to be a value between 0 and 100, that indicates the percentage of task completeness (jobs in a ended state are complete, even if failed). The server will notify the user by e-mail (look at the field: B<eMail>) when the task will reach the specified threshold. Works just with the server_mode = 1.
544    
545     =item B<eMail>
546    
547 ewv 1.52 The server will notify the specified e-mail when the task will reaches the specified B<thresholdLevel>. A notification is also sent when the task will reach the 100\% of completeness. This field can also be a list of e-mail: "B<eMail = user1@cern.ch, user2@cern.ch>". Works just with the server_mode = 1.
548 mcinquil 1.35
549 slacapra 1.6 =item B<return_data *>
550    
551 ewv 1.52 The output produced by the executable on WN is returned (via output sandbox) to the UI, by issuing the I<-getoutput> command. B<Warning>: this option should be used only for I<small> output, say less than 10MB, since the sandbox cannot accommodate big files. Depending on Resource Broker used, a size limit on output sandbox can be applied: bigger files will be truncated. To be used in alternative to I<copy_data>.
552 slacapra 1.6
553     =item B<outputdir>
554    
555 ewv 1.52 To be used together with I<return_data>. Directory on user interface where to store the output. Full path is mandatory, "~/" is not allowed: the default location of returned output is ui_working_dir/res .
556 slacapra 1.6
557     =item B<logdir>
558    
559 ewv 1.52 To be used together with I<return_data>. Directory on user interface where to store the standard output and error. Full path is mandatory, "~/" is not allowed: the default location of returned output is ui_working_dir/res .
560 slacapra 1.6
561     =item B<copy_data *>
562    
563 ewv 1.52 The output (only that produced by the executable, not the std-out and err) is copied to a Storage Element of your choice (see below). To be used as an alternative to I<return_data> and recommended in case of large output.
564 slacapra 1.6
565     =item B<storage_element>
566    
567 fanzago 1.71 To be used with <copy_data>=1
568     If you want to copy the output of your analysis in a official CMS Tier2 or Tier3, you have to write the CMS Site Name of the site, as written in the SiteDB https://cmsweb.cern.ch/sitedb/reports/showReport?reportid=se_cmsname_map.ini (i.e T2_IT_legnaro). You have also to specify the <remote_dir>(see below)
569    
570 ewv 1.78 If you want to copy the output in a not_official_CMS remote site you have to specify the complete storage element name (i.e se.xxx.infn.it).You have also to specify the <storage_path> and the <storage_port> if you do not use the default one(see below).
571 fanzago 1.71
572     =item B<user_remote_dir>
573    
574     To be used with <copy_data>=1 and <storage_element> official CMS sites.
575 ewv 1.104 This is the directory or tree of directories where your output will be stored. This directory will be created under the mountpoint ( which will be discover by CRAB if an official CMS storage Element has been used, or taken from the crab.cfg as specified by the user). B<NOTE> This part of the path will be used as logical file name of your files in the case of publication without using an official CMS storage Element. Generally it should start with "/store".
576 slacapra 1.6
577     =item B<storage_path>
578    
579 fanzago 1.71 To be used with <copy_data>=1 and <storage_element> not official CMS sites.
580     This is the full path of the Storage Element writeable by all, the mountpoint of SE (i.e /srm/managerv2?SFN=/pnfs/se.xxx.infn.it/yyy/zzz/)
581    
582 slacapra 1.6
583 fanzago 1.72 =item B<storage_pool>
584    
585     If you are using CAF scheduler, you can specify the storage pool where to write your output.
586     The default is cmscafuser. If you do not want to use the default, you can overwrite it specifing None
587    
588 spiga 1.70 =item B<storage_port>
589    
590     To choose the storage port specify I<storage_port> = N (default is 8443) .
591    
592 fanzago 1.101 =item B<local_stage_out *>
593    
594 ewv 1.104 This option enables the local stage out of produced output to the "close storage element" where the job is running, in case of failure of the remote copy to the Storage element decided by the user in che crab.cfg. It has to be used with the copy_data option. In the case of backup copy, the publication of data is forbidden. Set I<local_stage_out> = 1
595 fanzago 1.101
596 fanzago 1.71 =item B<publish_data*>
597    
598     To be used with <copy_data>=1
599     To publish your produced output in a local istance of DBS set publish_data = 1
600 fanzago 1.77 All the details about how to use this functionality are written in https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCrabForPublication
601 ewv 1.78 N.B 1) if you are using an official CMS site to stored data, the remote dir will be not considered. The directory where data will be stored is decided by CRAB, following the CMS policy in order to be able to re-read published data.
602     2) if you are using a not official CMS site to store data, you have to check the <lfn>, that will be part of the logical file name of you published files, in order to be able to re-read the data.
603 fanzago 1.71
604 fanzago 1.106 =item B<publish_with_import_all_parents>
605 ewv 1.108
606 fanzago 1.107 To publish your data in your local DBS importing also the complete parents tree, set publish_with_import_all_parents=1, otherwise 0. In this last case only the dataset that you have analyzed will be imported as parent in your local DBS. Default value is 1.
607 fanzago 1.106
608 fanzago 1.71 =item B<publish_data_name>
609    
610     You produced output will be published in your local DBS with dataset name <primarydataset>/<publish_data_name>/USER
611    
612     =item B<dbs_url_for_publication>
613    
614     Specify the URL of your local DBS istance where CRAB has to publish the output files
615    
616 fanzago 1.101 =item B<publish_zero_event>
617 spiga 1.93
618 fanzago 1.101 T0 force zero event files publication specify I<publish_zero_event> = 1
619 spiga 1.93
620 spiga 1.55 =item B<srm_version>
621 slacapra 1.46
622 spiga 1.69 To choose the srm version specify I<srm_version> = (srmv1 or srmv2).
623 slacapra 1.46
624 spiga 1.51 =item B<xml_report>
625    
626     To be used to switch off the screen report during the status query, enabling the db serialization in a file. Specifying I<xml_report> = FileName CRAB will serialize the DB into CRAB_WORKING_DIR/share/FileName.
627 slacapra 1.6
628 spiga 1.55 =item B<usenamespace>
629    
630 ewv 1.64 To use the automate namespace definition (perfomed by CRAB) it is possible to set I<usenamespace>=1. The same policy used for the stage out in case of data publication will be applied.
631 spiga 1.54
632 spiga 1.55 =item B<debug_wrapper>
633    
634 spiga 1.87 To enable the higer verbose level on wrapper specify I<debug_wrapper> = 1. The Pset contents before and after the CRAB maipulation will be written together with other useful infos.
635 spiga 1.54
636 spiga 1.75 =item B<deep_debug>
637    
638 ewv 1.78 To be used in case of unexpected job crash when the sdtout and stderr files are lost. Submitting again the same jobs specifying I<deep_debug> = 1 these files will be reported back. NOTE: it works only on standalone mode for debugging purpose.
639 spiga 1.75
640 slacapra 1.68 =item B<dontCheckSpaceLeft>
641    
642     Set it to 1 to skip the check of free space left on your working directory before attempting to get the output back. Default is 0 (=False)
643    
644 slacapra 1.6 =back
645    
646 spiga 1.96 B<[GRID]>
647 nsmirnov 1.1
648 slacapra 1.13 =over 4
649 slacapra 1.6
650 slacapra 1.13 =item B<RB>
651 slacapra 1.6
652 spiga 1.96 Which RB you want to use instead of the default one, as defined in the configuration of your UI. The ones available for CMS are I<CERN> and I<CNAF>. They are actually identical, being a collection of all WMSes available for CMS: the configuration files needed to change the broker will be automatically downloaded from CRAB web page and used.
653     You can use any other RB which is available, if you provide the proper configuration files. E.g., for gLite WMS XYZ, you should provide I<glite.conf.CMS_XYZ>. These files are searched for in the current working directory, and, if not found, on crab web page. So, if you put your private configuration files in the working directory, they will be used, even if they are not available on crab web page.
654 slacapra 1.29 Please get in contact with crab team if you wish to provide your RB or WMS as a service to the CMS community.
655 slacapra 1.6
656 slacapra 1.14 =item B<proxy_server>
657    
658     The proxy server to which you delegate the responsibility to renew your proxy once expired. The default is I<myproxy.cern.ch> : change only if you B<really> know what you are doing.
659    
660 slacapra 1.26 =item B<role>
661    
662     The role to be set in the VOMS. See VOMS documentation for more info.
663    
664 slacapra 1.27 =item B<group>
665    
666     The group to be set in the VOMS, See VOMS documentation for more info.
667    
668 slacapra 1.28 =item B<dont_check_proxy>
669    
670 ewv 1.52 If you do not want CRAB to check your proxy. The creation of the proxy (with proper length), its delegation to a myproxyserver is your responsibility.
671 slacapra 1.28
672 spiga 1.95 =item B<dont_check_myproxy>
673    
674     If you want to to switch off only the proxy renewal set I<dont_check_myproxy>=1. The proxy delegation to a myproxyserver is your responsibility.
675    
676 slacapra 1.6 =item B<requirements>
677    
678     Any other requirements to be add to JDL. Must be written in compliance with JDL syntax (see LCG user manual for further info). No requirement on Computing element must be set.
679    
680 slacapra 1.27 =item B<additional_jdl_parameters:>
681    
682 spiga 1.48 Any other parameters you want to add to jdl file:semicolon separated list, each
683 ewv 1.44 item B<must> be complete, including the closing ";".
684 spiga 1.48
685     =item B<wms_service>
686    
687 fanzago 1.71 With this field it is also possible to specify which WMS you want to use (https://hostname:port/pathcode) where "hostname" is WMS name, the "port" generally is 7443 and the "pathcode" should be something like "glite_wms_wmproxy_server".
688 slacapra 1.27
689 slacapra 1.6 =item B<max_cpu_time>
690    
691     Maximum CPU time needed to finish one job. It will be used to select a suitable queue on the CE. Time in minutes.
692    
693     =item B<max_wall_clock_time>
694    
695     Same as previous, but with real time, and not CPU one.
696    
697 spiga 1.88 =item B<ce_black_list>
698 slacapra 1.6
699 ewv 1.66 All the CE (Computing Element) whose name contains the following strings (comma separated list) will not be considered for submission. Use the dns domain (e.g. fnal, cern, ifae, fzk, cnaf, lnl,....). You may use hostnames or CMS Site names (T2_DE_DESY) or substrings.
700 slacapra 1.6
701 spiga 1.88 =item B<ce_white_list>
702 slacapra 1.6
703 ewv 1.66 Only the CE (Computing Element) whose name contains the following strings (comma separated list) will be considered for submission. Use the dns domain (e.g. fnal, cern, ifae, fzk, cnaf, lnl,....). You may use hostnames or CMS Site names (T2_DE_DESY) or substrings. Please note that if the selected CE(s) does not contain the data you want to access, no submission can take place.
704 slacapra 1.27
705 spiga 1.88 =item B<se_black_list>
706 slacapra 1.27
707 ewv 1.66 All the SE (Storage Element) whose name contains the following strings (comma separated list) will not be considered for submission.It works only if a datasetpath is specified. You may use hostnames or CMS Site names (T2_DE_DESY) or substrings.
708 slacapra 1.27
709 spiga 1.88 =item B<se_white_list>
710 slacapra 1.27
711 ewv 1.66 Only the SE (Storage Element) whose name contains the following strings (comma separated list) will be considered for submission.It works only if a datasetpath is specified. Please note that if the selected CE(s) does not contain the data you want to access, no submission can take place. You may use hostnames or CMS Site names (T2_DE_DESY) or substrings.
712 slacapra 1.6
713 spiga 1.73 =item B<remove_default_blacklist>
714    
715 ewv 1.78 CRAB enforce the T1s Computing Eelements Black List. By default it is appended to the user defined I<CE_black_list>. To remove the enforced T1 black lists set I<remove_default_blacklist>=1.
716 spiga 1.73
717 slacapra 1.6 =item B<virtual_organization>
718    
719 spiga 1.94 You do not want to change this: it is cms!
720 slacapra 1.6
721     =item B<retry_count>
722    
723 fanzago 1.37 Number of time the Grid will try to resubmit your job in case of Grid related problem.
724 slacapra 1.6
725 slacapra 1.27 =item B<shallow_retry_count>
726    
727 fanzago 1.37 Number of time shallow resubmission the Grid will try: resubmissions are tried B<only> if the job aborted B<before> start. So you are guaranteed that your jobs run strictly once.
728 slacapra 1.27
729 slacapra 1.30 =item B<maxtarballsize>
730    
731     Maximum size of tar-ball in Mb. If bigger, an error will be generated. The actual limit is that on the RB input sandbox. Default is 9.5 Mb (sandbox limit is 10 Mb)
732    
733 spiga 1.55 =item B<skipwmsauth>
734    
735 ewv 1.64 Temporary useful parameter to allow the WMSAuthorisation handling. Specifying I<skipwmsauth> = 1 the pyopenssl problmes will disappear. It is needed working on gLite UI outside of CERN.
736 spiga 1.55
737 slacapra 1.6 =back
738    
739 spiga 1.55 B<[LSF]> or B<[CAF]>
740 slacapra 1.46
741     =over 4
742    
743     =item B<queue>
744    
745 ewv 1.52 The LSF queue you want to use: if none, the default one will be used. For CAF, the proper queue will be automatically selected.
746 slacapra 1.46
747     =item B<resource>
748    
749     The resources to be used within a LSF queue. Again, for CAF, the right one is selected.
750    
751     =back
752    
753 nsmirnov 1.1 =head1 FILES
754    
755 slacapra 1.6 I<crab> uses a configuration file I<crab.cfg> which contains configuration parameters. This file is written in the INI-style. The default filename can be changed by the I<-cfg> option.
756 nsmirnov 1.1
757 slacapra 1.6 I<crab> creates by default a working directory 'crab_0_E<lt>dateE<gt>_E<lt>timeE<gt>'
758 nsmirnov 1.1
759     I<crab> saves all command lines in the file I<crab.history>.
760    
761     =head1 HISTORY
762    
763 ewv 1.52 B<CRAB> is a tool for the CMS analysis on the Grid environment. It is based on the ideas from CMSprod, a production tool originally implemented by Nikolai Smirnov.
764 nsmirnov 1.1
765     =head1 AUTHORS
766    
767     """
768     author_string = '\n'
769     for auth in common.prog_authors:
770     #author = auth[0] + ' (' + auth[2] + ')' + ' E<lt>'+auth[1]+'E<gt>,\n'
771     author = auth[0] + ' E<lt>' + auth[1] +'E<gt>,\n'
772     author_string = author_string + author
773     pass
774     help_string = help_string + author_string[:-2] + '.'\
775     """
776    
777     =cut
778 slacapra 1.19 """
779 nsmirnov 1.1
780     pod = tempfile.mktemp()+'.pod'
781     pod_file = open(pod, 'w')
782     pod_file.write(help_string)
783     pod_file.close()
784    
785     if option == 'man':
786     man = tempfile.mktemp()
787     pod2man = 'pod2man --center=" " --release=" " '+pod+' >'+man
788     os.system(pod2man)
789     os.system('man '+man)
790     pass
791     elif option == 'tex':
792     fname = common.prog_name+'-v'+common.prog_version_str
793     tex0 = tempfile.mktemp()+'.tex'
794     pod2tex = 'pod2latex -full -out '+tex0+' '+pod
795     os.system(pod2tex)
796     tex = fname+'.tex'
797     tex_old = open(tex0, 'r')
798     tex_new = open(tex, 'w')
799     for s in tex_old.readlines():
800     if string.find(s, '\\begin{document}') >= 0:
801     tex_new.write('\\title{'+common.prog_name+'\\\\'+
802     '(Version '+common.prog_version_str+')}\n')
803     tex_new.write('\\author{\n')
804     for auth in common.prog_authors:
805     tex_new.write(' '+auth[0]+
806     '\\thanks{'+auth[1]+'} \\\\\n')
807     tex_new.write('}\n')
808     tex_new.write('\\date{}\n')
809     elif string.find(s, '\\tableofcontents') >= 0:
810     tex_new.write('\\maketitle\n')
811     continue
812     elif string.find(s, '\\clearpage') >= 0:
813     continue
814     tex_new.write(s)
815     tex_old.close()
816     tex_new.close()
817     print 'See '+tex
818     pass
819     elif option == 'html':
820     fname = common.prog_name+'-v'+common.prog_version_str+'.html'
821     pod2html = 'pod2html --title='+common.prog_name+\
822     ' --infile='+pod+' --outfile='+fname
823     os.system(pod2html)
824     print 'See '+fname
825     pass
826 slacapra 1.33 elif option == 'txt':
827     fname = common.prog_name+'-v'+common.prog_version_str+'.txt'
828     pod2text = 'pod2text '+pod+' '+fname
829     os.system(pod2text)
830     print 'See '+fname
831     pass
832 nsmirnov 1.1
833     sys.exit(0)