root/cvsroot/COMP/CRAB/python/crab_help.py
Revision: 1.135
Committed: Wed Jan 20 17:59:10 2010 UTC (15 years, 3 months ago) by spiga
Content type: text/x-python
Branch: MAIN
CVS Tags: CRAB_2_7_1, CRAB_2_7_1_pre12, CRAB_2_7_1_pre11, CRAB_2_7_1_pre10, CRAB_2_7_1_pre9, CRAB_2_7_1_pre8, CRAB_2_7_1_pre6, CRAB_2_7_1_pre5, CRAB_2_7_1_wmbs_pre4, CRAB_2_7_1_pre4
Branch point for: LumiMask, CRAB_2_7_1_branch
Changes since 1.134: +2 -1 lines
Log Message:
document a bit better cleanCache


###########################################################################
#
#   H E L P   F U N C T I O N S
#
###########################################################################

import common

import sys, os, string

import tempfile

###########################################################################
def usage():
    usa_string = common.prog_name + """ [options]

The most useful general options (use '-h' to get complete help):

 -create                                      -- Create all the jobs.
 -submit n                                    -- Submit the first n available jobs. Default is all.
 -status                                      -- Check the status of all jobs.
 -getoutput|-get [range]                      -- Get back the output of all jobs; if a range is given, only of the selected jobs.
 -extend                                      -- Extend an existing task to run on new fileblocks, if any.
 -publish                                     -- After -getoutput, publish the user data in a local DBS instance.
 -checkPublication [dbs_url datasetpath]      -- Check if a dataset is published in a DBS.
 -kill [range]                                -- Kill submitted jobs.
 -resubmit [range]                            -- Resubmit killed/aborted/retrieved jobs.
 -forceResubmit [range]                       -- Resubmit jobs regardless of their status.
 -copyData [range [dest_se or dest_endpoint]] -- Copy the produced output, already stored on a remote SE, either locally
                                                 (to the crab_working_dir/res dir) or to another remote SE.
 -renewCredential                             -- Renew the credential on the server.
 -clean                                       -- Gracefully clean up the directory of a task.
 -match|-testJdl [range]                      -- Check if resources compatible with the JDL exist.
 -report                                      -- Print a short report about the task.
 -list [range]                                -- Show technical job details.
 -postMortem [range]                          -- Provide a file with information useful for post-mortem analysis of the jobs.
 -printId [range]                             -- Print the job SID or the Task Unique ID when using the server.
 -createJdl [range]                           -- Provide files with a complete Job Description (JDL).
 -validateCfg [fname]                         -- Parse the ParameterSet using the framework's Python API.
 -cleanCache                                  -- Clean the SiteDB and CRAB caches.
 -continue|-c [dir]                           -- Apply the command to the task stored in [dir].
 -h [format]                                  -- Detailed help. Formats: man (default), tex, html, txt.
 -cfg fname                                   -- Configuration file name. Default is 'crab.cfg'.
 -debug N                                     -- Set the verbosity level to N.
 -v                                           -- Print version and exit.

"range" has the syntax "n,m,l-p", which corresponds to [n,m,l,l+1,...,p-1,p] and all possible combinations.

Example:
  crab -create -submit 1
"""
    print usa_string
    sys.exit(2)

###########################################################################
def help(option='man'):
    help_string = """
=pod

=head1 NAME

B<CRAB>: B<C>ms B<R>emote B<A>nalysis B<B>uilder

"""+common.prog_name+""" version: """+common.prog_version_str+"""

This tool B<must> be used from a User Interface, and the user is expected to
have a valid Grid certificate.

=head1 SYNOPSIS

B<"""+common.prog_name+"""> [I<options>] [I<command>]

=head1 DESCRIPTION

CRAB is a Python program intended to simplify the process of creating and submitting CMS analysis jobs to the Grid environment.

Parameters for CRAB usage and configuration are provided by the user via the configuration file B<crab.cfg>.

CRAB generates scripts and additional data files for each job. The produced scripts are submitted directly to the Grid. CRAB makes use of BossLite to interface to the Grid scheduler, as well as for logging and bookkeeping.

CRAB supports any CMSSW-based executable, with any modules/libraries, including user-provided ones, and deals with the output produced by the executable. CRAB provides an interface to the CMS data discovery services (DBS and DLS), which are completely hidden from the final user. It also splits a task (such as analyzing a whole dataset) into smaller jobs, according to user requirements.

CRAB can be used in two ways: StandAlone and with a Server.
The StandAlone mode is suited for small tasks, of the order of 100 jobs: it submits the jobs directly to the scheduler, and these jobs are under user responsibility.
The Server mode is suited for larger tasks: the jobs are prepared locally and then passed to a dedicated CRAB server, which interacts with the scheduler on behalf of the user and provides additional services, such as automatic resubmission, status caching, output retrieval, and more.
The CRAB commands are exactly the same in both cases.

The CRAB web page is available at

I<https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCrab>

=head1 HOW TO RUN CRAB FOR THE IMPATIENT USER

Please read all the way through in any case!

Source B<crab.(c)sh> from the CRAB installation area, which has been set up either by you or by someone else for you.

Modify the CRAB configuration file B<crab.cfg> according to your needs: see below for a complete list of parameters. A commented template B<crab.cfg> can be found in B<$CRABDIR/python/crab.cfg>.

 ~>crab -create
   create all jobs (no submission!)

 ~>crab -submit 2 -continue [ui_working_dir]
   submit 2 jobs, the ones already created (-continue)

 ~>crab -create -submit 2
   create _and_ submit 2 jobs

 ~>crab -status
   check the status of all jobs

 ~>crab -getoutput
   get back the output of all jobs

 ~>crab -publish
   publish all user outputs in the DBS specified in crab.cfg (dbs_url_for_publication) or passed as an argument to this option

=head1 RUNNING CMSSW WITH CRAB

=over 4

=item B<A)>

Develop your code in your CMSSW working area. Do anything needed to run your executable interactively, including the setup of the run-time environment (I<eval `scramv1 runtime -sh|csh`>), a suitable I<ParameterSet>, etc. It may seem silly, but B<be extra sure that you actually did compile your code> (I<scramv1 b>).

=item B<B)>

Source B<crab.(c)sh> from the CRAB installation area, which has been set up either by you or by someone else for you. Modify the CRAB configuration file B<crab.cfg> according to your needs: see below for a complete list.

The most important parameters are the following (see below for a complete description of each parameter):

=item B<Mandatory!>

=over 6

=item B<[CMSSW]> section: datasetpath, pset, splitting parameters, output_file

=item B<[USER]> section: output handling parameters, such as return_data, copy_data etc...

=back

=item B<Run it!>

You must have a valid voms-enabled Grid proxy. See the CRAB web page for details.

=back

=head1 RUNNING MULTICRAB

MultiCRAB is a CRAB extension that allows the same analysis to be submitted to multiple datasets in one go.

The use case for multicrab is when you have analysis code that you want to run on several datasets, typically some signals plus some backgrounds (for MC studies),
or on different streams/configurations/runs for real data taking. You want to run exactly the same code, and the crab.cfg files differ only in a few keys:
certainly datasetpath, but possibly other keys as well, such as e.g. total_number_of_events, in case you want to run on all signal events but only a fraction of the background, or anything else.
So far, you would have had to create one crab.cfg for each dataset you want to access, and submit several instances of CRAB, saving the output to different locations.
Multicrab is meant to automate this procedure.
In addition to the usual crab.cfg, there is a new configuration file called multicrab.cfg. Its syntax is very similar to that of crab.cfg, namely

 [SECTION] <crab.cfg Section>.Key=Value

Please note that it is mandatory to prepend the crab.cfg [SECTION] explicitly to each Key.
The role of multicrab.cfg is to apply modifications to the template crab.cfg, some of which are common to all tasks, and some of which are task specific.

=head2 So there are two sections:

=over 2

=item B<[COMMON]>

section: applies to all tasks, and is fully equivalent to modifying the template crab.cfg directly

=item B<[DATASET]>

section: there can be an arbitrary number of these sections, one for each dataset you want to run. The names are free (except COMMON and MULTICRAB); each will be used as the ui_working_dir for the task as well as appended to the user_remote_dir in case of output copy to a remote SE. So the task corresponding to, say, section [SIGNAL] will be placed in directory SIGNAL, and the output will be put in /SIGNAL/, i.e. SIGNAL will be added as the last subdirectory of the user_remote_dir.

=back
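
As an illustrative sketch (section names, dataset paths and values here are hypothetical), a minimal multicrab.cfg could look like:

 [COMMON]
 CMSSW.total_number_of_events = 10000

 [SIGNAL]
 CMSSW.datasetpath = /MySignal/MyProcessing/RECO

 [BACKGROUND]
 CMSSW.datasetpath            = /MyBackground/MyProcessing/RECO
 CMSSW.total_number_of_events = 1000

This would create two tasks, in directories SIGNAL and BACKGROUND, both derived from the same template crab.cfg.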

For further details please visit

I<https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideMultiCrab>

=head1 HOW TO RUN ON CONDOR-G

The B<Condor-G> mode for B<CRAB> is a special submission mode next to the standard Resource Broker submission. It is designed to submit jobs directly to a site, without using the Resource Broker.

Due to the nature of B<Condor-G> submission, the B<Condor-G> mode is restricted to OSG sites within the CMS Grid, currently the seven US T2 sites: Florida(ufl.edu), Nebraska(unl.edu), San Diego(ucsd.edu), Purdue(purdue.edu), Wisconsin(wisc.edu), Caltech(ultralight.org), MIT(mit.edu).

=head2 B<Requirements:>

=over 2

=item installed and running local Condor scheduler

(either installed by the local sysadmin or self-installed using the VDT user interface: http://www.uscms.org/SoftwareComputing/UserComputing/Tutorials/vdt.html)

=item locally available LCG or OSG UI installation

for authentication via Grid certificate proxies ("voms-proxy-init -voms cms" should result in a valid proxy)

=item the environment variable GRID_WL_LOCATION set to the edg directory of the local LCG or OSG UI installation

=back

=head2 B<What the Condor-G mode can do:>

=over 2

=item submission directly to multiple OSG sites,

provided the requested dataset is published correctly by the site in the local and global services.
Previous restrictions on submitting only to a single site have been removed. SE and CE whitelisting
and blacklisting work as in the other modes.

=back

=head2 B<What the Condor-G mode cannot do:>

=over 2

=item submit jobs if no Condor scheduler is running on the submission machine

=item submit jobs if the local Condor installation does not provide Condor-G capabilities

=item submit jobs to an LCG site

=item support Grid certificate proxy renewal via the myproxy service

=back

=head2 B<CRAB configuration for Condor-G mode:>

The CRAB configuration for the Condor-G mode only requires one change in crab.cfg:

=over 2

=item select the condor_g scheduler:

 scheduler = condor_g

=back

=head1 HOW TO RUN ON NORDUGRID ARC

The ARC scheduler can be used to submit jobs to sites running the NorduGrid
ARC grid middleware. To use it you will need to have the ARC client
installed.

=head2 B<CRAB configuration for ARC mode:>

The ARC scheduler requires some changes to crab.cfg:

=over 2

=item B<scheduler:>

Select the ARC scheduler:

 scheduler = arc

=item B<requirements>, B<additional_jdl_parameters:>

Use xrsl code instead of jdl for these parameters.

=item B<max_cpu_time>, B<max_wall_clock_time:>

For the parameters max_cpu_time and max_wall_clock_time, you can use
units, e.g. "72 hours" or "3 days", just like with the xrsl attributes
cpuTime and wallTime. If no unit is given, minutes are assumed by default.

=back

=head2 B<CRAB Commands:>

Most CRAB commands behave approximately the same with the ARC scheduler, with only some minor differences:

=over 2

=item B<*> B<-printJdl|-createJdl> will print xrsl code instead of jdl.

=back

=head1 COMMANDS

=over 4

=item B<-create>

Create the jobs: from version 1_3_0 onward it is only possible to create all jobs at once.
The maximum number of jobs depends on the dataset and on the splitting directives. This set of identical jobs accessing the same dataset is defined as a task.
This command creates a directory whose default name is I<crab_0_date_time> (it can be changed via the ui_working_dir parameter, see below). Everything needed to submit your jobs is placed inside this directory, and the output of your jobs (once finished) will be placed there as well (see below). Do not delete this directory by hand: rather use -clean (see below).
See also I<-continue>.

=item B<-submit [range]>

Submit n jobs: 'n' is either a positive integer, 'all' (the default), or a [range].
If 'n' is passed as an argument, the first 'n' suitable jobs will be submitted. Please note that this behaviour differs from the other commands, where -command N means apply the command to job N, not to the first N jobs. If a [range] is passed, the selected jobs will be submitted.
This option may be used in conjunction with -create (to create and submit immediately) or with -continue (which is assumed by default) to submit previously created jobs. Failure to do so will stop CRAB and generate an error message. See also I<-continue>.

=item B<-continue [dir] | -c [dir]>

Apply the action to the task stored in directory [dir]. If the task directory has the standard name (crab_0_date_time), the most recent one is assumed. Any other directory must be specified explicitly.
Basically all commands (except -create) need -continue, so it is automatically assumed; the standard task directory is used in that case.

=item B<-status [v|verbose]>

Check the status of the jobs, in all states. With the server, the full status, including application and wrapper exit codes, is available as soon as the jobs end. In StandAlone mode it is necessary to retrieve (-get) the job output first. With B<v|verbose> some more information is displayed.

=item B<-getoutput|-get [range]>

Retrieve the output declared by the user via the output sandbox. By default the output will be put in the task working directory, under the I<res> subdirectory. This can be changed via config parameters. B<Be extra sure that you have enough free space>. From version 2_3_x onward, the available free space is checked in advance. See I<range> below for syntax.

=item B<-publish>

Publish user output in a local DBS instance after the retrieval of the output. By default -publish uses the dbs_url_for_publication specified in the crab.cfg file; otherwise you can supply it as an argument to this option.
Warnings about publication:

CRAB publishes only EDM files (those written in the <File> tag of the FJR).

By default the publication of files containing 0 events is disabled. If you want to enable it, set the parameter [USER].publish_zero_event=1 in crab.cfg.

CRAB publishes multiple EDM files in the same USER dataset if they are produced by a job and written in the <File> tag of the FJR.

It is not possible for the user to select only one file to publish, nor to publish two files in two different USER datasets.


=item B<-checkPublication [-USER.dbs_url_for_publication=dbs_url -USER.dataset_to_check=datasetpath -debug]>

Check if a dataset is published in a DBS. This option is automatically called at the end of the publication step, but it can also be used as a standalone command. By default it reads the parameters (USER.dbs_url_for_publication and USER.dataset_to_check) from your crab.cfg. You can overwrite the crab.cfg defaults by passing these parameters as options. Using the -debug option, you will get detailed info about the files of published blocks.
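
For example, to check a dataset in a given DBS instance independently of your crab.cfg (both values below are placeholders to be replaced):

 crab -checkPublication -USER.dbs_url_for_publication=<dbs_url> -USER.dataset_to_check=<datasetpath> -debug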

=item B<-resubmit [range]>

Resubmit jobs which have been previously submitted and are either I<killed> or I<aborted>. See I<range> below for syntax.

=item B<-forceResubmit [range]>

Same as -resubmit, but without any check of the actual status of the job: please use with caution, since you can run into problems if both the original job and the resubmitted one actually run and try to write the output to a SE. This command is meant to be used when killing is not possible or not working, but you know that the job failed or will fail. See I<range> below for syntax.

=item B<-extend>

Create new jobs for an existing task, checking if new blocks have been published for the given dataset.

=item B<-kill [range]>

Kill (cancel) jobs which have been submitted to the scheduler. A range B<must> be given in all cases; no default value is set.

=item B<-copyData [range -dest_se=the official SE name or -dest_endpoint=the complete endpoint of the remote SE]>

This option can be used only if your output has been previously copied by CRAB to a remote SE.
By default copyData copies your output from the remote SE locally to the current CRAB working directory (under res). Otherwise you can copy the output from the remote SE to another one, specifying either -dest_se=<the remote SE official name> or -dest_endpoint=<the complete endpoint of the remote SE>. If dest_se is used, CRAB finds the correct path where the output can be stored.

 Example: crab -copyData                                                   --> output copied to the crab_working_dir/res directory
          crab -copyData -dest_se=T2_IT_Legnaro                            --> output copied to the Legnaro SE, directory discovered by CRAB
          crab -copyData -dest_endpoint=srm://<se_name>:8443/xxx/yyyy/zzzz --> output copied to the SE <se_name>, under the
                                                                               /xxx/yyyy/zzzz directory.

=item B<-renewCredential>

When using the server modality, this command allows you to delegate a valid credential (proxy/token) to the server associated with the task.

=item B<-match|-testJdl [range]>

Check if the job can find compatible resources. It is the equivalent of doing I<edg-job-list-match> on edg.

=item B<-printId [range]>

Just print the job identifier, which can be the SID (Grid job identifier) of the job(s), the taskId if you are using CRAB with the server, or the local scheduler Id. If [range] is "full", the SIDs of all the jobs are printed, also in the case of submission with the server.

=item B<-createJdl [range]>

Collect the full Job Description into a file located under the share directory. The file base name is File- .

=item B<-postMortem [range]>

Try to collect more information about the job from the scheduler's point of view.

=item B<-list [range]>

Dump technical information about jobs: for developers only.

=item B<-report>

Print a short report about the task, namely the total number of events and files processed/requested/available, the name of the dataset path, a summary of the status of the jobs, and so on. A summary file of the runs and luminosity sections processed is written to res/. In principle -report should generate all the info needed for an analysis. Work in progress.

=item B<-clean [dir]>

Clean up (i.e. erase) the task working directory after a check whether there are still running jobs. If there are, you are notified and asked to kill them or retrieve their output. B<Warning>: this will possibly also delete the output produced by the task (if any)!

=item B<-cleanCache>

Clean up (i.e. erase) the SiteDB and CRAB cache content.

=item B<-help [format] | -h [format]>

This help. It can be produced in four different I<formats>: I<man> (default), I<tex>, I<html> and I<txt>.

=item B<-v>

Print the version and exit.

=item B<range>

The range to be used in many of the above commands has the following syntax: it is a comma separated list of job ranges, each of which may be a single job number or a job range of the form first-last.

 Example: 1,3-5,8 = {1,3,4,5,8}

=back

=head1 OPTIONS

=over 4

=item B<-cfg [file]>

Configuration file name. Default is B<crab.cfg>.

=item B<-debug [level]>

Set the debug level: a higher number means higher verbosity.

=back

=head1 CONFIGURATION PARAMETERS

All the parameters described in this section can be defined in the CRAB configuration file. The configuration file has different sections: [CRAB], [USER], etc. Each parameter must be defined in its proper section. An alternative way to pass a config parameter to CRAB is via the command line interface; the syntax is: crab -SECTION.key value . For example I<crab -USER.outputdir MyDirWithFullPath> .
The parameters passed to CRAB at the creation step are stored, so they cannot be changed by changing the original crab.cfg afterwards. On the other hand, the task is thus protected from any accidental change. If you want to change any parameter, this requires the creation of a new task.
Mandatory parameters are flagged with a *.

B<[CRAB]>

=over 4

=item B<jobtype *>

The type of the job to be executed: I<cmssw> jobtypes are supported.

=item B<scheduler *>

The scheduler to be used: I<glitecoll> is the most efficient grid scheduler and should be used. Other choices are I<glite>, the same as I<glitecoll> but without bulk submission (and so slower), I<condor_g> (see the specific paragraph), and I<edg>, the former Grid scheduler, which will be dismissed at some point in the future. In addition, there is an I<arc> scheduler to be used with the NorduGrid ARC middleware.
From version 210, local schedulers are also supported, for the time being only at CERN: I<LSF> is the standard CERN local scheduler, and I<CAF> is LSF dedicated to the CERN Analysis Facilities.

=item B<use_server>

To use the server for job handling (recommended): 0=no (default), 1=yes. The server to be used will be found automatically from a list of available ones; it can also be specified explicitly by using I<server_name> (see below).

=item B<server_name>

To use the CRAB-server support you need to fill this key with the server name as <Server_DOMAIN> (e.g. cnaf, fnal). If this is set, I<use_server> is set to true automatically.
If I<server_name=None> crab works in standalone mode, the same as using I<use_server=0> and no I<server_name>.
The servers available to users can be found on the CRAB web page.
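
As a sketch, a typical [CRAB] section using the server could look like the following (whether I<glitecoll> is the right scheduler depends on your setup):

 [CRAB]
 jobtype    = cmssw
 scheduler  = glitecoll
 use_server = 1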

=back

B<[CMSSW]>

=over 4

=item B<datasetpath *>

The path of the processed or analysis dataset as defined in DBS. It comes in the format I</PrimaryDataset/DataTier/Process[/OptionalADS]>. If no input is needed, I<None> must be specified. When running on an analysis dataset, the job splitting must be specified by luminosity block rather than by event. Analysis datasets are only treated accurately on a lumi-by-lumi level with CMSSW 3_1_x and later.

=item B<runselection *>

Within a dataset you can restrict yourself to a specific run number or run number range. For example runselection=XYZ or runselection=XYZ1-XYZ2 .

=item B<use_parent *>

Within a dataset you can ask to run over the related parent files too. E.g., this will give you access to the RAW data while running over a RECO sample. With use_parent=1, CRAB determines the parent files from DBS and adds secondaryFileNames = cms.untracked.vstring( <LIST of parent FIles> ) to the pool source section of your parameter set.

=item B<pset *>

The python ParameterSet to be used.

=item B<pycfg_params *>

These parameters are passed to the python config file, as explained in https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideAboutPythonConfigFile#Passing_Command_Line_Arguments_T

=item I<Of the following three parameters exactly two must be used, otherwise CRAB will complain.>

=item B<total_number_of_events *>

The number of events to be processed. To access all available events, use I<-1>. Of course, the latter option is not viable in case of no input: in that case, the total number of events will be used to split the task into jobs, together with I<events_per_job>.

=item B<events_per_job *>

The number of events to be accessed by each job. Since a job cannot cross the boundary of a fileblock, the actual number of events per job might not be exactly what you asked for. It can also be used with no input.

=item B<total_number_of_lumis *>

The number of luminosity blocks to be processed. This option is only valid when using analysis datasets. Since a job cannot access less than a whole file, the actual number of lumis per job may be more than you asked for. Two of I<total_number_of_lumis>, I<lumis_per_job>, and I<number_of_jobs> must be supplied to run on an analysis dataset.

=item B<lumis_per_job *>

The number of luminosity blocks to be accessed by each job. This option is only valid when using analysis datasets. Since a job cannot access less than a whole file, the actual number of lumis per job may be more than you asked for.

=item B<number_of_jobs *>

Define the number of jobs to be run for the task. The number of events for each job is computed taking into account the total number of events required as well as the granularity of EventCollections. It can also be used with no input.
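
As an example, an event-based splitting sketch (the numbers are illustrative) which creates 100 jobs of about 1000 events each:

 [CMSSW]
 total_number_of_events = 100000
 number_of_jobs         = 100

The same pattern applies to the lumi-based parameters when running on an analysis dataset.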

=item B<split_by_run *>

To activate run-based splitting (each job will access a different run), use I<split_by_run>=1. You can also define I<number_of_jobs> and/or I<runselection>. NOTE: run-based splitting combined with event-based splitting is not yet available.

=item B<output_file *>

The output files produced by your application (comma separated list). From CRAB 2_2_2 onward, if TFileService is defined in the user PSet, the corresponding output file is automatically added to the list of output files. The user can avoid this by setting B<skip_TFileService_output> = 1 (default is 0 == file included). The EDM output produced via PoolOutputModule can be automatically added by setting B<get_edm_output> = 1 (default is 0 == no). B<Warning>: it is not allowed to have a PoolOutputSource and not save it somewhere, since it is a waste of resources on the WN. In case you really want to do that, and if you really know what you are doing (hint: you don't!), you can use I<ignore_edm_output=1>.

=item B<skip_TFileService_output>

Force CRAB to skip the inclusion of the file produced by TFileService in the list of output files. Default is I<0>, i.e. the file is included.

=item B<get_edm_output>

Force CRAB to add the EDM output file, as defined in the PSet's PoolOutputModule (if any), to the list of output files. Default is 0 (== no inclusion).

=item B<increment_seeds>

Specifies a comma separated list of seeds to increment from job to job. The initial value is taken
from the CMSSW config file. I<increment_seeds=sourceSeed,g4SimHits> will set sourceSeed=11,12,13 and g4SimHits=21,22,23 on
subsequent jobs if the values of the two seeds are 10 and 20 in the CMSSW config file.

See also I<preserve_seeds>. Seeds not listed in I<increment_seeds> or I<preserve_seeds> are randomly set for each job.

=item B<preserve_seeds>

Specifies a comma separated list of seeds which CRAB will keep at their values in the user
CMSSW config file. I<preserve_seeds=sourceSeed,g4SimHits> will leave the Pythia and GEANT seeds the same for every job.

See also I<increment_seeds>. Seeds not listed in I<increment_seeds> or I<preserve_seeds> are randomly set for each job.

=item B<first_lumi>

Relevant only for Monte Carlo production, for which it defaults to 1. The first job will generate events with this lumi block number; subsequent jobs will
increment the lumi block number. Setting this number to 0 (not recommended) means CMSSW will not be able to read multiple such files, as they
will all have the same run, lumi and event numbers. This check in CMSSW can be bypassed by setting
I<process.source.duplicateCheckMode = cms.untracked.string('noDuplicateCheck')> in the input source, should you need to
read files produced without setting first_run (in old versions of CRAB) or first_lumi.

=item B<generator>

Name of the generator your MC job is using. Some generators require CRAB to skip events, others do not.
Possible values are pythia (default), comphep, lhe, and madgraph. This will skip events in your generator input file.

=item B<executable>

The name of the executable to be run on the remote WN. The default is cmsRun. The executable is either found in the release area of the WN, or has been built in the user working area on the UI and is (automatically) shipped to the WN. If you want to run a script (which might internally call I<cmsRun>), use B<USER.script_exe> instead.

=item I<DBS and DLS parameters:>

=item B<dbs_url>

The URL of the DBS query page. For experts only.

=item B<show_prod>

To enable CRAB to show data hosted on Tier-1 sites, specify I<show_prod> = 1. By default those data are masked.

=item B<subscribed>

By setting the flag I<subscribed> = 1, only the replicas that are subscribed to their site are considered. The default is to return all replicas. The intended use of this flag is to avoid sending jobs to sites based on data that is being moved or deleted (and thus not subscribed).

=item B<no_block_boundary>

To remove fileblock boundaries in job splitting, specify I<no_block_boundary> = 1.

=back

B<[USER]>

=over 4

=item B<additional_input_files>

Any additional input files you want to ship to the WN: comma separated list. IMPORTANT NOTE: they will be placed in the WN working dir, not in ${CMS_SEARCH_PATH}. Specific files required by the CMSSW application must be placed in the local data directory, which will be shipped by CRAB itself automatically. You do not need to specify the I<ParameterSet> you are using, which is included automatically. Wildcards are allowed.

=item B<script_exe>

A user script that will be run on the WN (instead of the default cmsRun). It is up to the user to set up the script properly to run in the WN environment. CRAB guarantees that the CMSSW environment is set up (e.g. scram is in the path) and that the modified pset.py will be placed in the working directory, with the name CMSSW.py . The user must ensure that a job report named crab_fjr.xml will be written. This can be guaranteed by passing the arguments "-j crab_fjr.xml" to cmsRun in the script. The script itself will be added automatically to the input sandbox, so the user MUST NOT add it to B<USER.additional_input_files>.
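
A minimal sketch of such a script, assuming the job only needs to run cmsRun on the CRAB-provided parameter set, could be:

 #!/bin/sh
 # The CRAB wrapper has already set up the CMSSW environment.
 # CMSSW.py is the modified pset placed by CRAB in the working directory;
 # "-j crab_fjr.xml" makes cmsRun write the job report CRAB requires.
 cmsRun -j crab_fjr.xml CMSSW.py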

=item B<script_arguments>

Any arguments you want to pass to the B<USER.script_exe>: comma separated list.

=item B<ui_working_dir>

Name of the working directory for the current task. By default, a name I<crab_0_(date)_(time)> is used. If this card is set, any CRAB command which requires I<-continue> also needs the name of the working directory to be specified. A special syntax is also possible, to reuse the name of the dataset provided before: I<ui_working_dir : %(dataset)s> . In this case, if e.g. the dataset is SingleMuon, the ui_working_dir will be set to SingleMuon as well.

=item B<thresholdLevel>

This has to be a value between 0 and 100, indicating the percentage of task completeness (jobs in an ended state are complete, even if failed). The server will notify the user by e-mail (see the field B<eMail>) when the task reaches the specified threshold. Works only when using the server.

=item B<eMail>

The server will notify the specified e-mail address when the task reaches the specified B<thresholdLevel>. A notification is also sent when the task reaches 100% completeness. This field can also be a list of e-mail addresses: "B<eMail = user1@cern.ch, user2@cern.ch>". Works only when using the server.
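
For example (threshold and addresses are illustrative):

 [USER]
 thresholdLevel = 80
 eMail          = user1@cern.ch, user2@cern.ch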

=item B<client>

Specify the client to be used to interact with the server specified in B<CRAB.server_name>. The default is the value in the server configuration.

=item B<return_data *>

The output produced by the executable on the WN is returned (via the output sandbox) to the UI by issuing the I<-getoutput> command. B<Warning>: this option should be used only for I<small> output, say less than 10MB, since the sandbox cannot accommodate big files. Depending on the Resource Broker used, a size limit on the output sandbox may be applied: bigger files will be truncated. To be used as an alternative to I<copy_data>.

=item B<outputdir>

To be used together with I<return_data>. Directory on the user interface where to store the output. A full path is mandatory, "~/" is not allowed; the default location of the returned output is ui_working_dir/res .

=item B<logdir>

To be used together with I<return_data>. Directory on the user interface where to store the standard output and error. A full path is mandatory, "~/" is not allowed; the default location of the returned output is ui_working_dir/res .

=item B<copy_data *>

The output (only that produced by the executable, not the stdout and stderr) is copied to a Storage Element of your choice (see below). To be used as an alternative to I<return_data> and recommended in case of large output.

=item B<storage_element>

To be used with <copy_data>=1.
If you want to copy the output of your analysis to an official CMS Tier-2 or Tier-3, you have to write the CMS Site Name of the site, as written in SiteDB https://cmsweb.cern.ch/sitedb/reports/showReport?reportid=se_cmsname_map.ini (e.g. T2_IT_Legnaro). You also have to specify the <user_remote_dir> (see below).

If you want to copy the output to a non-official CMS remote site, you have to specify the complete storage element name (e.g. se.xxx.infn.it). You also have to specify the <storage_path> and the <storage_port>, if you do not use the default one (see below).

=item B<user_remote_dir>

To be used with <copy_data>=1 and <storage_element> set to an official CMS site.
This is the directory (or tree of directories) where your output will be stored. This directory will be created under the mountpoint (which will be discovered by CRAB if an official CMS Storage Element is used, or taken from the crab.cfg as specified by the user). B<NOTE>: this part of the path will be used as the logical file name of your files in the case of publication without using an official CMS Storage Element. Generally it should start with "/store".
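
Putting these keys together, a stage-out sketch for an official CMS site (site name and directory are illustrative) could be:

 [USER]
 copy_data       = 1
 storage_element = T2_IT_Legnaro
 user_remote_dir = /store/user/myname/myanalysis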

=item B<storage_path>

To be used with <copy_data>=1 and <storage_element> set to a non-official CMS site.
This is the full path of the Storage Element writeable by all, i.e. the mountpoint of the SE (e.g. /srm/managerv2?SFN=/pnfs/se.xxx.infn.it/yyy/zzz/)

=item B<storage_pool>

If you are using the CAF scheduler, you can specify the storage pool where to write your output.
The default is cmscafuser. If you do not want to use the default, you can overwrite it by specifying None.

=item B<storage_port>

To choose the storage port, specify I<storage_port> = N (default is 8443).

=item B<local_stage_out *>

This option enables the local stage out of the produced output to the "close storage element" where the job is running, in case the remote copy to the Storage Element chosen by the user in the crab.cfg fails. It has to be used together with the copy_data option. In the case of a backup copy, the publication of data is forbidden. Set I<local_stage_out> = 1.

=item B<publish_data *>

To be used with <copy_data>=1.
To publish your produced output in a local instance of DBS, set publish_data = 1.
All the details about how to use this functionality are written in https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCrabForPublication
N.B. 1) if you are using an official CMS site to store data, the remote dir will not be considered: the directory where data will be stored is decided by CRAB, following the CMS policy, in order to be able to re-read the published data.
2) if you are using a non-official CMS site to store data, you have to check the <lfn>, which will be part of the logical file name of your published files, in order to be able to re-read the data.

=item B<publish_data_name>

Your produced output will be published in your local DBS with dataset name <primarydataset>/<publish_data_name>/USER

=item B<dbs_url_for_publication>

Specify the URL of your local DBS instance where CRAB has to publish the output files.
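
A publication sketch combining these keys (the dataset label is illustrative and the URL is a placeholder):

 [USER]
 copy_data               = 1
 publish_data            = 1
 publish_data_name       = MyAnalysis_v1
 dbs_url_for_publication = <your local DBS writer URL>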

=item B<publish_zero_event>

To force the publication of zero-event files, specify I<publish_zero_event> = 1.

=item B<srm_version>

To choose the srm version, specify I<srm_version> = (srmv1 or srmv2).

=item B<xml_report>

To be used to switch off the screen report during the status query, enabling the DB serialization to a file. Specifying I<xml_report> = FileName, CRAB will serialize the DB into CRAB_WORKING_DIR/share/FileName.

=item B<usenamespace>

To use the automated namespace definition (performed by CRAB), set I<usenamespace>=1. The same policy used for the stage out in case of data publication will be applied.

=item B<debug_wrapper>

To enable the higher verbosity level of the wrapper, specify I<debug_wrapper> = 1. The PSet contents before and after the CRAB manipulation will be written, together with other useful info.

=item B<deep_debug>

To be used in case of unexpected job crashes, when the stdout and stderr files are lost. By submitting the same jobs again with I<deep_debug> = 1, these files will be reported back. NOTE: it works only in standalone mode, for debugging purposes.

=item B<dontCheckSpaceLeft>

Set it to 1 to skip the check of the free space left in your working directory before attempting to get the output back. Default is 0 (=False).

=item B<check_user_remote_dir>

To avoid stage out failures, CRAB checks the remote location content at creation time. By setting I<check_user_remote_dir>=0, crab will skip this check.

=back

B<[GRID]>

=over 4

=item B<RB>

Which RB you want to use instead of the default one, as defined in the configuration of your UI. The ones available for CMS are I<CERN> and I<CNAF>. They are actually identical, being a collection of all WMSes available for CMS: the configuration files needed to change the broker will be automatically downloaded from the CRAB web page and used.
You can use any other RB which is available, if you provide the proper configuration files. E.g., for gLite WMS XYZ, you should provide I<glite.conf.CMS_XYZ>. These files are searched for in the current working directory and, if not found, on the CRAB web page. So, if you put your private configuration files in the working directory, they will be used, even if they are not available on the CRAB web page.
Please get in contact with the CRAB team if you wish to provide your RB or WMS as a service to the CMS community.

=item B<proxy_server>

The proxy server to which you delegate the responsibility of renewing your proxy once it expires. The default is I<myproxy.cern.ch>: change it only if you B<really> know what you are doing.

=item B<role>

The role to be set in the VOMS. See the VOMS documentation for more info.

=item B<group>

The group to be set in the VOMS. See the VOMS documentation for more info.

=item B<dont_check_proxy>

Set this if you do not want CRAB to check your proxy. The creation of the proxy (with proper length) and its delegation to a myproxy server are then your responsibility.

=item B<dont_check_myproxy>

If you want to switch off only the proxy renewal, set I<dont_check_myproxy>=1. The proxy delegation to a myproxy server is then your responsibility.

=item B<requirements>

Any other requirements to be added to the JDL. Must be written in compliance with JDL syntax (see the LCG user manual for further info). No requirement on the Computing Element must be set.

=item B<additional_jdl_parameters>

Any other parameters you want to add to the JDL file: semicolon separated list, each
item B<must> be complete, including the closing ";".

=item B<wms_service>

With this field it is also possible to specify which WMS you want to use (https://hostname:port/pathcode), where "hostname" is the WMS name, "port" generally is 7443 and "pathcode" should be something like "glite_wms_wmproxy_server".

=item B<max_cpu_time>

Maximum CPU time needed to finish one job. It will be used to select a suitable queue on the CE. Time in minutes.

=item B<max_wall_clock_time>

Same as the previous one, but with real (wall clock) time rather than CPU time.

=item B<ce_black_list>

All the CEs (Computing Elements) whose names contain the following strings (comma separated list) will not be considered for submission. Use the dns domain (e.g. fnal, cern, ifae, fzk, cnaf, lnl,....). You may use hostnames, CMS Site names (T2_DE_DESY) or substrings.

=item B<ce_white_list>

Only the CEs (Computing Elements) whose names contain the following strings (comma separated list) will be considered for submission. Use the dns domain (e.g. fnal, cern, ifae, fzk, cnaf, lnl,....). You may use hostnames, CMS Site names (T2_DE_DESY) or substrings. Please note that if the selected CE(s) do not contain the data you want to access, no submission can take place.

=item B<se_black_list>

All the SEs (Storage Elements) whose names contain the following strings (comma separated list) will not be considered for submission. It works only if a datasetpath is specified. You may use hostnames, CMS Site names (T2_DE_DESY) or substrings.

=item B<se_white_list>

Only the SEs (Storage Elements) whose names contain the following strings (comma separated list) will be considered for submission. It works only if a datasetpath is specified. Please note that if the selected SE(s) do not contain the data you want to access, no submission can take place. You may use hostnames, CMS Site names (T2_DE_DESY) or substrings.

=item B<remove_default_blacklist>

CRAB enforces the T1 Computing Elements black list. By default it is appended to the user-defined I<ce_black_list>. To remove the enforced T1 black list, set I<remove_default_blacklist>=1.

=item B<virtual_organization>

You do not want to change this: it is cms!

=item B<retry_count>

Number of times the Grid will try to resubmit your job in case of Grid-related problems.

=item B<shallow_retry_count>

Number of shallow resubmissions the Grid will try: these resubmissions are tried B<only> if the job aborted B<before> starting, so you are guaranteed that your jobs run strictly once.

=item B<maxtarballsize>

Maximum size of the tar-ball in MB. If bigger, an error will be generated. The actual limit is that on the RB input sandbox. Default is 9.5 MB (the sandbox limit is 10 MB).

=item B<skipwmsauth>

Temporarily useful parameter to handle WMSAuthorisation problems. Specifying I<skipwmsauth> = 1, the pyopenssl problems will disappear. It is needed when working on a gLite UI outside of CERN.

=back

B<[LSF]> or B<[CAF]> or B<[PBS]>

=over 4

=item B<queue>

The LSF/PBS queue you want to use: if none is given, the default one will be used. For CAF, the proper queue will be selected automatically.

=item B<resource>

The resources to be used within an LSF/PBS queue. Again, for CAF, the right one is selected.

=back

=head1 FILES

I<crab> uses a configuration file, I<crab.cfg>, which contains configuration parameters. This file is written in INI style. The default filename can be changed with the I<-cfg> option.

I<crab> creates by default a working directory 'crab_0_E<lt>dateE<gt>_E<lt>timeE<gt>'

I<crab> saves all command lines in the file I<crab.history>.

=head1 HISTORY

B<CRAB> is a tool for CMS analysis in the Grid environment. It is based on ideas from CMSprod, a production tool originally implemented by Nikolai Smirnov.

=head1 AUTHORS

"""
    # Append the author list (from common) to the POD text.
    author_string = '\n'
    for auth in common.prog_authors:
        #author = auth[0] + ' (' + auth[2] + ')' + ' E<lt>'+auth[1]+'E<gt>,\n'
        author = auth[0] + ' E<lt>' + auth[1] + 'E<gt>,\n'
        author_string = author_string + author
        pass
    help_string = help_string + author_string[:-2] + '.'\
"""

=cut
"""

    # Write the POD source to a temporary file, then convert it to the
    # requested format with the pod2* tools.
    pod = tempfile.mktemp() + '.pod'
    pod_file = open(pod, 'w')
    pod_file.write(help_string)
    pod_file.close()

    if option == 'man':
        man = tempfile.mktemp()
        pod2man = 'pod2man --center=" " --release=" " ' + pod + ' >' + man
        os.system(pod2man)
        os.system('man ' + man)
        pass
    elif option == 'tex':
        fname = common.prog_name + '-v' + common.prog_version_str
        tex0 = tempfile.mktemp() + '.tex'
        pod2tex = 'pod2latex -full -out ' + tex0 + ' ' + pod
        os.system(pod2tex)
        # Post-process the LaTeX: inject a title page with version and
        # authors, and drop the table of contents and page breaks.
        tex = fname + '.tex'
        tex_old = open(tex0, 'r')
        tex_new = open(tex, 'w')
        for s in tex_old.readlines():
            if string.find(s, '\\begin{document}') >= 0:
                tex_new.write('\\title{' + common.prog_name + '\\\\' +
                              '(Version ' + common.prog_version_str + ')}\n')
                tex_new.write('\\author{\n')
                for auth in common.prog_authors:
                    tex_new.write('   ' + auth[0] +
                                  '\\thanks{' + auth[1] + '} \\\\\n')
                tex_new.write('}\n')
                tex_new.write('\\date{}\n')
            elif string.find(s, '\\tableofcontents') >= 0:
                tex_new.write('\\maketitle\n')
                continue
            elif string.find(s, '\\clearpage') >= 0:
                continue
            tex_new.write(s)
        tex_old.close()
        tex_new.close()
        print 'See ' + tex
        pass
    elif option == 'html':
        fname = common.prog_name + '-v' + common.prog_version_str + '.html'
        pod2html = 'pod2html --title=' + common.prog_name + \
                   ' --infile=' + pod + ' --outfile=' + fname
        os.system(pod2html)
        print 'See ' + fname
        pass
    elif option == 'txt':
        fname = common.prog_name + '-v' + common.prog_version_str + '.txt'
        pod2text = 'pod2text ' + pod + ' ' + fname
        os.system(pod2text)
        print 'See ' + fname
        pass

    sys.exit(0)