ViewVC Help
View File | Revision Log | Show Annotations | Root Listing
root/cvsroot/COMP/CRAB/python/crab_help.py
Revision: 1.43
Committed: Fri Jan 4 17:30:57 2008 UTC (17 years, 3 months ago) by slacapra
Content type: text/x-python
Branch: MAIN
CVS Tags: CRAB_2_1_0_pre4, CRAB_2_1_0_pre3, CRAB_2_1_0_pre2, CRAB_2_1_0_pre1
Changes since 1.42: +3 -2 lines
Log Message:
Add support for LSF/CAF direct submission
Re-establish a correct inheritance pattern for Scheruled* classes
Start to remove some unneeded try: catch: statement and replace them with appropriate if:then:
Erase any use of cfg_param as a common block (expecially in conjuction with apmon) and replace it with the user of task DB
Several minor cleanup

File Contents

# Content
1
2 ###########################################################################
3 #
4 # H E L P F U N C T I O N S
5 #
6 ###########################################################################
7
8 import common
9
10 import sys, os, string
11
12 import tempfile
13
14 ###########################################################################
15 def usage():
16 print 'in usage()'
17 usa_string = common.prog_name + """ [options]
18
19 The most useful general options (use '-h' to get complete help):
20
21 -create -- Create all the jobs.
22 -submit n -- Submit the first n available jobs. Default is all.
23 -status -- check status of all jobs.
24 -getoutput|-get [range] -- get back the output of all jobs: if range is defined, only of selected jobs
25 -publish [dbs_url] -- after the getouput, publish the data user in a local DBS instance
26 -kill [range] -- kill submitted jobs
27 -cancelAndResubmit [range] -- kill and resubmit submitted jobs
28 -clean -- gracefully cleanup the idrectory of a task
29 -testJdl [range] -- check if resources exist which are compatible with jdl
30 -list [range] -- show technical job details
31 -postMortem [range] -- provide a file with information useful for post-mortem analysis of the jobs
32 -printId [range] -- print the job SID
33 -continue|-c [dir] -- Apply command to task stored in [dir].
34 -h [format] -- Detailed help. Formats: man (default), tex, html, txt.
35 -cfg fname -- Configuration file name. Default is 'crab.cfg'.
36 -debug N -- set the verbosity level to N
37 -v -- Print version and exit.
38
39 "range" has syntax "n,m,l-p" which correspond to [n,m,l,l+1,...,p-1,p] and all possible combination
40
41 Example:
42 crab -create -submit 1
43 """
44 print usa_string
45 sys.exit(2)
46
47 ###########################################################################
48 def help(option='man'):
49 help_string = """
50 =pod
51
52 =head1 NAME
53
54 B<CRAB>: B<C>ms B<R>emote B<A>nalysis B<B>uilder
55
56 """+common.prog_name+""" version: """+common.prog_version_str+"""
57
58 This tool B<must> be used from an User Interface and the user is supposed to
59 have a valid Grid certificate.
60
61 =head1 SYNOPSIS
62
63 B<"""+common.prog_name+"""> [I<options>] [I<command>]
64
65 =head1 DESCRIPTION
66
67 CRAB is a Python program intended to simplify the process of creation and submission into Grid environment of CMS analysis jobs.
68
69 Parameters for CRAB usage and configuration are provided by the user changing the configuration file B<crab.cfg>.
70
71 CRAB generates scripts and additional data files for each job. The produced scripts are submitted directly to the Grid. CRAB makes use of BOSS to interface to the Grid scheduler, as well as for logging and bookkeeping and eventually real time monitoring.
72
73 CRAB support any CMSSW based executable, with any modules/libraries, including the user provided one, and deals with the output produced by the executable. Up to version 1_2_1, also ORCA (and FAMOS) based executable were supported. CRAB provides an interface with CMS data discovery services (DBS and DLS), which are completely hidden to the final user. It also splits a task (such as analyzing a whole dataset) into smaller jobs, according with user requirements.
74
75 CRAB web page is available at
76
77 I<http://cmsdoc.cern.ch/cms/ccs/wm/www/Crab/>
78
79 =head1 HOW TO RUN CRAB FOR THE IMPATIENT USER
80
81 Please, read all anyway!
82
83 Source B<crab.(c)sh> from the CRAB installation area, which have been setup either by you or by someone else for you.
84
85 Modify the CRAB configuration file B<crab.cfg> according to your need: see below for a complete list: in particular set your jobtype (orca or famos) and fill the corresponding section. A template and commented B<crab.cfg> can be found on B<$CRABDIR/python/crab.cfg>
86
87 ~>crab -create
88 create all jobs (no submission!)
89
90 ~>crab -submit 2 -continue [ui_working_dir]
91 submit 2 jobs, the ones already created (-continue)
92
93 ~>crab -create -submit 2
94 create _and_ submit 2 jobs
95
96 ~>crab -status
97 check the status of all jobs
98
99 ~>crab -getoutput
100 get back the output of all jobs
101
102 ~>crab -publish
103 publish all user outputs in the DBS specified in the crab.cfg (dbs_url_for_publication) or written as argument of this option
104
105 =head1 RUNNING CMSSW WITH CRAB
106
107 =over 4
108
109 =item B<A)>
110
111 Develop your code in your CMSSW working area. Do anything which is needed to run interactively your executable, including the setup of run time environment (I<eval `scramv1 runtime -sh|csh`>), a suitable I<ParameterSet>, etc. It seems silly, but B<be extra sure that you actaully did compile your code> I<scramv1 b>.
112
113 =item B<B)>
114
115 Source B<crab.(c)sh> from the CRAB installation area, which have been setup either by you or by someone else for you. Modify the CRAB configuration file B<crab.cfg> according to your need: see below for a complete list.
116
117 The most important parameters are the following (see below for complete description of each parameter):
118
119 =item B<Mandatory!>
120
121 =over 6
122
123 =item B<[CMSSW]> section: datasetpath, pset, splitting parameters, output_file
124
125 =item B<[USER]> section: output handling parameters, such as return_data, copy_data etc...
126
127 =back
128
129 =item B<Run it!>
130
131 You must have a valid voms-enabled Grid proxy. See CRAB web page for details.
132
133 =back
134
135 =head1 HOW TO RUN ON CONDOR-G
136
137 The B<Condor-G> mode for B<CRAB> is a special submission mode next to the standard Resource Broker submission. It is designed to submit jobs directly to a site and not using the Resource Broker.
138
139 Due to the nature of this submission possibility, the B<Condor-G> mode is restricted to OSG sites within the CMS Grid, currently the 7 US T2: Florida(ufl.edu), Nebraska(unl.edu), San Diego(ucsd.edu), Purdue(purdue.edu), Wisconsin(wisc.edu), Caltech(ultralight.org), MIT(mit.edu).
140
141 =head2 B<Requirements:>
142
143 =over 2
144
145 =item installed and running local Condor scheduler
146
147 (either installed by the local Sysadmin or self-installed using the VDT user interface: http://www.uscms.org/SoftwareComputing/UserComputing/Tutorials/vdt.html)
148
149 =item locally available LCG or OSG UI installation
150
151 for authentication via Grid certificate proxies ("voms-proxy-init -voms cms" should result in valid proxy)
152
153 =item set of the environment variable EDG_WL_LOCATION to the edg directory of the local LCG or OSG UI installation
154
155 =back
156
157 =head2 B<What the Condor-G mode can do:>
158
159 =over 2
160
161 =item submission directly to a single OSG site,
162
163 the requested dataset has to be published correctly by the site in the local and global services
164
165 =back
166
167 =head2 B<What the Condor-G mode cannot do:>
168
169 =over 2
170
171 =item submit jobs if no condor scheduler is running on the submission machine
172
173 =item submit jobs if the local condor installation does not provide Condor-G capabilities
174
175 =item submit jobs to more than one site in parallel
176
177 =item submit jobs to a LCG site
178
179 =item support Grid certificate proxy renewal via the myproxy service
180
181 =back
182
183 =head2 B<CRAB configuration for Condor-G mode:>
184
185 The CRAB configuration for the Condor-G mode only requires changes in crab.cfg:
186
187 =over 2
188
189 =item select condor_g Scheduler:
190
191 scheduler = condor_g
192
193 =item select the domain for a single OSG site:
194
195 CE_white_list = "one of unl.edu,ufl.edu,ucsd.edu,wisc.edu,purdue.edu,ultralight.org,mit.edu"
196
197 =back
198
199 =head1 COMMAND
200
201 =over 4
202
203 =item B<-create>
204
205 Create the jobs: from version 1_3_0 it is only possible to create all jobs.
206 The maximum number of jobs depens on dataset and splittig directives. This set of identical jobs accessing the same dataset are defined as a task.
207 This command create a directory with default name is I<crab_0_date_time> (can be changed via ui_working_dir parameter, see below). Inside this directory it is placed whatever is needed to submit your jobs. Also the output of your jobs (once finished) will be place there (see after). Do not cancel by hand this directory: rather use -clean (see).
208 See also I<-continue>.
209
210 =item B<-submit n>
211
212 Submit n jobs: 'n' is either a positive integer or 'all'. Default is all.
213 The first 'n' suitable jobs will be submitted. This option must be used in conjunction with -create (to create and submit immediately) or with -continue, to submit previously created jobs. Failure to do so will stop CRAB and generate an error message.
214 See also I<-continue>.
215
216 =item B<-continue [dir] | -c [dir]>
217
218 Apply the action on the task stored on directory [dir]. If the task directory is the standard one (crab_0_date_time), the more recent in time is taken. Any other directory must be specified.
219 Basically all commands (but -create) need -continue, so it is automatically assumed, with exception of -submit, where it must be explicitly used. Of course, the standard task directory is used in this case.
220
221 =item B<-status>
222
223 Check the status of the jobs, in all states. If BOSS real time monitor is enabled, also some real time information are available, otherwise all the info will be available only after the output retrieval.
224
225 =item B<-getoutput|-get [range]>
226
227 Retrieve the output declared by the user via the output sandbox. By default the output will be put in task working dir under I<res> subdirectory. This can be changed via config parameters. B<Be extra sure that you have enough free space>. See I<range> below for syntax.
228
229 =item B<-publish [dbs_url]>
230
231 Publish user output in a local DBS instance after retrieving of output. By default the publish uses the dbs_url_for_publication specified in the crab.cfg file, otherwise you can write it as argument of this option.
232
233 =item B<-resubmit [range]>
234
235 Resubmit jobs which have been previously submitted and have been either I<killed> or are I<aborted>. See I<range> below for syntax.
236 The resubmit option can be used only with CRAB without server. For the server this option will be implemented as soon as possible
237
238 =item B<-kill [range]>
239
240 Kill (cancel) jobs which have been submitted to the scheduler. A range B<must> be used in all cases, no default value is set.
241
242 =item B<-testJdl [range]>
243
244 Check if the job can find compatible resources. It's equivalent of doing I<edg-job-list-match> on edg.
245
246 =item B<-printId [range]>
247
248 Just print the SID (Grid job identifier) of the job(s) or the taskId if you are using CRAB with the server
249
250 =item B<-postMortem [range]>
251
252 Produce a file (via I<edg-job-logging-info -v 2>) which might help in understanding Grid related problem for a job.
253
254 =item B<-list [range]>
255
256 Dump technical informations about jobs: for developers only.
257
258 =item B<-clean [dir]>
259
260 Clean up (i.e. erase) the task working directory after a check whether there are still running jobs. In case, you are notified and asked to kill them or retrieve their output. B<Warning> this will possibly delete also the output produced by the task (if any)!
261
262 =item B<-help [format] | -h [format]>
263
264 This help. It can be produced in three different I<format>: I<man> (default), I<tex> and I<html>.
265
266 =item B<-v>
267
268 Print the version and exit.
269
270 =item B<range>
271
272 The range to be used in many of the above commands has the following syntax. It is a comma separated list of jobs ranges, each of which may be a job number, or a job range of the form first-last.
273 Example: 1,3-5,8 = {1,3,4,5,8}
274
275 =back
276
277 =head1 OPTION
278
279 =over 4
280
281 =item B<-cfg [file]>
282
283 Configuration file name. Default is B<crab.cfg>.
284
285 =item B<-debug [level]>
286
287 Set the debug level: high number for high verbosity.
288
289 =back
290
291 =head1 CONFIGURATION PARAMETERS
292
293 All the parameter describe in this section can be defined in the CRAB configuration file. The configuration file has different sections: [CRAB], [USER], etc. Each parameter must be defined in its proper section. An alternative way to pass a config parameter to CRAB is via command line interface; the syntax is: crab -SECTION.key value . For example I<crab -USER.outputdir MyDirWithFullPath> .
294 The parameters passed to CRAB at the creation step are stored, so they cannot be changed by changing the original crab.cfg . On the other hand the task is protected from any accidental change. If you want to change any parameters, this require the creation of a new task.
295 Mandatory parameters are flagged with a *.
296
297 B<[CRAB]>
298
299 =over 4
300
301 =item B<jobtype *>
302
303 The type of the job to be executed: I<cmssw> jobtypes are supported
304
305 =item B<scheduler *>
306
307 The scheduler to be used: I<edg> is the standard Grid one. Other choice are I<glite> or I<glitecoll> for bulk submission or I<condor_g> (see specific paragraph)
308
309 =item B<server_mode>
310
311 To use the CRAB-server mode put 1 in this field. If the server_mode key is equal to 0 crab works, as usual, in standalone way.
312
313 =item B<server_name>
314
315 The name of the server that you want to use plus the path of the server storage (eg: hostname/data/cms/). For the server names that are dedicated to the user analysis you have to contact the CRAB\' developers (use hyper-news mailing list).
316
317 =back
318
319 B<[CMSSW]>
320
321 =over 4
322
323 =item B<datasetpath *>
324
325 the path of processed dataset as defined on the DBS. It comes with the format I</PrimaryDataset/DataTier/Process> . In case no input is needed I<None> must be specified.
326
327 =item B<pset *>
328
329 the ParameterSet to be used
330
331 =item I<Of the following three parameter exactly two must be used, otherwise CRAB will complain.>
332
333 =item B<total_number_of_events *>
334
335 the number of events to be processed. To access all available events, use I<-1>. Of course, the latter option is not viable in case of no input. In this case, the total number of events will be used to split the task in jobs, together with I<event_per_job>.
336
337 =item B<events_per_job*>
338
339 number of events to be accessed by each job. Since a job cannot cross the boundary of a fileblock it might be that the actual number of events per job is not exactly what you asked for. It can be used also with No input.
340
341 =item B<number_of_jobs *>
342
343 Define the number of job to be run for the task. The number of event for each job is computed taking into account the total number of events required as well as the granularity of EventCollections. Can be used also with No input.
344
345 =item B<output_file *>
346
347 the output files produced by your application (comma separated list).
348
349 =item B<pythia_seed>
350
351 If the job is pythia based, and has I<untracked uint32 sourceSeed = x> in the ParameterSet, the seed value can be changed using this parameter. Each job will have a different seed, of the type I<pythia_seed>I<$job_number> .
352
353 =item B<vtx_seed>
354
355 Seed for random number generation used for vertex smearing: to be used only if PSet has I<untracked uint32 VtxSmeared = x>. It is modified if and only if also I<pythia_seed> is set. As for I<pythia_seed> the actual seed will be of the type I<vtx_seed>I<$job_number>.
356
357 =item B<g4_seed>
358
359 Seed for randome generation of Geant4 SimHits I<untracked uint32 g4SimHits = x>. The treatment is that of I<vtx_seed> above
360
361 =item B<mix_seed>
362
363 Seed for randome generation of mixing module I<untracked uint32 mix = x>. The treatment is that of I<vtx_seed> above
364
365 =item B<first_run>
366
367 First run to be generated in a generation jobs. Relevant only for no-input workflow.
368
369 =item B<executable>
370
371 The name of the executable to be run on remote WN. The default is cmsrun. The executable is either to be found on the release area of the WN, or has been built on user working area on the UI and is (automatically) shipped to WN. If you want to run a script (which might internally call I<cmsrun>, use B<USER.script_exe> instead.
372
373 =item I<DBS and DLS parameters:>
374
375 =item B<dbs_url>
376
377 The URL of the DBS query page. For expert only.
378
379 =back
380
381 B<[USER]>
382
383 =over 4
384
385 =item B<additional_input_files>
386
387 Any additional input file you want to ship to WN: comma separated list. These are the files which might be needed by your executable: they will be placed in the WN working dir. You don\'t need to specify the I<ParameterSet> you are using, which will be included automatically. Wildcards are allowed.
388
389 =item B<script_exe>
390
391 A user script that will be run on WN (instead of default cmsrun). It\'s up to the user to setup properly the script itself to run on WN enviroment. CRAB guarantees that the CMSSW environment is setup (eg scram is in the path) and that the modified pset.cfg will be placed in the working directory, with name CMSSW.cfg . The script itself will be added automatically to the input sandbox.
392
393 =item B<ui_working_dir>
394
395 Name of the working directory for the current task. By default, a name I<crab_0_(date)_(time)> will be used. If this card is set, any CRAB command which require I<-continue> need to specify also the name of the working directory. A special syntax is also possible, to reuse the name of the dataset provided before: I<ui_working_dir : %(dataset)s> . In this case, if eg the dataset is SingleMuon, the ui_working_dir will be set to SingleMuon as well.
396
397 =item B<thresholdLevel>
398
399 This has to be a value between 0 and 100, that indicates the percentage of task completeness (jobs in a ended state are complete, even if failed). The server will notify the user by e-mail (look at the field: B<eMail>) when the task will reach the specified threshold. Works just with the server_mode = 1.
400
401 =item B<eMail>
402
403 The server will notify the specified e-mail when the task will reaches the specified B<thresholdLevel>. A notification is also sended when the task will reach the 100\% of completeness. This field can also be a list of e-mail: "B<eMail = user1@cern.ch, user2@cern.ch>". Works just with the server_mode = 1.
404
405 =item B<return_data *>
406
407 The output produced by the executable on WN is returned (via output sandbox) to the UI, by issuing the I<-getoutput> command. B<Warning>: this option should be used only for I<small> output, say less than 10MB, since the sandbox cannot accomodate big files. Depending on Resource Broker used, a size limit on output sandbox can be applied: bigger files will be truncated. To be used in alternative to I<copy_data>.
408
409 =item B<outputdir>
410
411 To be used together with I<return_data>. Directory on user interface where to store the output. Full path is mandatory, "~/" is not allowed: the defaul location of returned output is ui_working_dir/res .
412
413 =item B<logdir>
414
415 To be used together with I<return_data>. Directory on user interface where to store the standard output and error. Full path is mandatory, "~/" is not allowed: the defaul location of returned output is ui_working_dir/res .
416
417 =item B<copy_data *>
418
419 The output (only that produced by the executable, not the std-out and err) is copied to a Storage Element of your choice (see below). To be used as an alternative to I<return_data> and recomended in case of large output.
420
421 =item B<storage_element>
422
423 To be used together with I<copy_data>. Storage Element name.
424
425 =item B<storage_path>
426
427 To be used together with I<copy_data>. Path where to put output files on Storage Element. Full path is needed, and the directory must be writeable by all.
428
429 =item B<register_data>
430
431 Not more supported.
432
433 =item B<use_central_bossDB>
434
435 Use central BOSS DB instead of one for each task: the DB must be already been setup. See installation istruction for more details.
436
437 =item B<use_boss_rt>
438
439 Use BOSS real time monitoring.
440
441 =back
442
443 B<[EDG]>
444
445 =over 4
446
447 =item B<RB>
448
449 Which RB you want to use instead of the default one, as defined in the configuration of your UI. The ones available for CMS are I<CERN> and I<CNAF>: the configuration files needed to change the broker will be automatically downloaded from CRAB web page and used. If the files are already present on the working directory they will be used.
450 You can use any other RB which is available, if you provide the proper configuration files. Eg, for RB XYZ, you should provide I<edg_wl_ui.conf.CMS_XYZ> and I<edg_wl_ui_cmd_var.conf.CMS_XYZ> for EDG RB, or I<glite.conf.CMS_XYZ> for glite WMS. These files are searched for in the current working directory, and, if not found, on crab web page. So, if you put your private configuration files in the working directory, they will be used, even if they are not available on crab web page.
451 Please get in contact with crab team if you wish to provide your RB or WMS as a service to the CMS community.
452
453 =item B<proxy_server>
454
455 The proxy server to which you delegate the responsibility to renew your proxy once expired. The default is I<myproxy.cern.ch> : change only if you B<really> know what you are doing.
456
457 =item B<role>
458
459 The role to be set in the VOMS. See VOMS documentation for more info.
460
461 =item B<group>
462
463 The group to be set in the VOMS, See VOMS documentation for more info.
464
465 =item B<dont_check_proxy>
466
467 If you do not want CRAB to check your proxy. The creation of the proxy (with proper lenght), its delegation to a myproxyserver is your responsability.
468
469 =item B<requirements>
470
471 Any other requirements to be add to JDL. Must be written in compliance with JDL syntax (see LCG user manual for further info). No requirement on Computing element must be set.
472
473 =item B<additional_jdl_parameters:>
474
475 Any other parameters you want to add to jdl file: comma separated list, each
476 item B<must> be complete, including the closing ";".
477 With this field it\'s also possible to specify which WMS you want to use, adding the parameter "additional_jdl_parameters = WMProxyEndpoints ={"https://hostname:port/pathcode"};" where "hostname" is WMS\' name, the "port" generally is 7443 and the "pathcode" should be something like "glite_wms_wmproxy_server".
478
479 =item B<max_cpu_time>
480
481 Maximum CPU time needed to finish one job. It will be used to select a suitable queue on the CE. Time in minutes.
482
483 =item B<max_wall_clock_time>
484
485 Same as previous, but with real time, and not CPU one.
486
487 =item B<CE_black_list>
488
489 All the CE (Computing Element) whose name contains the following strings (comma separated list) will not be considered for submission. Use the dns domain (eg fnal, cern, ifae, fzk, cnaf, lnl,....)
490
491 =item B<CE_white_list>
492
493 Only the CE (Computing Element) whose name contains the following strings (comma separated list) will be considered for submission. Use the dns domain (eg fnal, cern, ifae, fzk, cnaf, lnl,....)
494
495 =item B<SE_black_list>
496
497 All the SE (Storage Element) whose name contains the following strings (comma separated list) will not be considered for submission.It works only if a datasetpath is specified.
498
499 =item B<SE_white_list>
500
501 Only the SE (Storage Element) whose name contains the following strings (comma separated list) will be considered for submission.It works only if a datasetpath is specified
502
503 =item B<virtual_organization>
504
505 You don\'t want to change this: it\'s cms!
506
507 =item B<retry_count>
508
509 Number of time the Grid will try to resubmit your job in case of Grid related problem.
510
511 =item B<shallow_retry_count>
512
513 Number of time shallow resubmission the Grid will try: resubmissions are tried B<only> if the job aborted B<before> start. So you are guaranteed that your jobs run strictly once.
514
515 =item B<maxtarballsize>
516
517 Maximum size of tar-ball in Mb. If bigger, an error will be generated. The actual limit is that on the RB input sandbox. Default is 9.5 Mb (sandbox limit is 10 Mb)
518
519 =back
520
521 =head1 FILES
522
523 I<crab> uses a configuration file I<crab.cfg> which contains configuration parameters. This file is written in the INI-style. The default filename can be changed by the I<-cfg> option.
524
525 I<crab> creates by default a working directory 'crab_0_E<lt>dateE<gt>_E<lt>timeE<gt>'
526
527 I<crab> saves all command lines in the file I<crab.history>.
528
529 =head1 HISTORY
530
531 B<CRAB> is a tool for the CMS analysis on the Grid environment. It is based on the ideas from CMSprod, a production tools implemented originally by Nikolai Smirnov.
532
533 =head1 AUTHORS
534
535 """
536 author_string = '\n'
537 for auth in common.prog_authors:
538 #author = auth[0] + ' (' + auth[2] + ')' + ' E<lt>'+auth[1]+'E<gt>,\n'
539 author = auth[0] + ' E<lt>' + auth[1] +'E<gt>,\n'
540 author_string = author_string + author
541 pass
542 help_string = help_string + author_string[:-2] + '.'\
543 """
544
545 =cut
546 """
547
548 pod = tempfile.mktemp()+'.pod'
549 pod_file = open(pod, 'w')
550 pod_file.write(help_string)
551 pod_file.close()
552
553 if option == 'man':
554 man = tempfile.mktemp()
555 pod2man = 'pod2man --center=" " --release=" " '+pod+' >'+man
556 os.system(pod2man)
557 os.system('man '+man)
558 pass
559 elif option == 'tex':
560 fname = common.prog_name+'-v'+common.prog_version_str
561 tex0 = tempfile.mktemp()+'.tex'
562 pod2tex = 'pod2latex -full -out '+tex0+' '+pod
563 os.system(pod2tex)
564 tex = fname+'.tex'
565 tex_old = open(tex0, 'r')
566 tex_new = open(tex, 'w')
567 for s in tex_old.readlines():
568 if string.find(s, '\\begin{document}') >= 0:
569 tex_new.write('\\title{'+common.prog_name+'\\\\'+
570 '(Version '+common.prog_version_str+')}\n')
571 tex_new.write('\\author{\n')
572 for auth in common.prog_authors:
573 tex_new.write(' '+auth[0]+
574 '\\thanks{'+auth[1]+'} \\\\\n')
575 tex_new.write('}\n')
576 tex_new.write('\\date{}\n')
577 elif string.find(s, '\\tableofcontents') >= 0:
578 tex_new.write('\\maketitle\n')
579 continue
580 elif string.find(s, '\\clearpage') >= 0:
581 continue
582 tex_new.write(s)
583 tex_old.close()
584 tex_new.close()
585 print 'See '+tex
586 pass
587 elif option == 'html':
588 fname = common.prog_name+'-v'+common.prog_version_str+'.html'
589 pod2html = 'pod2html --title='+common.prog_name+\
590 ' --infile='+pod+' --outfile='+fname
591 os.system(pod2html)
592 print 'See '+fname
593 pass
594 elif option == 'txt':
595 fname = common.prog_name+'-v'+common.prog_version_str+'.txt'
596 pod2text = 'pod2text '+pod+' '+fname
597 os.system(pod2text)
598 print 'See '+fname
599 pass
600
601 sys.exit(0)