
Comparing UserCode/TIBTIDNotes/TIBTIDIntNote/IntegrationTests.tex (file contents):
Revision 1.1 by sguazz, Tue Jan 20 11:13:35 2009 UTC vs.
Revision 1.3 by carlo, Tue Apr 28 10:52:43 2009 UTC

# Line 5 | Line 5 | The CMS tracker has been designed to sur
5   for maintenance. Its various components have been tested during the production
6   to meet stringent quality requirements. A few important problems were spotted and
7   solved.\\
8 < During the TIB integration all the operations have been monitored step by step by a chain of tests
9 < aimed at a final control of the components just after the installation and at a verification of the
10 < shell overall quality and functionality. The step by step tests are of particular importance
8 > During the TIB/TID integration all the operations have been monitored step by step by a chain of tests
9 > aimed at a final check of the components just after installation, described here, and at a verification of the
10 > overall shell quality and functionality (Section~\ref{burnin}). The step-by-step tests are of particular importance
11   because in most cases it is very difficult and in some cases even dangerous
12   to replace a single faulty component when it is embedded in a fully equipped shell.
13  
14   \subsection{Test Setup}
15   \label{sec:TestSetup}
16 < The integration activities started well before the tracker
16 > The integration started well before the tracker
17   final data acquisition hardware and software
18 < were available to the Collaboration, and thus had to rely on prototype peripherals
19 < and a developing software freezing at the integration start up time.
20 < Several further upgrades were actually implemented in time, but they all relied on the
21 < same version of the underlying framework.
18 > were available to the Collaboration. Integration tests thus had to
19 > rely on prototype DAQ hardware and peripherals and on software versions
20 > that were frozen to ensure consistent conditions over the
21 > integration period, apart from a few minor upgrades and bug
22 > fixes.
23  
24   \subsubsection{Hardware}
25   Here a brief account of the hardware used in the DAQ for the integration
# Line 40 | Line 41 | is given (see Fig.~\ref{fig:integration
41   \label{fig:integration daq}
42   \end{figure}
43  
44 < The A/D conversion is managed by FEDs~\cite{bib:fedpci},
44 > \begin{description}
45 > \item[TSC] The Trigger Sequencer Card, or TSC~\cite{bib:specs:tsc}, generates the
46 > 40~MHz clock for the entire system as well as the triggers, either
47 > internally via software or by accepting external inputs. It has up to four
48 > electrical clock/trigger outputs, enough to drive the FEDs used during the
49 > integration, and an optical clock/trigger output for the FEC.
50 > The TSC can also generate the reset and calibration signals, which are
51 > likewise encoded on the clock/trigger line.\\
52 > \item[FED] The analog-to-digital conversion is done by special PCI FEDs~\cite{bib:fedpci},
53   with electrical differential analog input, mounted on
54   PCI carrier boards and installed in an industrial PC.
55 < The opto-electrical conversion of the analog signals is done externally by a 24-channel unit.
55 > The opto-electrical conversion of the analog signals coming from the module under test
56 > is done externally by a 24-channel unit.
57   A setup containing 3 FEDs, with the opto-electrical converter, is able
58 < to readout 48 APV; this is equivalent to 12 single sided modules (4 complete strings)
59 < or 4 double sided modules (1 string plus 1 module). This is more than what is needed to
60 < test the strings during the module installation.\\
61 < The trigger and clock signals
62 < were provided by the Trigger Sequencer Card (or TSC,~\cite{bib:specs:tsc}). It has up to four
63 < electrical clock/trigger outputs, which were enough to drive the FEDs, and an optical clock/trigger
64 < output for the FEC.
65 < This card is used to generate a 40~MHz clock and provide it to the system and also to generate
66 < triggers, either internally via software or by accepting external inputs.
67 < The TSC may also generate the RESET and CALIBRATION signal, by coding them properly on the
68 < clock/trigger line.\\
69 < The FEC mezzanine used during the integration
70 < was laid on a PCI carrier, supporting trigger/clock optical input as fed by the TSC. Its
71 < output was also optical, and could be directly connected to DOHs on the DOHM. \\
72 < The peculiarity of a PCI FED with respect to a final VME FED, other than having electrical inputs
73 < instead of optical ones, is its timing system: a final FED is able to recognise which
74 < conversion count
75 < corresponds to a frame header coming from a module on each input channel separately,
76 < while a PCI FED has this capability (\textit{header finding}) implemented only on the first channel
77 < out of the eight available and assumes that the signals coming from the modules are synchronised.
78 < This makes the PLL-based time alignment procedure even more important in a setup with PCI FEDs.
79 < Furthermore, PCI FEDs are not able to insert a programmable delay on their inputs and thus
80 < it is important that the clock/trigger connections from the TSC to FEDs have all the same delay.
81 < Last, PCI FEDs cannot perform an online pedestal subtraction and zero suppression (they cannot
82 < run in Processed Raw data nor in Zero Suppression mode).
58 > to read out 48 APVs; this is equivalent to 12 single-sided modules
59 > %(four complete strings)
60 > or four double-sided module assemblies.
61 > %(one string plus one module).
62 > These figures are
63 > perfectly suited for the tests during the integration.\\
64 > Since the readout of the data from the APVs is not
65 > synchronous with the L1 trigger, a crucial capability of the FED is
66 > \textit{header finding}, i.e. the automatic identification of the
67 > analogue data stream from the APV pairs among the idle
68 > signals at its inputs. This is possible since each APV embeds the
69 > analogue data stream within a {\em digital frame} made up of a leading
70 > digital header and a trailing tick-mark.
71 > %The peculiarity of the PCI FED
72 > %with respect to the VME FED that will be used in the experiment, other than having electrical inputs
73 > %instead of optical ones, is the timing system: the VME FED is able to
74 > %perform the header finding on each input channel independently; the PCI FED has
75 > %this capability only on the first channel of the eight available
76 > %and assumes that the all input signals are synchronised.
77 > %This makes the PLL-based time alignment procedure of crucial
78 > %importance in a setup with PCI FEDs. Furthermore, PCI FEDs are not
79 > %able to insert a programmable delay on their inputs and thus
80 > %it is important that the clock/trigger connections from the TSC to
81 > %FEDs have all the same delay. Last, PCI FEDs cannot perform an online
82 > %pedestal subtraction and zero suppression (they cannot run in
83 > %Processed Raw data nor in Zero Suppression mode).
84 > \item[FEC] The special FEC used during the integration is the {\em FEC
85 >  mezzanine}, also installed in a PC on a PCI carrier. It accepts
86 >  the optical trigger/clock provided by the TSC and features an
87 >  optical output directly connected to the DOHs on the DOHM.
88 > \end{description}
89  
74 \subsubsection{Software}
75 The software used to carry out integration tests was essentially the CMS
76 tracker implementation of a more general software, named xDAQ, which is the official CMS
77 DAQ framework. This implementation is known as TrackerOnline.\\
78 The present version of TrackerOnline makes use of a set of
79 xml files which store the parameters needed by all the devices present on the structure
80 to be tested. In the final implementation of the software the use of a database is foreseen.
81
82 \begin{figure}[bth!]
83 \centering
84 \includegraphics[width=\textwidth]{Figs/fecmodulexml_2.pdf}
85 \caption{An example of data contained in fec.xml and module.xml files for one module.
86 Part of the data is not shown for simplicity.}
87 \label{fig:fecmodulexml}
88 \end{figure}
89 The information on a given setup can be divided in two sections: one describing the
90 readout hardware (and corresponding software) setup and another describing which part of
91 the tracker is going to be connected to the Control System and to FEDs.\\
92 The hardware/software setup is written in a single xml file, which was prepared once
93 and for all at the start of the integration.
94 The information regarding the readout tracker section
95 is stored in two more files, commonly named fec.xml and module.xml. The first
96 contains all data which the FEC will need to download to modules before starting the data
97 taking, that is all the values to be written in devices' $I^2C$ registers.
98 The second contains all other
99 information needed by the DAQ setup to rearrange data coming from the FEDs: from an input channel
100 based indexing to a module based one (see Fig.~\ref{fig:fecmodulexml}).\\
101 The module.xml file contains two tables: the first joins FEDs input channel indexes
102 with the respective modules' ring, CCU and $I^2C$ address (i.e.\ readout coordinates
103 with Control System coordinates). Each row corresponds to an APV
104 pair, an AOH laser, an optical fibre and a FED input.\\
105 The second table of module.xml reassembles the information on a module basis.
106 Here all the active modules are listed, with a row for every module.
107 The Control System coordinates are repeated both in module.xml and fec.xml,
108 so that they can be used as a pivot between the 3 tables.\\
109 It is clear that these files have to be archived along with raw data taken during a run, and
110 to do it automatically is the first task of an integration validation software.
111 Another required task is an automatic run logging; a fast data analysis is also
112 desirable.
113 Last, but not least, this software should easily allow the user to perform all preliminary
114 (commissioning) runs, adjusting modules' parameters accordingly.
115
116 \begin{figure}
117 \centering
118 \includegraphics[width=.9\textwidth]{Figs/integration_package.pdf}
119 \caption{Scheme of the integration software.
120 \textit{Arrows}: a relation.
121 \textit{Gears}: an application.
122 \textit{Mouse:} an interactive application.
123 \textit{Sheet}: a file.}
124 \label{fig:integration_package}
125 \end{figure}
90  
91 < Figure~\ref{fig:integration_package} shows a scheme of the relations between all the software
92 < components that will be described, the local files and the remote database.
93 < \paragraph*{FecTool}
94 < FecTool is a front-end Graphics User Interface (GUI) aimed to ease the creation of the device
95 < description, i.e.\ fec.xml. This program is used to launch
96 < two standalone applications deployed along with TrackerOnline: ProgramTest and FecProfiler.
97 < The first application can test the ring functionality, the connection to all devices reporting
98 < a list of detected hardware. Also the second application is able to retrieve a
99 < list of hardware connected to CCUs, but its output is the fec.xml file needed by TrackerOnline.\\
100 < By accessing the output of these two programs, the FecTool GUI enables the user to
101 < test the functionality of a string, or of a whole ring. FecTool also checks that the found hardware
102 < corresponds to what one expects to find in tracker's modules: for every $I^2C$ address
103 < there should be either 4 or 6 APVs, one PLL, one AOH, and so on.\\
104 < The user can also input the GeoId(s) of tested string(s) before starting the test. In this case
105 < FecTool also checks that the DCU Hardware Id read from each module matches the one declared
106 < in the construction database performing an important consistency check between what is
107 < registered on the integration database and what is really present on the structure spotting
108 < possible module registration error.\\
109 < Only if this last test is passed, the user is allowed to create the fec.xml description file
110 < needed to go forth with integration tests. Hence tests proceed with the data readout from
111 < modules, which rely on TrackerOnline.
112 < \paragraph*{The integration package}
113 < The xDAQ version installed on the integration setups is very user-unfriendly, and requires
114 < an expert user: many run-specific parameters are set manually and there is no
115 < input validation.
116 <
117 < \begin{figure}
118 < \centering
119 < \includegraphics[width=0.35\textwidth]{Figs/FedGuiMain.png}
120 < \caption{Main GUI window.}
121 < \label{fig:fedgui}
122 < \end{figure}
123 <
124 < A package interacting with TrackerOnline, capable
125 < of automatically setting all relevant parameters and performing all data collection at the end
126 < of a run has been written. This is a
127 < finite state machine which cycle through the needed states setting run-specific parameters
128 < in the xDAQ software.
129 <
130 < The state machine cycles through the following states:
91 > \subsubsection{Software}
92 > The software used to carry out integration tests is
93 > based on the CMS general data acquisition framework
94 > %{\em TrackerOnline}, the CMS
95 > %tracker implementation of a more general software,
96 > named xDAQ~\cite{ref:xdaq}.
97 > % which is the official CMS DAQ framework.
98 > In place of the database foreseen in the
99 > experiment, the integration version of TrackerOnline (the tracker-specific implementation of xDAQ) uses a set of
100 > xml files for all the configuration data needed to perform a test run.
101 > A description of the xml configuration files follows.
102 > %, i.e. all the parameters needed by the devices and the
103 > %software involved in the test run,
104 > %\begin{figure}[bth!]
105 > %\centering
106 > %\includegraphics[width=\textwidth]{Figs/fecmodulexml_2.pdf}
107 > %\caption{An example of data contained in fec.xml and module.xml files for one module.
108 > %Part of the data is not shown for simplicity.}
109 > %\label{fig:fecmodulexml}
110 > %\end{figure}
111 > The hardware and software configuration of the DAQ is written into the file
112 > named daq.xml. It reflects the setup used for the test and rarely needs to be changed
113 > during the integration procedures. The system settings uploaded, and
114 > read back for verification, by the FEC are contained in fec.xml.
115 > The data decoding map (i.e., the information needed to map each FED input to an APV
116 > pair of a specific module) is written in module.xml.
117 >
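The following minimal Python sketch illustrates the role of module.xml: it loads a connection table and resolves a FED input channel into the control-system coordinates (ring, CCU, $I^2C$ address) of the corresponding APV pair. The element and attribute names are hypothetical, since the actual module.xml schema is not reproduced here.
\begin{verbatim}
# Illustrative sketch: load a connection table like the one stored in module.xml
# and resolve FED input channels to control-system coordinates.
# Element/attribute names ("connection", "fed", "channel", "ring", "ccu",
# "i2c") are hypothetical; the real module.xml schema is not shown here.
import xml.etree.ElementTree as ET

def load_connections(path):
    """Return {(fed, channel): (ring, ccu, i2c)} for every APV pair."""
    table = {}
    for conn in ET.parse(path).getroot().iter("connection"):
        key = (int(conn.get("fed")), int(conn.get("channel")))
        table[key] = (int(conn.get("ring")),
                      int(conn.get("ccu")),
                      int(conn.get("i2c")))
    return table

# Example: which module feeds FED 1, input channel 3?
# connections = load_connections("module.xml")
# print(connections[(1, 3)])
\end{verbatim}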
118 > %\begin{description}
119 > %\item[Configuration of the DAQ hardware and software, daq.xml] The hardware
120 > %  and software configuration of the DAQ is written in a single xml file, which
121 > %  reflect the setup used for the test and has to be rarely changed
122 > %  during the integration procedures.
123 > %\item[Configuration of the control system, fec.xml] The most important
124 > %  part of this configuration section is the settings the FEC must
125 > %  upload to the modules and AOHs and in general any configurable
126 > %  $I^2C$ device, before starting the data
127 > %taking. Altough these settings are not specifically
128 > %  related to the control system, it is duty of the control system to
129 > %  write them to the devices' $I^2C$ registers and read them back for verification.
130 > %\item[Configuration of the readout, module.xml] All other information needed by the DAQ to
131 > %rearrange data coming from the FEDs is in module.xml that allows each
132 > %FEDs input channel to be mapped to an APV pair of a specific module as
133 > %identified by ring, CCU and $I^2C$ addresses (i.e. the correspondance
134 > %between readout coordinates and Control System coordinates);
135 > %Each row corresponds to an APV
136 > %pair, an AOH laser, an optical fibre and a FED input.\\
137 > %The second table of module.xml reassembles the information on a module basis.
138 > %Here all the active modules are listed, with a row for every module.
139 > %The Control System coordinates are repeated both in module.xml and fec.xml,
140 > %so that they can be used as a pivot between the 3 tables.\\
141 > %\end{description}
142 >
143 > The tasks performed by the integration software are the following:
144 > execution of the commissioning runs needed to optimally adjust the
145 > module parameters (preparation of fec.xml and module.xml); execution
146 > of the test runs with complete and automated logging; fast analysis
147 > for immediate feedback; archival of the xml configuration files to log the
148 > test conditions; archival of the raw data.
149 > %\begin{figure}
150 > %\centering
151 > %\includegraphics[width=.9\textwidth]{Figs/integration_package.pdf}
152 > %\caption{Scheme of the integration software.
153 > %\textit{Arrows}: a relation.
154 > %\textit{Gears}: an application.
155 > %\textit{Mouse:} an interactive application.
156 > %\textit{Sheet}: a file.}
157 > %\label{fig:integration_package}
158 > %\end{figure}
159 > %Figure~\ref{fig:integration_package} shows a scheme of the relations between all the software
160 > %components that will be described, the local files and the remote database.
161 > This is achieved by a specific set of components of the integration
162 > data acquisition software, as summarized in the following.
163 > \begin{description}
164 > \item[FecTool.]
165 > FecTool is a GUI-based front-end to two standalone
166 > applications, FecProfiler
167 > and ProgramTest, aimed at easing the creation of the device
168 > description.
169 > FecProfiler is able to detect
170 > the devices connected to the CCUs and builds the fec.xml file needed by
171 > TrackerOnline. FecTool takes care of checking that the detected devices
172 > correspond to the expected ones, i.e., per module, 4 or 6 APVs, one
173 > PLL, one AOH, and so on. ProgramTest allows the ring functionalities,
174 > i.e. the redundancy, to be tested in depth.
175 >
176 > The geographical identity of the strings under test must be
177 > entered so that FecTool can verify the match
178 > between the DCU Hardware Id read from each
179 > module and the one declared in the module database. This
180 > consistency check is crucial to spot possible errors in recording the
181 > location where a module is mounted during the assembly. If the check
182 > is passed, the fec.xml description file needed to proceed with the
183 > integration tests can be created.
184 > %Hence tests proceed with the data
185 > %readout from modules, which rely on .
186 > \item[The Integration Package.]
187 > %TrackerOnline as any xDAQ implementation requires an expert user as
188 > %many run-specific parameters must be set and there is no  input
189 > %validation.
190 > %\begin{figure}
191 > %\centering
192 > %\includegraphics[width=0.35\textwidth]{Figs/FedGuiMain.png}
193 > %\caption{Main GUI window.}
194 > %\label{fig:fedgui}
195 > %\end{figure}
196 > The integration setup is made more user-friendly by a special {\em
197 >  Integration Package}, a GUI-based front-end that interacts with
198 >  the data acquisition program to automatically set all relevant parameters
199 >  and to harvest all data at the end
200 > of a run. The package is organized as a finite state machine through which
201 >  the user cycles over the following integration test steps (a minimal sketch of the cycle is given after the list):
202   \begin{enumerate}
203 < \item TrackerOnline initialisation (only once)
204 < \item Ask for the desired run
205 < \item Launch run in TrackerOnline
206 < \item Wait for run completion (polling the number of acquired events)
207 < \item Stop data taking
208 < \item Launch fast data analysis package
209 < \item Ask user for data validation
210 < \item If acknowledged, pack all the data and log the run with proper data quality flag. If the run
211 < was a commissioning run, update fec.xml and module.xml
203 > \item DAQ initialisation (only once);
204 > \item choice of the desired run;
205 > \item execution of the run via the DAQ program;
206 > \item on run completion (i.e. after a given number of events), stop of the
207 >  data taking and execution of the fast data analysis;
208 > \item presentation of the run outcome on summary GUIs;
209 > \item on positive validation from the user, storage of the data together
210 >  with run logs and data quality flags;
211 > \item in case of commissioning runs, on positive validation from the
212 >  user, update of fec.xml and module.xml to be used from
213 >  now on.
214   \end{enumerate}
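A minimal sketch of this cycle, in Python and with purely hypothetical names (the real package drives xDAQ/TrackerOnline through its own interfaces), could look as follows:
\begin{verbatim}
# Illustrative sketch of the Integration Package run cycle described above.
# All names are hypothetical; the real package drives xDAQ/TrackerOnline
# instead of the toy "daq" object used here.
import time

def run_cycle(daq, run_type, n_events, validated_by_user):
    """Execute one integration run; return True if its data were archived."""
    daq.configure(run_type)                     # choice of the desired run
    daq.start()                                 # execution of the run
    while daq.acquired_events() < n_events:     # wait for run completion
        time.sleep(1)
    daq.stop()                                  # stop the data taking
    results = daq.fast_analysis()               # fast data analysis
    daq.show_summary(results)                   # run outcome on summary GUIs
    if not validated_by_user(results):          # user validation
        return False
    daq.archive(results)                        # data, run logs, quality flags
    if run_type == "commissioning":
        daq.update_xml(results)                 # refresh fec.xml / module.xml
    return True
\end{verbatim}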
215  
216 < For every required run, the integration package shows the proper TrackerOnline output
217 < through some GUIs, so that the user can acknowledge the outcome of the run and give the approval
218 < for archiving the run data or, in case of a commissioning run, for updating the parameter set.\\
219 < The TrackerOnline software computes the new parameter set for each commissioning run
220 < (except for the ``find connection'' run, as we'll see below) and writes
221 < it locally as a fec.xml file, which is retrieved by the integration package at need.\\
222 < A main integration database, for centralized archiving purposes,
223 < was also installed. Software implementing data analysis runs automatically on the
224 < archived files producing validation outputs, accessible througth a web interface,
225 < for every tested module.
216 > %For every required run, the integration package shows the TrackerOnline output
217 > %through some GUIs, so that the user can acknowledge the outcome of the run and give the approval
218 > %for archiving the data or, in case of a commissioning run, for updating the parameter set.\\
219 > %The TrackerOnline software computes the new parameter set for each commissioning run
220 > %(except for the ``find connection'' run, as we'll see below) and writes
221 > %it locally as a fec.xml file, which is retrieved by the integration package at need.\\
222 > A main integration database has been set up for centralized archiving purposes.
223 > For later reference, if needed, the data analysis
224 > can also be run on the archived files, with validation outputs made
225 > available through a web interface.
226 > \end{description}
227  
228   \subsection{Test Description}
229   \label{ref:test-description}
192 \paragraph*{Find connections:}
193 This commissioning run is used to know which module is connected to which FED input channel.
194 This procedure consists of switching on all the lasers of all AOH one by one, while checking the
195 signal on all the FED inputs. If the difference of the signal seen on a FED channel is above
196 a given threshold, the connection between that laser and input channel is tagged and registered
197 in the module.xml as a connection table.
198
199 \begin{figure}
200 \centering
201 \includegraphics[width=0.27\textwidth]{Figs/FedGuiChannels.png}
202 \caption{Connections displayed during the run}
203 \label{fig:fedguichan}
204 \end{figure}
230  
231 < At this point of the commissioning procedures both the descriptions for FECs
232 < (i.e. tracker hardware connected to the FECs) and for FEDs are present.
208 <
209 < \paragraph*{Time alignment:}
210 < This step is used to compensate the different delays in the control and readout chain,
211 < i.e. the connections between FECs and FEEs.
212 < This also makes the APVs' sampling time synchronous provided that AOH
213 < fibres and ribbons are the same length. The latter is not important during integration
214 < quality checks, as no external signal is ever measured, but it becomes so when
215 < one tries to detect ionising particles.\\
216 < This run type is relevant because, if FEDs are to sample properly
217 < the APV signal, the clock must go to the modules synchronously, with a skew of the order of a few ns.
218 < Also the clock coming to the three FEDs must be synchronous but this is guaranteed
219 < by using cables of the same length between the clock-generating
220 < board (TSC) and the FEDs themselves.\\
221 < The time allignment run
222 < makes use of the periodic tick mark signal sent by the APVs when it is clocked:
223 < after these devices receive a reset, they produce a tick mark signal every 70 clock cycles.
224 < During this run the FEDs continously sample the signals at the full clock frequency.
225 < This means that for every DAQ cycle the output of all APVs is measured as with a 40~MSample scope.
226 < After every cycle is completed all the PLLs' delays are icreased by (25/24)~ns
227 < (the minimum delay step), and the signal readout is performed again. After 24 such cycles the full
228 < tick mark signal is measured as with a 960~MSample scope.
231 > Each run type that can be chosen by the user corresponds to a
232 > commissioning run or to a test run, as described below.
233  
234 + \begin{description}
235 + \item[Find connections.] This commissioning run is used to associate each
236 +  FED input channel to a module. The procedure, repeated in sequence
237 +  for all AOH laser drivers, consists of switching on a single
238 +  AOH laser driver at a time while checking the signal on all the FED inputs. If
239 +  the change of the signal seen on a FED channel is above
240 + a given threshold, the connection between that laser and input channel is tagged and stored
241 + in module.xml.
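A minimal sketch of this scan, with hypothetical helper objects for the control system (to switch the AOH laser drivers on and off) and for the FEDs (to read the mean level of each input), could be:
\begin{verbatim}
# Illustrative sketch of the "Find connections" scan described above.
# The control / feds objects and the threshold value are hypothetical.
def find_connections(control, feds, lasers, threshold=50):
    """Return {laser: (fed_id, channel)} for every laser seen by a FED input."""
    connections = {}
    baseline = {(f, ch): feds.mean_level(f, ch)          # all lasers off
                for f in feds.ids() for ch in feds.channels(f)}
    for laser in lasers:
        control.switch_on(laser)                          # one laser at a time
        for (f, ch), base in baseline.items():
            if feds.mean_level(f, ch) - base > threshold: # level jump above threshold
                connections[laser] = (f, ch)              # tag the connection
        control.switch_off(laser)
    return connections
\end{verbatim}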
242 + \item[Time alignment.]
243 + This commissioning run measures the appropriate delays to be later set
244 + in the PLL delay registers.
245 + In this way the different delays in the control and
246 + readout chain are compensated, the clock arrives at the modules
247 + synchronously, with a skew of the order of a few ns, and the APV
248 + signals are properly sampled by the FEDs. This also requires the clock
249 + at all the FEDs to be synchronous, which is guaranteed
250 + by using cables of equal length between the TSC and the FEDs.\\
251 + The time alignment run uses the periodic tick mark signal issued by
252 + the idle APVs every 70 clock cycles. The APV signals are sampled by the FEDs in
253 + scope mode, i.e. without waiting for a header but continuously,
254 + sampling the inputs at the full clock frequency as with a 40~MSample/s
255 + scope. The measurement is repeated after all the PLL delays are
256 + increased by the minimum delay step, 25/24~ns. After 24 such cycles the
257 + idle APV output, and thus also the tick mark signal, is measured as with
258 + an effective 960~MSample/s scope.
259   \begin{figure}
260   \centering
261   \includegraphics[width=.6\textwidth]{Figs/tickmark.pdf}
# Line 235 | Line 264 | are marked. In the picture are reported
264   during the time alignment an interval of $1\,\mu\mathrm{s}$ is scanned.}
265   \label{fig:tick}
266   \end{figure}
267 < The DAQ takes the time delay between tick marks as a measurement of the difference in delays in the
268 < FEC-FEE connections and computes what delay must be set on each PLL in order to compensate this.
267 > The time differences between the various APV tick marks are a
268 > measurement of the relative delays introduced by the connections and
269 > can be used to compute the optimal delay to be set on each PLL for compensation.
270   The tick mark rising edge time $t_R$ is measured by taking the time corresponding to the highest
271   signal derivative (see Fig.~\ref{fig:tick}).
272   The best sampling point is considered $t_R+15\,\mathrm{ns}$, to avoid
273 < the possible overshoot. This is important also later, when measuring the analogue data frame, as
274 < it allows measuring the signal coming from each strip after transient effects due to the signal
275 < switching between strips are over.\\
276 < At the end of this procedure, the user is shown all proposed adjustments to PLL delay values.
277 < Then he can accept the outcome of the first time alignment run and, possibly,
278 < repeat it. If the time allignment has been done correctly maximum variation of two or less
279 < nanoseconds will be found. The delays are written in the correspoding xml file.
280 <
281 < \paragraph*{Laser Scan:}
282 < This run makes a scan of all AOH bias values for the four possible gain settings
283 < to determine the optimal gain and the corresponding optimal bias (see Fig.~\ref{fig:laserscan}).
284 < In this run the APV generated tick marks are sampled (a correctly done
285 < time allignment has to be done before) for all gain and bias values.
286 <
273 > the possible overshoot.
274 > %[???This is important also later, when measuring the analogue data frame, as
275 > %it allows measuring the signal coming from each strip after transient effects due to the signal
276 > %switching between strips are over.???]\\
277 > At the end of the procedure, the computed adjustments to the PLL delay
278 > values are proposed to the user. If accepted, the delays are written in
279 > fec.xml. If the setup is correctly aligned in time, a further time
280 > alignment procedure should not propose delay corrections greater than
281 > $\sim 2$~ns.
282 > %It is worth noticing that by the time alignment procedure all APVs are
283 > %made sampling synchronously, since in the integration setup AOH fibres
284 > %and ribbons are all equal in length. This is not important during
285 > %integration quality checks, as no external signal is ever measured,
286 > %but it would be so in trying to detect ionising particles.
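Assuming each channel's tick mark has been reconstructed as a list of samples on the fine 25/24~ns grid, the edge finding and delay computation could be sketched as follows (all names and the choice of the reference channel are hypothetical):
\begin{verbatim}
# Illustrative sketch of the tick mark timing analysis described above.
# 'waveforms' maps each channel to its reconstructed tick mark samples on
# the fine (25/24 ns) grid; names and the alignment convention are hypothetical.
def rising_edge_time(samples, step_ns=25.0 / 24.0):
    """Time of the highest signal derivative, i.e. the tick mark rising edge."""
    derivatives = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    return derivatives.index(max(derivatives)) * step_ns

def pll_corrections(waveforms, step_ns=25.0 / 24.0):
    """Per-channel delay (in PLL steps) that aligns all sampling points.

    The best sampling point of each channel is taken 15 ns after its
    rising edge; channels are aligned to the latest one.
    """
    sampling_points = {ch: rising_edge_time(w, step_ns) + 15.0
                       for ch, w in waveforms.items()}
    reference = max(sampling_points.values())
    return {ch: round((reference - t) / step_ns)
            for ch, t in sampling_points.items()}
\end{verbatim}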
287 > \item[Laser Scan.] In this commissioning run a scan over the bias and
288 >  gain settings is done to determine the optimal working point for
289 >  each AOH.
290 > The procedure requires a successful time alignment since again the tick marks are
291 > sampled, in this case while the gain and bias values are changed.
292 > %
293   \begin{figure}[t!]
294   \begin{center}
259 \subfigure[A pictorial representation of a tick mark as produced by the APVs (dotted)
260 and as transmitted by the lasers (solid) when the laser driver's bias is too
261 low (left), correct (centre) or too high (right), with the subsequent signal saturation.]
262 {
263        \label{fig:laserscan}
264        \includegraphics[width=.9\textwidth]{Figs/laserscan.pdf}
265 }
266 \\
295   \subfigure[The sampled tick mark top and baseline as a function of the laser driver's bias.]
296   {
297          \label{fig:gainscan_basetop}
# Line 275 | Line 303 | low (left), correct (centre) or too high
303          \label{fig:gainscan_range}
304          \includegraphics[width=.45\textwidth]{Figs/gainscan_range.pdf}
305   }
306 + \subfigure[A pictorial representation of a tick mark as produced by the APVs (dotted)
307 + and as transmitted by the lasers (solid) when the laser driver's bias is too
308 + low (left), correct (centre) or too high (right), with the subsequent signal saturation.]
309 + {
310 +        \label{fig:laserscan}
311 +        \includegraphics[width=.9\textwidth]{Figs/laserscan.pdf}
312 + }
313   \caption{Plots computed during a gain scan run.}
314   \end{center}
315   \end{figure}
316 < For each trigger sent to FEDs, the tick mark's top is sampled twice and all other samplings fall
317 < on the baseline. The highest samples are used to estimate the higher limit of the signal
318 < for a given gain/bias pair and the lower values provide an estimate of the lower limit.
319 < For each gain value three plots are computed: in the first two the high and low edges of
320 < the tick mark are represented as a function of bias (Fig.~\ref{fig:gainscan_basetop})
321 < and the third, being the difference between the former, represent the dynamic range
322 < as a function of bias (Fig.~\ref{fig:gainscan_range}).\\
323 < For each laser these plots are created 4 times: one for each possible gain value. The best laser
324 < gain is computed as that providing an overall gain as close as possible to a given optimal value,
325 < the overall gain being estimated as the slope of the first two curves in their central section.
326 < The best bias value is taken as the one maximising the tick mark height keeping
327 < the maximum and minimum not saturated.\\
328 < After the run is completed, values proposed by TrackerOnline are shown to the user, who
329 < intervenes if there is any abnormal proposed gain, which may indicate a problem either on the
330 < AOH or on the fibre.
331 <
332 < \paragraph*{VPSP Scan:}
333 < This is the first run with FEDs with the header finding function active. In this run the
334 < trigger is dispatched also to modules, which send their data frames.
335 < During this run a scan of VPSP parameter is performed on the modules, and for each value
336 < their frames are acquired several times. As there is no physics
337 < signal on the detectors, the sampled signal is a measurement of the pedestal of the analog channels.
338 < The average strip pedestal is computed for every APV as a function of the VPSP parameter and
339 < at the end of the run the best VPSP pedestal is computed as that which moves the baseline
340 < to 1/3 of the dynamic range. This choice avoids setting a baseline too near the
341 < lower saturation value leaving anyway enough range for possible signals from particles
342 < ($\sim 6$ MIP equivalent).
343 < At the end of the run computed values are proposed to the user for approval and written
344 < to the xml file.
345 <
316 > %
317 > For each trigger sent to FEDs, the tick mark is sampled twice and all other samplings fall
318 > on the baseline. The tick mark's top samples are used to estimate the higher limit of the signal
319 > for a given gain/bias setting pair and the baseline samples provide an estimate of the lower limit.
320 > For each gain setting the tick mark top samples and the baseline samples are measured as a
321 > function of the bias, as shown in Fig.~\ref{fig:gainscan_basetop}. For
322 > each bias setting their difference is the tick mark height, shown in Fig.~\ref{fig:gainscan_range},
323 > which represents the dynamic range as a function of the bias.\\
324 > The best bias setting for a given gain setting is taken as the one maximising the tick mark height keeping
325 > the maximum and minimum not saturated, as pictorially represented in Fig.~\ref{fig:laserscan}.
326 > The same measurement is done for each possible gain setting.
327 > The best laser gain setting is the one providing an overall gain of
328 > the optical chain as close as possible to the design one, 0.8~\cite{ref:gain}.
329 > The overall gain is estimated from the slope of the curves of
330 > Fig.~\ref{fig:gainscan_basetop} in their central section.
331 > After the run is completed a set of values is proposed to the
332 > user. Abnormal gain values may indicate a problem either on the AOH or
333 > on the fibre and are investigated.
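As an illustration of the bias choice for one gain setting (the saturation limits and the data layout are hypothetical; the selection among the four gain settings via the slope of the curves is not sketched here):
\begin{verbatim}
# Illustrative sketch of the best-bias selection described above: the largest
# tick mark height with neither edge saturated. Names and limits are hypothetical.
ADC_MIN, ADC_MAX = 0, 1023          # hypothetical FED saturation limits

def best_bias(curve):
    """curve: {bias: (tick_top, baseline)} in ADC counts for one gain setting."""
    candidates = [(top - base, bias)
                  for bias, (top, base) in curve.items()
                  if base > ADC_MIN and top < ADC_MAX]   # reject saturated points
    height, bias = max(candidates)                       # maximise the tick height
    return bias
\end{verbatim}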
334 > \item[VPSP Scan.]
335 > This commissioning run is devoted to optimising the pedestal of
336 > the APV, i.e. the average output level in the absence of any signal, with
337 > respect to the dynamic range of the FEDs. This level is managed by a
338 > specific APV register, known as {\em VPSP}, which controls a voltage
339 > setting within the deconvolution circuitry. The procedure consists of
340 > a scan of the VPSP values while acquiring data frames from the modules in the
341 > standard way.
342 > %%, i.e. trigger sent to the modules and FEDs in
343 > %%``header finding'' mode.
344 > In the final tracker operation the optimal VPSP setting corresponds to a pedestal baseline placed
345 > around 1/3 of the available dynamic range. This is a good compromise to keep
346 > the baseline not too close to the lower saturation value while leaving a good
347 > range for particle signals, corresponding to $\sim 6$ MIP.
348 > During the integration tests the only important point is to keep the baseline
349 > away from the saturation levels; in this way the module noise measurements will
350 > not be affected by the wrong common mode subtraction which occurs in case
351 > of events with baseline saturation.
352 > At the end of the run a set of values is proposed to the user and,
353 > if approved, written in the relevant xml file.
354   \begin{figure}
355   \begin{center}
356 < \subfigure[Strip pedestals of a module in ADC counts vs.\ strip number.]
356 > %\subfigure[Strip pedestals of a module in ADC counts vs.\ strip number.]
357 > \subfigure[]
358   {
359          \label{fig:saturationpedestal}
360          \includegraphics[width=.45\textwidth]{Figs/saturation_pedestal.pdf}
361   }
362   \hspace{5mm}
363 < \subfigure[Strip noise of a module in ADC counts vs.\ strip number.]
363 > %\subfigure[Strip noise of a module in ADC counts vs.\ strip number.]
364 > \subfigure[]
365   {
366          \label{fig:saturationnoise}
367          \includegraphics[width=.45\textwidth]{Figs/saturation_noise.pdf}
368   }
369 < \caption{The pedestal of strips after \#{}640 is low, approaching to the bottom of the
370 < dynamic range, and their noise is therefore lower.}
369 > \caption{Pedestal (left) and noise (right) vs. strip number for a 6 APV module.
370 > The pedestals of strips after strip \#{}640 are low, approaching the bottom of the
371 > dynamic range. Their noise is therefore altered with respect to the unsaturated
372 > channels.}
373   \label{fig:saturation}
374   \end{center}
375   \end{figure}
376 < This run is not normally performed during the integration, as a default value was set on
377 < all the APVs after a measurement on a sample. Anyway it is sometimes needed because the optimal
378 < VPSP value may change from APV to APV and is also strongly dependent on temperature.
379 < An example of this is shown in Fig.~\ref{fig:saturation}, where
380 < only a few readout channels suffer from a signal saturation, while most of the strips of a module
381 < are placed correctly inside the dynamic range.
382 < This usually happens because the pedestal values of the channels of an APV are different one
383 < from another and may have a dependency on the strip index like that shown in
384 < Fig.~\ref{fig:saturationpedestal}.
385 < In these cases when the pedestal of a strip approaches to the edge of dynamic range     the
386 < APV-AOH output is no more linear and the channel's RMS is lower, (see Fig.~\ref{fig:saturationnoise}).
387 < This is one of the most frequent problems with the pedestals and it is solved
388 < by a VPSP scan.
389 <
390 < \paragraph*{Pedestal and Noise:}
391 < This is the main run for qualifying the detector performances during
392 < the integration. The 400V bias are applied to the module under test which are
393 < also checked for any possible overcurrent or breakdown.\\
394 < Triggers are sent to the modules and FEDs are placed in header recognition mode.
348 < All the analogue frames from the modules are acquired and for each channel both the average value
349 < and the RMS are computed.
350 < Two analyses are performed on these data: one is done online by the TrackerOnline software, and
351 < another one is performed offline through procedures taken from the ORCA package (ORCA was the
352 < official reconstruction package of CMS at the time, now substituted by CMSSW) and run
353 < on the files containing the raw data as acquired by FEDs in just the same way as it would
354 < do with data coming to a Filter Farm.\\
376 > The VPSP scan is not systematically performed during the integration,
377 > since the default VPSP setting is adequate in most
378 > cases. Nevertheless, the optimal VPSP values change considerably within
379 > the APV population and are strongly temperature dependent, and it is
380 > rather common to have a situation in which the pedestals of a few readout
381 > channels approach the lower edge of the dynamic range
382 > (Fig.~\ref{fig:saturationpedestal}), resulting in a lower RMS (see
383 > Fig.~\ref{fig:saturationnoise}). The VPSP scan allows this issue to be
384 > fixed.
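A minimal sketch of the VPSP choice under final-tracker conditions, assuming the scan result is available as a map from VPSP value to average pedestal (the names and the dynamic-range value are hypothetical):
\begin{verbatim}
# Illustrative sketch of the VPSP optimisation described above: pick the VPSP
# value whose average pedestal baseline is closest to 1/3 of the FED dynamic
# range. The scan structure and the range value are hypothetical/simplified.
FED_RANGE = 1024                      # hypothetical dynamic range in ADC counts

def best_vpsp(scan, target_fraction=1.0 / 3.0):
    """scan: {vpsp_value: average_pedestal_in_adc_counts} for one APV."""
    target = target_fraction * FED_RANGE
    return min(scan, key=lambda vpsp: abs(scan[vpsp] - target))
\end{verbatim}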
385 > \item[Pedestal and Noise Run.]
386 > This is the main run type for qualifying the performances during
387 > the integration. Typically a bias of 400~V is applied to the modules
388 > under test to check for any possible overcurrent or breakdown.\\
389 > Triggers are sent to the modules and the FEDs work in ``header finding'' mode.
390 > All the analogue frames from the modules are collected and two analyses
391 > are performed on these data: online, by the TrackerOnline
392 > software; offline, in a way very similar to the final experiment, by
393 > using algorithms of the ORCA package~\cite{bib:orca}, the CMS
394 > reconstruction package at that time, now replaced by CMSSW.
395   The average value of the signal read on each strip is an estimate of
396   its pedestal, while the RMS is a good estimate of its noise, provided that the noise itself
397   is Gaussian, which is true to a first approximation. This value is often referred to as
398 < \textit{raw noise}, as opposed to the \textit{common-mode subtracted noise} (or CMN). The latter
399 < can be computed after pedestals are measured, which happens after the first hundreds of frames
400 < are acquired.
401 < The common mode noise subtraction performed by ORCA and TrackerOnline is similar to that
402 < performed by the final FEDs.
403 < This subtraction
404 < eliminates the Common Mode Noise contribution to a specific event. After the Common
405 < Mode Noise subtraction the noise can be computed again as the RMS of the remaining signal on strips
406 < and this new noise measurement is called Common Mode Subtracted Noise (or just CMN).\\
407 < Because of the difference in gain between the various optical links to compare the noise
408 < on different APV pairs a renormalization is needed.
409 < To implement this a gain measuring procedure to correct
410 < noise measurements has been done. When FEDs acquire analogue frames, they store all acquired
411 < raw data, comprising the samplings on the digital header. As the digital header has
412 < the same amplitude on every APV, its measurement in terms of FED ADC counts
413 < was used as an estimate of the relative gain of optical links.
414 < This method allows a contextual measurement of noise and gain, and it is accurate,
415 < provided that there is no signal saturation both on low and high values.\\
416 < The normalisation factor was chosen so as to bring the normalised header height to 220 ADC counts,
377 < which was the value of header's height as it was read out in the module test setup
378 < of the module production line. This allowed the scaled measurements to be easily
379 < compared with those done during module production tests. Both normalised CMN noise and raw noise
380 < are shown to the user, while only uncalibrated CMN noise is plotted as a reference.
381 < Also the distribution of normalised CMN and raw strip noise is computed and shown to
382 < complete the information.
383 <
398 > \textit{raw noise}, as opposed to the \textit{common-mode subtracted
399 >  noise} (or CMN). The latter is the RMS computed after having
400 > subtracted the {\em common mode noise}, i.e. the correlated noise-like fluctuation
401 > common to a given group of channels (typically an entire APV).
402 > The common mode noise subtraction method implemented in ORCA and TrackerOnline is
403 > similar to that performed by the final FEDs (a sketch of these noise definitions is given after this list).\\
404 > Because of the difference in gain between the various
405 > optical links, a noise comparison between different APV pairs requires a
406 > normalization. This procedure relies on the digital
407 > headers whose amplitude, being the same on each APV, is used to
408 > estimate the relative gain of the optical links so as to apply an
409 > appropriate correction. In this way noise and gain are
410 > simultaneously measured, provided that the signal is not saturated at either the low or the high end.
411 > The normalisation factor is chosen so as to bring the normalised
412 > header height to 220 ADC counts, as measured in quality controls
413 > during the module production. This allows the scaled measurements to be easily
414 > compared with those done during the module production tests~\cite{ref:modtest}.\\
415 > At the end of the run, pedestal and noise profiles and distributions
416 > are shown to the user.
417   \begin{figure}[t!]
418   \centering
419   \includegraphics[width=0.6\textwidth]{Figs/noiseprofile.pdf}
# Line 389 | Line 422 | and y axis represent the noise in ADC co
422   normalised noise).}
423   \label{fig:noiseprofile}
424   \end{figure}
425 < Figure~\ref{fig:noiseprofile} shows an example of the output
426 < at the end of a noise measurement run: the normalised raw noise and CMN and the uncalibrated CMN
425 > Figure~\ref{fig:noiseprofile} shows an example of the noise output:
426 > the normalised raw noise and CMN and the uncalibrated CMN
427   for each strip are plotted against the strip index. The first 256 strips belong to the
428   first APV pair and are multiplexed to a single optical line and the strips from 257 to 512
429   belong to the second APV pair. It can be noted here that the
430 < raw noise without normalisation suffers from a different gain of optical links
431 < and that after the normalisation procedures the noise level is the same
432 < for the two APV pairs.\\
400 < At the end of this run, once the outcomes are showed to the user, he can decide whether to
401 < store these data or to cancel the procedure. In the first case data are packed along with
430 > raw noise without normalisation reflects the different gains of the
431 > optical links; this is corrected by the normalisation procedure.\\
432 > If validated by the user, data are packed along with
433   possible comments and sent to the central archive system, where they are processed
434 < again and made available on a web page. The system also automatically recognises where
435 < the modules where mounted by checking their DCU Hardware Id (which is written in the fec.xml file)
436 < in the construction database, this information is stored in the test table of
437 < the integration database and allows to build a geographical table of mounted modules with
438 < a link to a page containing all the tests performed for each module.\\
434 > again and made available on a web page.
435 > %The system also automatically recognises where
436 > %the modules where mounted by checking their DCU Hardware Id (which is written in the fec.xml file)
437 > %in the construction database, this information is stored in the test table of
438 > %the integration database and allows to build a geographical table of mounted modules with
439 > %a link to a page containing all the tests performed for each module.\\
440 > \end{description}
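To make the pedestal and noise definitions used above concrete, the following sketch computes the per-strip pedestal, raw noise and common-mode subtracted noise from a set of acquired frames and applies the header-based normalisation to 220 ADC counts; the data layout and names are hypothetical and no attempt is made to reproduce the actual ORCA/TrackerOnline algorithms.
\begin{verbatim}
# Illustrative sketch of the pedestal / noise quantities described above.
# 'frames' is a list of events, each a list of ADC values for the strips of
# one APV; 'header_height' is the measured digital header amplitude for the
# same optical link. Names and structure are hypothetical.
from statistics import mean, pstdev

def pedestals(frames):
    """Per-strip pedestal: average signal over the acquired frames."""
    return [mean(strip_values) for strip_values in zip(*frames)]

def raw_noise(frames):
    """Per-strip raw noise: RMS of the signal over the acquired frames."""
    return [pstdev(strip_values) for strip_values in zip(*frames)]

def cmn_noise(frames, peds):
    """Common-mode subtracted noise: per event, subtract pedestals and the
    residual fluctuation common to the whole APV, then take the RMS."""
    residuals = []
    for event in frames:
        ped_sub = [x - p for x, p in zip(event, peds)]
        common_mode = mean(ped_sub)               # common shift of this event
        residuals.append([x - common_mode for x in ped_sub])
    return [pstdev(strip_values) for strip_values in zip(*residuals)]

def normalised(noise, header_height, reference=220.0):
    """Rescale the noise so that the header height becomes 220 ADC counts."""
    return [n * reference / header_height for n in noise]
\end{verbatim}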
441  
442   %%%%%%% aggiunta C.G.
443 < \subsubsection{Single Module}
444 < After a module of a string was mounted, its basic functionalities were tested.
445 < A fast test on the I$^2$C communication permitted to spot possible electrical problems
446 < in the module front-end electronics in the AOH or in the mother cable. Since  the safe removal of a MC
447 < from the shell requires the dismounting of al the modules of a string, it is very important
448 < to perform this test of as soon as the first module of a string is integrated. After the
449 < I$^2$C test FecTool checks the identity of the components against the construction database.
450 < The results of the tests can then be monitored through a web page.
451 <
452 < \subsubsection{String}
453 < When the third and last module of a string is mounted the commissioning runs described in section~\ref{ref:test-description} are performed after the I$^2$C communication tests. The first run, ``Find connections'' permitted to check the full functionality of the AOH. Since the AOHs can be tested only after the modules are mounted this is the first test which can spot possibly broken fibres. It was necessary to perform this test after the integration of each string, because the subtitution of an AOH implies the dismounting of all the modules of the string mounted between the AOH and the front flange.\\
454 < After this test a ``Time Alignement'' run, a  ``Laser scan'' run and finally a ``Pedestal and Noise'' run with HV on were performed. The ``Laser scan'' run was done limiting the laser gain to the lower value which was found to be optimal for all the AOHs in the integration setup.
455 <
456 < \subsubsection{Control Ring and Redundancy}
457 < The control electronics can be fully tested only after complete assembly of a shell. This last test forsees a check of the correct operation of the ring and of the redundancy circuitry. A failure on a CCU or a control cable can be immediatly spotted as it causes the interruption of the ring; the communication with the other components is then checked using ProgramTest.\\
458 < Finally the test of redundancy circuitry is performed by-passing each CCU one by one and verifying the correct response of the ring. The test is successful only if both the primary and secondary circuits of the DOHM are working correctly and if the CCUs are connected in the right order to the DOHM ports.
443 > The basic run types and tests described above are appropriately
444 > combined according to the device or group of devices under test.
445 > \begin{description}
446 > \item[Single Module.] After a module is mounted, its basic
447 >  functionalities can be tested. In particular, a fast test of the
448 >  I$^2$C communication permits possible
449 > electrical problems to be spotted in the module front-end electronics,
450 > in the AOH or in the mother cable. For the mother cable and the AOH this is of
451 >  particular importance: an AOH can in practice be tested only
452 >  after the corresponding module is mounted, and this is the first
453 >  test which can spot possibly broken fibres; similarly for the MC, it
454 >  is very important to perform the test as soon as possible. In
455 >  fact, a safe removal and replacement of either an AOH or the MC is a
456 >  difficult intervention, possibly requiring the dismounting of many
457 >  modules already put in place. \\
458 >  During the
459 > I$^2$C test FecTool also checks the identity of the components against
460 > the construction database and an alarm is issued in case of mismatch.
461 > The results of the tests can then be
462 > monitored through a web page.
463 > \item[String.]
464 > When the third and last module or double-sided assembly is mounted on
465 > a string, all the commissioning runs described above are performed
466 > just after the I$^2$C communication tests. The ``Find
467 > connections'' run allows the accessibility of the devices to be checked.
468 > Afterwards a ``Time Alignment'' run, a ``Laser scan'' run and
469 > finally a ``Pedestal and Noise'' run with the bias voltage at 400~V are performed. The
470 > ``Laser scan'' run is done limiting the laser gain to the lowest value,
471 > which is found to be enough for the integration setup needs.
472 > \item[Control Ring and Redundancy.]
473 > The control electronics can be fully tested only after the complete
474 > assembly of a shell. This last test foresees a check of the correct
475 > operation of the ring and of the redundancy circuitry. A failure on a
476 > CCU or a control cable can be immediately spotted as it causes the
477 > interruption of the ring; the communication with the other components
478 > is then checked using ProgramTest. Finally the test of the redundancy
479 > circuitry is performed by bypassing each CCU one by one and verifying
480 > the correct response of the ring (a minimal sketch of this loop follows the
481 > list). The test is successful only if both the primary and secondary circuits
482 > of the DOHM are working correctly and if the CCUs are connected in the right order to the DOHM ports.
483   %%%%%%%% fine aggiunta C.G.
484 + \end{description}
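A minimal sketch of the redundancy loop referred to above, with a hypothetical ring object standing in for the real control software:
\begin{verbatim}
# Illustrative sketch of the redundancy test described above: each CCU is
# bypassed in turn and the ring is checked to still respond. The 'ring'
# object and its methods are hypothetical stand-ins for the real software.
def redundancy_test(ring, ccus):
    """Return the list of CCUs whose bypass breaks the control ring."""
    failures = []
    for ccu in ccus:
        ring.bypass(ccu)                 # route the ring around this CCU
        if not ring.responds():          # the ring must still close correctly
            failures.append(ccu)
        ring.restore(ccu)                # re-insert the CCU before the next step
    return failures
\end{verbatim}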
485  
486   \section{Safety of operations}
487   The integration procedure posed many possible problems in the safety of operations,
