Comparing UserCode/TIBTIDNotes/TIBTIDIntNote/IntegrationTests.tex (file contents):
Revision 1.1 by sguazz, Tue Jan 20 11:13:35 2009 UTC vs.
Revision 1.4 by carlo, Thu May 21 10:26:25 2009 UTC

# Line 5 | Line 5 | The CMS tracker has been designed to sur
5   for maintenance. Its various components have been tested during the production
6   to meet stringent quality requirements. A few important problems have been spotted and
7   solved.\\
8 < During the TIB integration all the operations have been monitored step by step by a chain of tests
9 < aimed at a final control of the components just after the installation and at a verification of the
10 < shell overall quality and functionality. The step by step tests are of particular importance
8 > During the TIB/TID integration all the operations have been monitored step by step by a chain of tests,
9 > described here, aimed at a final check of the components just after installation.
10 > A verification of the
11 > overall shell quality and functionality in conditions similar to the final ones
12 > has later been performed during the so-called burn-in tests~\cite{ref:burnin}.
13 > The step-by-step tests are of particular importance
14   because in most cases it is very difficult and in some cases even dangerous
15   to replace a single faulty component when it is embedded in a fully equipped shell.
16  
17   \subsection{Test Setup}
18   \label{sec:TestSetup}
19 < The integration activities started well before the tracker
19 > The integration started well before the tracker
20   final data acquisition hardware and software
21 < were available to the Collaboration, and thus had to rely on prototype peripherals
22 < and a developing software freezing at the integration start up time.
23 < Several further upgrades were actually implemented in time, but they all relied on the
24 < same version of the underlying framework.
21 > were available to the Collaboration. Integration tests thus had to
22 > rely on prototype DAQ hardware and peripherals, and on software versions
23 > that have been frozen to ensure consistent conditions during the
24 > time-span of the integration activities, except for a few minor upgrades and bug
25 > fixes.
26  
27   \subsubsection{Hardware}
28   Here a brief account of the hardware used in DAQ for the integration
# Line 40 | Line 44 | is given (see Fig.~\ref{fig:integration
44   \label{fig:integration daq}
45   \end{figure}
46  
47 < The A/D conversion is managed by FEDs~\cite{bib:fedpci},
47 > \begin{description}
48 > \item[TSC] The Trigger Sequencer Card or TSC~\cite{ref:tsc} generates the
49 > 40~MHz clock for the entire system as well as the triggers, either
50 > internally via software or by accepting external inputs. It has up to four
51 > electrical clock/trigger outputs, enough to drive the FEDs used during the
52 > integration, and an optical clock/trigger output for the FEC.
53 > The TSC may also generate the reset and calibration signals, which are
54 > encoded on the clock/trigger line.\\
55 > \item[FED] The analog-to-digital conversion is done by special PCI FEDs,
56 > %~\cite{bib:fedpci},
57   with electrical differential analog input, mounted on
58   PCI carrier boards and installed in an industrial PC.
59 < The opto-electrical conversion of the analog signals is done externally by a 24-channel unit.
59 > The opto-electrical conversion of the analog signals coming from the module under test
60 > is done externally by a 24-channel unit.
61   A setup containing 3 FEDs, with the electro-optical converter, is able
62 < to readout 48 APV; this is equivalent to 12 single sided modules (4 complete strings)
63 < or 4 double sided modules (1 string plus 1 module). This is more than what is needed to
64 < test the strings during the module installation.\\
65 < The trigger and clock signals
66 < were provided by the Trigger Sequencer Card (or TSC,~\cite{bib:specs:tsc}). It has up to four
67 < electrical clock/trigger outputs, which were enough to drive the FEDs, and an optical clock/trigger
68 < output for the FEC.
69 < This card is used to generate a 40~MHz clock and provide it to the system and also to generate
70 < triggers, either internally via software or by accepting external inputs.
71 < The TSC may also generate the RESET and CALIBRATION signal, by coding them properly on the
72 < clock/trigger line.\\
73 < The FEC mezzanine used during the integration
74 < was laid on a PCI carrier, supporting trigger/clock optical input as fed by the TSC. Its
75 < output was also optical, and could be directly connected to DOHs on the DOHM. \\
76 < The peculiarity of a PCI FED with respect to a final VME FED, other than having electrical inputs
77 < instead of optical ones, is its timing system: a final FED is able to recognise which
78 < conversion count
79 < corresponds to a frame header coming from a module on each input channel separately,
80 < while a PCI FED has this capability (\textit{header finding}) implemented only on the first channel
81 < out of the eight available and assumes that the signals coming from the modules are synchronised.
82 < This makes the PLL-based time alignment procedure even more important in a setup with PCI FEDs.
83 < Furthermore, PCI FEDs are not able to insert a programmable delay on their inputs and thus
84 < it is important that the clock/trigger connections from the TSC to FEDs have all the same delay.
85 < Last, PCI FEDs cannot perform an online pedestal subtraction and zero suppression (they cannot
86 < run in Processed Raw data nor in Zero Suppression mode).
62 > to read out 48 APV25s; this is equivalent to 12 single-sided modules
63 > %(four complete strings)
64 > or four double-sided module assemblies.
65 > %(one string plus one module).
66 > These figures are
67 > perfectly suited for the tests during the integration.\\
68 > Since the readout of the data from the APV25s is not
69 > synchronous with the L1 trigger, a crucial capability of the FED is
70 > \textit{header finding}, i.e. the automatic tagging of the
71 > analog data stream from the APV25 pairs with respect to the idle
72 > signal at its inputs. This is possible since each APV25 embeds the
73 > analogue data stream within a {\em digital frame} made up of a leading
74 > digital header and a trailing tick-mark.
75 > %The peculiarity of the PCI FED
76 > %with respect to the VME FED that will be used in the experiment, other than having electrical inputs
77 > %instead of optical ones, is the timing system: the VME FED is able to
78 > %perform the header finding on each input channel independently; the PCI FED has
79 > %this capability only on the first channel of the eight available
80 > %and assumes that the all input signals are synchronised.
81 > %This makes the PLL-based time alignment procedure of crucial
82 > %importance in a setup with PCI FEDs. Furthermore, PCI FEDs are not
83 > %able to insert a programmable delay on their inputs and thus
84 > %it is important that the clock/trigger connections from the TSC to
85 > %FEDs have all the same delay. Last, PCI FEDs cannot perform an online
86 > %pedestal subtraction and zero suppression (they cannot run in
87 > %Processed Raw data nor in Zero Suppression mode).
88 > \item[FEC] The special FEC used during the integration is the {\em FEC
89 >  mezzanine}, also installed in a PC on a PCI carrier. It accepts
90 >  the optical trigger/clock provided by the TSC and features an
91 >  optical output directly connected to the DOHs on the DOHM.
92 > \end{description}
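As an illustration of the header-finding idea described above (and not of the FED firmware itself), the sketch below locates a frame start in a sampled APV25-like stream. All signal levels, the threshold, and the helper names are toy assumptions; the frame layout (3 high start bits in a 12-sample digital header followed by 128 analog samples, idle tick marks every 70 clocks) follows the usual APV25 format and is itself an assumption beyond what the note states.

```python
# Illustrative sketch of FED "header finding" on an APV25-like stream.
# Levels, threshold, and layout constants are toy assumptions.

DIGITAL_ONE, BASELINE = 1.0, 0.0
THRESHOLD = 0.5

def find_frame_start(samples):
    """Return the index where a frame header begins: the first position
    with three consecutive samples above threshold (idle tick marks are
    single high samples, so they never match)."""
    for i in range(len(samples) - 2):
        if all(s > THRESHOLD for s in samples[i:i + 3]):
            return i
    return None

def make_stream(frame_at, analog=None):
    """Build a toy sampled stream: idle baseline with single-sample tick
    marks every 70 clocks, then a frame (3 start bits + 9 header bits +
    128 analog samples) starting at position `frame_at`."""
    analog = analog or [0.3] * 128
    stream = [BASELINE] * (frame_at + 140)
    for t in range(0, frame_at, 70):          # idle tick marks
        stream[t] = DIGITAL_ONE
    header = [DIGITAL_ONE] * 3 + [BASELINE] * 9
    stream[frame_at:frame_at + 140] = header + analog
    return stream

stream = make_stream(frame_at=75)
start = find_frame_start(stream)
payload = stream[start + 12:start + 12 + 128]  # the 128 analog samples
```

The single-sample idle ticks never satisfy the three-consecutive-highs condition, which is what lets the search skip them and lock onto the real header.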
93  
74 \subsubsection{Software}
75 The software used to carry out integration tests was essentially the CMS
76 tracker implementation of a more general software, named xDAQ, which is the official CMS
77 DAQ framework. This implementation is known as TrackerOnline.\\
78 The present version of TrackerOnline makes use of a set of
79 xml files which store the parameters needed by all the devices present on the structure
80 to be tested. In the final implementation of the software the use of a database is foreseen.
81
82 \begin{figure}[bth!]
83 \centering
84 \includegraphics[width=\textwidth]{Figs/fecmodulexml_2.pdf}
85 \caption{An example of data contained in fec.xml and module.xml files for one module.
86 Part of the data is not shown for simplicity.}
87 \label{fig:fecmodulexml}
88 \end{figure}
89 The information on a given setup can be divided in two sections: one describing the
90 readout hardware (and corresponding software) setup and another describing which part of
91 the tracker is going to be connected to the Control System and to FEDs.\\
92 The hardware/software setup is written in a single xml file, which was prepared once
93 and for all at the start of the integration.
94 The information regarding the readout tracker section
95 is stored in two more files, commonly named fec.xml and module.xml. The first
96 contains all data which the FEC will need to download to modules before starting the data
97 taking, that is all the values to be written in devices' $I^2C$ registers.
98 The second contains all other
99 information needed by the DAQ setup to rearrange data coming from the FEDs: from an input channel
100 based indexing to a module based one (see Fig.~\ref{fig:fecmodulexml}).\\
101 The module.xml file contains two tables: the first joins FEDs input channel indexes
102 with the respective modules' ring, CCU and $I^2C$ address (i.e.\ readout coordinates
103 with Control System coordinates). Each row corresponds to an APV
104 pair, an AOH laser, an optical fibre and a FED input.\\
105 The second table of module.xml reassembles the information on a module basis.
106 Here all the active modules are listed, with a row for every module.
107 The Control System coordinates are repeated both in module.xml and fec.xml,
108 so that they can be used as a pivot between the 3 tables.\\
109 It is clear that these files have to be archived along with raw data taken during a run, and
110 to do it automatically is the first task of an integration validation software.
111 Another required task is an automatic run logging; a fast data analysis is also
112 desirable.
113 Last, but not least, this software should easily allow the user to perform all preliminary
114 (commissioning) runs, adjusting modules' parameters accordingly.
115
116 \begin{figure}
117 \centering
118 \includegraphics[width=.9\textwidth]{Figs/integration_package.pdf}
119 \caption{Scheme of the integration software.
120 \textit{Arrows}: a relation.
121 \textit{Gears}: an application.
122 \textit{Mouse:} an interactive application.
123 \textit{Sheet}: a file.}
124 \label{fig:integration_package}
125 \end{figure}
126
127 Figure~\ref{fig:integration_package} shows a scheme of the relations between all the software
128 components that will be described, the local files and the remote database.
129 \paragraph*{FecTool}
130 FecTool is a front-end Graphics User Interface (GUI) aimed to ease the creation of the device
131 description, i.e.\ fec.xml. This program is used to launch
132 two standalone applications deployed along with TrackerOnline: ProgramTest and FecProfiler.
133 The first application can test the ring functionality, the connection to all devices reporting
134 a list of detected hardware. Also the second application is able to retrieve a
135 list of hardware connected to CCUs, but its output is the fec.xml file needed by TrackerOnline.\\
136 By accessing the output of these two programs, the FecTool GUI enables the user to
137 test the functionality of a string, or of a whole ring. FecTool also checks that the found hardware
138 corresponds to what one expects to find in tracker's modules: for every $I^2C$ address
139 there should be either 4 or 6 APVs, one PLL, one AOH, and so on.\\
140 The user can also input the GeoId(s) of tested string(s) before starting the test. In this case
141 FecTool also checks that the DCU Hardware Id read from each module matches the one declared
142 in the construction database performing an important consistency check between what is
143 registered on the integration database and what is really present on the structure spotting
144 possible module registration error.\\
145 Only if this last test is passed, the user is allowed to create the fec.xml description file
146 needed to go forth with integration tests. Hence tests proceed with the data readout from
147 modules, which rely on TrackerOnline.
148 \paragraph*{The integration package}
149 The xDAQ version installed on the integration setups is very user-unfriendly, and requires
150 an expert user: many run-specific parameters are set manually and there is no
151 input validation.
94  
95 < \begin{figure}
96 < \centering
97 < \includegraphics[width=0.35\textwidth]{Figs/FedGuiMain.png}
98 < \caption{Main GUI window.}
99 < \label{fig:fedgui}
100 < \end{figure}
101 <
102 < A package interacting with TrackerOnline, capable
103 < of automatically setting all relevant parameters and performing all data collection at the end
104 < of a run has been written. This is a
105 < finite state machine which cycle through the needed states setting run-specific parameters
106 < in the xDAQ software.
107 <
108 < The state machine cycles through the following states:
95 > \subsubsection{Software}
96 > The software used to carry out integration tests is
97 > based on the CMS general data acquisition framework
98 > %{\em TrackerOnline}, the CMS
99 > %tracker implementation of a more general software,
100 > named xDAQ~\cite{ref:xdaq}, whose CMS tracker implementation is known as {\em TrackerOnline}.
101 > % which is the official CMS DAQ framework.
102 > In place of the database used in the
103 > experiment version, the integration version of TrackerOnline uses a set of
104 > xml files for all the configurations needed to perform a test run.
105 > A description of the xml configuration files follows.
106 > %, i.e. all the parameters needed by the devices and the
107 > %software involved in the test run,
108 > %\begin{figure}[bth!]
109 > %\centering
110 > %\includegraphics[width=\textwidth]{Figs/fecmodulexml_2.pdf}
111 > %\caption{An example of data contained in fec.xml and module.xml files for one module.
112 > %Part of the data is not shown for simplicity.}
113 > %\label{fig:fecmodulexml}
114 > %\end{figure}
115 > The hardware and software configuration of the DAQ is written into the file
116 > named daq.xml. It reflects the setup used for the test and rarely needs to be changed
117 > during the integration procedures. The system settings uploaded by the FEC, and
118 > read back for verification, are contained in fec.xml.
119 > The data decoding map (i.e., information needed to map each FED input to an
120 > APV25 pair of a specific module) is written into module.xml.
121 >
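The decoding map carried by module.xml can be pictured with the minimal sketch below, which turns FED readout coordinates into control-system coordinates. The element and attribute names, and all values, are invented for illustration; the note does not reproduce the real schema.

```python
# Sketch of the FED-channel -> module decoding map described for
# module.xml. Element/attribute names are hypothetical.
import xml.etree.ElementTree as ET

MODULE_XML = """
<connections>
  <connection fed="0" channel="0" ring="1" ccu="0x6e" i2c="0x20" apvpair="0"/>
  <connection fed="0" channel="1" ring="1" ccu="0x6e" i2c="0x20" apvpair="1"/>
  <connection fed="0" channel="2" ring="1" ccu="0x6f" i2c="0x24" apvpair="0"/>
</connections>
"""

def decoding_map(xml_text):
    """Map each (fed, channel) readout coordinate to the control-system
    coordinates (ring, CCU, I2C address) plus the APV25 pair index."""
    root = ET.fromstring(xml_text)
    return {
        (int(c.get("fed")), int(c.get("channel"))):
            (int(c.get("ring")), int(c.get("ccu"), 16),
             int(c.get("i2c"), 16), int(c.get("apvpair")))
        for c in root.iter("connection")
    }

mapping = decoding_map(MODULE_XML)
```

With such a map, data arriving on a given FED input can be re-indexed on a module basis, which is exactly the rearrangement the DAQ needs.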
122 > %\begin{description}
123 > %\item[Configuration of the DAQ hardware and software, daq.xml] The hardware
124 > %  and software configuration of the DAQ is written in a single xml file, which
125 > %  reflect the setup used for the test and has to be rarely changed
126 > %  during the integration procedures.
127 > %\item[Configuration of the control system, fec.xml] The most important
128 > %  part of this configuration section is the settings the FEC must
129 > %  upload to the modules and AOHs and in general any configurable
130 > %  $I^2C$ device, before starting the data
131 > %taking. Altough these settings are not specifically
132 > %  related to the control system, it is duty of the control system to
133 > %  write them to the devices' $I^2C$ registers and read them back for verification.
134 > %\item[Configuration of the readout, module.xml] All other information needed by the DAQ to
135 > %rearrange data coming from the FEDs is in module.xml that allows each
136 > %FEDs input channel to be mapped to an APV pair of a specific module as
137 > %identified by ring, CCU and $I^2C$ addresses (i.e. the correspondance
138 > %between readout coordinates and Control System coordinates);
139 > %Each row corresponds to an APV
140 > %pair, an AOH laser, an optical fibre and a FED input.\\
141 > %The second table of module.xml reassembles the information on a module basis.
142 > %Here all the active modules are listed, with a row for every module.
143 > %The Control System coordinates are repeated both in module.xml and fec.xml,
144 > %so that they can be used as a pivot between the 3 tables.\\
145 > %\end{description}
146 >
147 > The tasks performed by the integration software are the following:
148 > execution of the commissioning runs needed to optimally adjust the
149 > module parameters (preparation of fec.xml and module.xml); execution
150 > of the test runs with complete and automated logging; fast analysis
151 > for immediate feedback; archival of xml configuration files to log the
152 > test conditions; archival of the raw data.
153 > %\begin{figure}
154 > %\centering
155 > %\includegraphics[width=.9\textwidth]{Figs/integration_package.pdf}
156 > %\caption{Scheme of the integration software.
157 > %\textit{Arrows}: a relation.
158 > %\textit{Gears}: an application.
159 > %\textit{Mouse:} an interactive application.
160 > %\textit{Sheet}: a file.}
161 > %\label{fig:integration_package}
162 > %\end{figure}
163 > %Figure~\ref{fig:integration_package} shows a scheme of the relations between all the software
164 > %components that will be described, the local files and the remote database.
165 > This is achieved by a specific set of components of the integration
166 > data acquisition software, as summarized in the following.
167 > \begin{description}
168 > \item[FecTool.]
169 > FecTool is a GUI-based front-end to two standalone
170 > applications, FecProfiler
171 > and ProgramTest, aimed at easing the creation of the device
172 > description.
173 > FecProfiler is able to detect
174 > the devices connected to the CCUs and builds the fec.xml file needed by
175 > TrackerOnline. FecTool takes care of checking that the detected devices
176 > correspond to the expected ones, i.e., per module, 4 or 6 APV25s, one
177 > PLL, one AOH, and so on. ProgramTest allows the ring functionalities,
178 > i.e. the redundancy, to be tested in depth.
179 >
180 > The geographical identity of the strings under test must be
181 > entered to allow FecTool to verify the matching
182 > between the DCU Hardware Id read from each
183 > module and the one declared in the module database. This
184 > consistency check is crucial to spot possible errors in recording the
185 > location where a module was mounted during the assembly. If the check
186 > is passed, the fec.xml description file needed to proceed with the
187 > integration tests can be created.
188 > %Hence tests proceed with the data
189 > %readout from modules, which rely on .
190 > \item[The Integration Package.]
191 > %TrackerOnline as any xDAQ implementation requires an expert user as
192 > %many run-specific parameters must be set and there is no  input
193 > %validation.
194 > %\begin{figure}
195 > %\centering
196 > %\includegraphics[width=0.35\textwidth]{Figs/FedGuiMain.png}
197 > %\caption{Main GUI window.}
198 > %\label{fig:fedgui}
199 > %\end{figure}
200 > The integration setup is made more user-friendly by a special {\em
201 >  Integration Package}, a GUI-based front end that interacts with
202 >  the data acquisition program to automatically set all relevant parameters
203 >  and to harvest all data at the end
204 > of a run. The package is organized as a finite state machine by which
205 >  the user can cycle through the various states, i.e. the following integration test steps:
206   \begin{enumerate}
207 < \item TrackerOnline initialisation (only once)
208 < \item Ask for the desired run
209 < \item Launch run in TrackerOnline
210 < \item Wait for run completion (polling the number of acquired events)
211 < \item Stop data taking
212 < \item Launch fast data analysis package
213 < \item Ask user for data validation
214 < \item If acknowledged, pack all the data and log the run with proper data quality flag. If the run
215 < was a commissioning run, update fec.xml and module.xml
207 > \item DAQ initialisation (only once);
208 > \item choice of the desired run;
209 > \item execution of the run via the DAQ program;
210 > \item on run completion (i.e. after a given number of events), stopping of the
211 >  data taking and execution of the fast data analysis;
212 > \item presentation of the run outcome on summary GUIs;
213 > \item on positive validation from the user, storage of the data together
214 >  with the run logs and data quality flags;
215 > \item in case of commissioning runs, on positive validation from the
216 >  user, update of fec.xml and module.xml, which are then used from
217 >  that point on.
218   \end{enumerate}
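The run cycle above can be caricatured as a toy state machine. This is an illustration only: the real package drives TrackerOnline/xDAQ, and all state names and the callback signature here are invented.

```python
# Toy sketch of the Integration Package run cycle as a finite state
# machine. States and the "commissioning" flag are illustrative.

STATES = ["INIT", "CHOOSE_RUN", "RUNNING", "ANALYSIS", "VALIDATE", "ARCHIVE"]

def run_cycle(run_type, n_events, validated_by_user):
    """Walk one run through the states; return the visited states and
    whether the xml configuration would be updated (commissioning only)."""
    visited = ["INIT", "CHOOSE_RUN", "RUNNING"]   # INIT is done once in reality
    events = 0
    while events < n_events:                      # stand-in for event polling
        events += 1
    visited += ["ANALYSIS", "VALIDATE"]           # stop run, fast analysis, GUI
    update_xml = False
    if validated_by_user:
        visited.append("ARCHIVE")                 # data + logs + quality flag
        update_xml = (run_type == "commissioning")  # refresh fec.xml/module.xml
    return visited, update_xml

visited, update = run_cycle("commissioning", n_events=100, validated_by_user=True)
```

Note how archiving and configuration updates are gated on the user's validation, mirroring steps 6 and 7 of the list above.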
219  
220 < For every required run, the integration package shows the proper TrackerOnline output
221 < through some GUIs, so that the user can acknowledge the outcome of the run and give the approval
222 < for archiving the run data or, in case of a commissioning run, for updating the parameter set.\\
223 < The TrackerOnline software computes the new parameter set for each commissioning run
224 < (except for the ``find connection'' run, as we'll see below) and writes
225 < it locally as a fec.xml file, which is retrieved by the integration package at need.\\
226 < A main integration database, for centralized archiving purposes,
227 < was also installed. Software implementing data analysis runs automatically on the
228 < archived files producing validation outputs, accessible througth a web interface,
229 < for every tested module.
220 > %For every required run, the integration package shows the TrackerOnline output
221 > %through some GUIs, so that the user can acknowledge the outcome of the run and give the approval
222 > %for archiving the data or, in case of a commissioning run, for updating the parameter set.\\
223 > %The TrackerOnline software computes the new parameter set for each commissioning run
224 > %(except for the ``find connection'' run, as we'll see below) and writes
225 > %it locally as a fec.xml file, which is retrieved by the integration package at need.\\
226 > A main integration database has been set up for centralized archiving
227 > purposes. For later reference, if needed, the data analysis
228 > can also be run on the archived files, with validation outputs made
229 > available through a web interface.
230 > \end{description}
231  
232   \subsection{Test Description}
233   \label{ref:test-description}
192 \paragraph*{Find connections:}
193 This commissioning run is used to know which module is connected to which FED input channel.
194 This procedure consists of switching on all the lasers of all AOH one by one, while checking the
195 signal on all the FED inputs. If the difference of the signal seen on a FED channel is above
196 a given threshold, the connection between that laser and input channel is tagged and registered
197 in the module.xml as a connection table.
234  
235 < \begin{figure}
236 < \centering
201 < \includegraphics[width=0.27\textwidth]{Figs/FedGuiChannels.png}
202 < \caption{Connections displayed during the run}
203 < \label{fig:fedguichan}
204 < \end{figure}
205 <
206 < At this point of the commissioning procedures both the descriptions for FECs
207 < (i.e. tracker hardware connected to the FECs) and for FEDs are present.
208 <
209 < \paragraph*{Time alignment:}
210 < This step is used to compensate the different delays in the control and readout chain,
211 < i.e. the connections between FECs and FEEs.
212 < This also makes the APVs' sampling time synchronous provided that AOH
213 < fibres and ribbons are the same length. The latter is not important during integration
214 < quality checks, as no external signal is ever measured, but it becomes so when
215 < one tries to detect ionising particles.\\
216 < This run type is relevant because, if FEDs are to sample properly
217 < the APV signal, the clock must go to the modules synchronously, with a skew of the order of a few ns.
218 < Also the clock coming to the three FEDs must be synchronous but this is guaranteed
219 < by using cables of the same length between the clock-generating
220 < board (TSC) and the FEDs themselves.\\
221 < The time allignment run
222 < makes use of the periodic tick mark signal sent by the APVs when it is clocked:
223 < after these devices receive a reset, they produce a tick mark signal every 70 clock cycles.
224 < During this run the FEDs continously sample the signals at the full clock frequency.
225 < This means that for every DAQ cycle the output of all APVs is measured as with a 40~MSample scope.
226 < After every cycle is completed all the PLLs' delays are icreased by (25/24)~ns
227 < (the minimum delay step), and the signal readout is performed again. After 24 such cycles the full
228 < tick mark signal is measured as with a 960~MSample scope.
235 > Each run type that can be chosen by the user corresponds to a
236 > commissioning run or to a test run, as described below.
237  
238 + \begin{description}
239 + \item[Find connections.] This commissioning run is used to associate each
240 +  FED input channel to a module. The procedure, repeated in sequence
241 +  for all AOH laser drivers, consists of switching on only one
242 +  AOH laser driver at a time while checking the signal on all the FED inputs. If
243 +  the change of the signal seen on a FED channel is above
244 + a given threshold, the connection between that laser and input channel is tagged and stored
245 + in module.xml.
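The one-laser-at-a-time logic can be sketched as follows. The signal levels, the threshold value, and the `read_level` callback are all illustrative assumptions; only the thresholding of the on-minus-off difference comes from the procedure described above.

```python
# Sketch of the "find connections" scan: power one AOH laser at a time
# and tag the FED input whose signal rises above threshold.

THRESHOLD = 50.0  # minimum light-on minus light-off difference (toy ADC counts)

def find_connections(lasers, fed_inputs, read_level):
    """read_level(laser_on, fed_input) -> mean signal on that FED input
    while only `laser_on` is powered (None = all lasers off)."""
    connections = {}
    for laser in lasers:
        dark = {ch: read_level(None, ch) for ch in fed_inputs}
        for ch in fed_inputs:
            if read_level(laser, ch) - dark[ch] > THRESHOLD:
                connections[laser] = ch   # would become a module.xml entry
    return connections

# Toy setup: laser i happens to be cabled to FED input (2 - i).
cabling = {0: 2, 1: 1, 2: 0}
def read_level(laser_on, ch):
    return 300.0 if laser_on is not None and cabling[laser_on] == ch else 10.0

connections = find_connections([0, 1, 2], [0, 1, 2], read_level)
```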
246 + \item[Time alignment.]
247 + This commissioning run measures the appropriate delays to be later set
248 + in the PLL delay registers.
249 + In this way the different delays in the control and
250 + readout chain are compensated, the clock arrives at the modules
251 + synchronously, with a skew of the order of a few ns, and the APV25
252 + signals are properly sampled by the FEDs. This also requires the clock
253 + at all the FEDs to be synchronous, which is guaranteed
254 + by using cables of equal length between the TSC and the FEDs.\\
255 + The time alignment run uses the periodic tick mark signal issued by
256 + the idle APV25s every 70 clock cycles. The APV25 signals are sampled by FEDs in
257 + scope mode, i.e. without waiting for a header but continuously,
258 + sampling the inputs at the full clock frequency as with a 40~MSample/s
259 + scope. The measurement is repeated after all the PLL delays are
260 + increased by the minimum delay step, (25/24)~ns. After 24 such cycles the
261 + idle APV25 output, and thus also the tick mark signal, is measured as with
262 + an effective 960~MSample/s scope.
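The interleaving trick just described (24 coarse scans, each shifted by one PLL fine step, merged into one fine trace) can be sketched numerically. The tick waveform below is a toy analytic pulse; only the 25~ns clock period, the 24 delay steps, and the resulting 25/24~ns resolution come from the text.

```python
# Sketch of the time-alignment sampling: 40 MHz sampling repeated for
# each of the 24 PLL delay steps yields an effective 960 MSample/s trace.

CLOCK_NS = 25.0
STEPS = 24                      # PLL fine-delay steps per clock period
STEP_NS = CLOCK_NS / STEPS      # 25/24 ns

def tick_waveform(t_ns, edge_ns=80.0):
    """Toy tick mark: baseline 0, high 1 for 25 ns starting at edge_ns."""
    return 1.0 if edge_ns <= t_ns < edge_ns + CLOCK_NS else 0.0

def fine_scan(n_clocks=8):
    """Interleave 24 coarse scans (one per PLL delay step) into a single
    fine trace: index k corresponds to time k * (25/24) ns."""
    trace = [0.0] * (n_clocks * STEPS)
    for step in range(STEPS):                 # one run per PLL delay setting
        for clk in range(n_clocks):           # 40 MHz sampling within a run
            t = clk * CLOCK_NS + step * STEP_NS
            trace[clk * STEPS + step] = tick_waveform(t)
    return trace

trace = fine_scan()
# Rising edge located at the fine-sample resolution:
edge_index = next(i for i, v in enumerate(trace) if v > 0.5)
edge_time_ns = edge_index * STEP_NS
```

The edge is recovered to within one fine step (about 1~ns), which is the resolution the procedure needs to equalise the PLL delays.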
263   \begin{figure}
264   \centering
265   \includegraphics[width=.6\textwidth]{Figs/tickmark.pdf}
# Line 235 | Line 268 | are marked. In the picture are reported
268   during the time alignment an interval of $1\,\mu\mathrm{s}$ is scanned.}
269   \label{fig:tick}
270   \end{figure}
271 < The DAQ takes the time delay between tick marks as a measurement of the difference in delays in the
272 < FEC-FEE connections and computes what delay must be set on each PLL in order to compensate this.
271 > The time differences between the various APV25 tick marks are a
272 > measurement of the relative delays introduced by the connections and
273 > can be used to compute the optimal delay to be set on each PLL for compensation.
274   The tick mark rising edge time $t_R$ is measured by taking the time corresponding to the highest
275   signal derivative (see Fig.~\ref{fig:tick}).
276   The best sampling point is taken to be $t_R+15\,\mathrm{ns}$, to avoid
277 < the possible overshoot. This is important also later, when measuring the analogue data frame, as
278 < it allows measuring the signal coming from each strip after transient effects due to the signal
279 < switching between strips are over.\\
280 < At the end of this procedure, the user is shown all proposed adjustments to PLL delay values.
281 < Then he can accept the outcome of the first time alignment run and, possibly,
282 < repeat it. If the time allignment has been done correctly maximum variation of two or less
283 < nanoseconds will be found. The delays are written in the correspoding xml file.
284 <
285 < \paragraph*{Laser Scan:}
286 < This run makes a scan of all AOH bias values for the four possible gain settings
287 < to determine the optimal gain and the corresponding optimal bias (see Fig.~\ref{fig:laserscan}).
288 < In this run the APV generated tick marks are sampled (a correctly done
289 < time allignment has to be done before) for all gain and bias values.
290 <
277 > the possible overshoot.
278 > %[???This is important also later, when measuring the analogue data frame, as
279 > %it allows measuring the signal coming from each strip after transient effects due to the signal
280 > %switching between strips are over.???]\\
281 > At the end of the procedure, all proposed adjustments to the PLL delay
282 > values are shown to the user. If accepted, the delays are written to
283 > fec.xml. If the setup is correctly aligned in time, a further time
284 > alignment procedure should not propose delay corrections greater than
285 > $\sim 2$~ns.
286 > %It is worth noticing that by the time alignment procedure all APVs are
287 > %made sampling synchronously, since in the integration setup AOH fibres
288 > %and ribbons are all equal in length. This is not important during
289 > %integration quality checks, as no external signal is ever measured,
290 > %but it would be so in trying to detect ionising particles.
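The edge measurement itself (maximum discrete derivative, then sampling at $t_R+15$~ns) can be sketched on a toy trace. The waveform shape and the fine-step constant are illustrative; the max-derivative criterion and the 15~ns offset are the ones stated above.

```python
# Sketch of the tick-mark edge measurement: t_R at the largest
# sample-to-sample derivative of the fine trace, sampling at t_R + 15 ns.

STEP_NS = 25.0 / 24.0   # fine-scan time resolution

def rising_edge_time(trace):
    """Time (ns) of the highest discrete derivative of the trace."""
    derivs = [trace[i + 1] - trace[i] for i in range(len(trace) - 1)]
    return derivs.index(max(derivs)) * STEP_NS

def best_sampling_time(trace, offset_ns=15.0):
    """Sampling point placed past the overshoot, per the note."""
    return rising_edge_time(trace) + offset_ns

# Toy trace: baseline, a sharp edge at sample 48, small overshoot, plateau.
trace = [0.0] * 48 + [0.2, 0.9, 1.15, 1.05] + [1.0] * 20
t_r = rising_edge_time(trace)
t_sample = best_sampling_time(trace)
```

Sampling 15~ns after the edge lands on the settled plateau rather than on the overshoot, which is the point of the offset.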
291 > \item[Laser Scan.] In this commissioning run a scan over all bias and
292 >  gain value pairs is done to determine the optimal working point for
293 >  each AOH.
294 > The procedure requires a successful time alignment since tick marks are
295 > again sampled, in this case while changing the gain and bias values.
296 > %
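The working-point choice described for this run (the gain whose response slope is closest to a target, and the bias maximising the unsaturated tick-mark height) can be sketched as below. The AOH response model, the ADC limit, the target slope, and all numbers are purely illustrative assumptions.

```python
# Sketch of the laser-scan working-point selection. The response model
# and all constants are toy assumptions.

ADC_MAX = 1023          # illustrative FED ADC saturation level
TARGET_SLOPE = 8.0      # illustrative optimal overall gain (counts/bias unit)

def aoh_response(gain, bias):
    """Toy response: tick baseline and top (ADC counts) vs AOH bias,
    clipped at the ADC limits; slope grows with the gain setting."""
    slope = 4.0 * (gain + 1)                  # 4 gain settings: 0..3
    raw_base = slope * (bias - 10)            # below bias ~10 the laser is off
    raw_top = raw_base + 60.0 * (gain + 1)    # tick height before clipping
    clip = lambda x: min(ADC_MAX, max(0.0, x))
    return clip(raw_base), clip(raw_top)

def working_point(biases=range(0, 64)):
    best = None
    for gain in range(4):
        # slope of the curves in their central section -> overall gain
        b1, b2 = 20, 30
        slope = (aoh_response(gain, b2)[0] - aoh_response(gain, b1)[0]) / (b2 - b1)
        # best bias: largest tick mark that stays below saturation
        candidates = [b for b in biases if aoh_response(gain, b)[1] < ADC_MAX]
        bias = max(candidates, key=lambda b: aoh_response(gain, b)[1]
                                             - aoh_response(gain, b)[0])
        score = abs(slope - TARGET_SLOPE)
        if best is None or score < best[0]:
            best = (score, gain, bias)
    return best[1], best[2]

gain, bias = working_point()
```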
297   \begin{figure}[t!]
298   \begin{center}
259 \subfigure[A pictorial representation of a tick mark as produced by the APVs (dotted)
260 and as transmitted by the lasers (solid) when the laser driver's bias is too
261 low (left), correct (centre) or too high (right), with the subsequent signal saturation.]
262 {
263        \label{fig:laserscan}
264        \includegraphics[width=.9\textwidth]{Figs/laserscan.pdf}
265 }
266 \\
299   \subfigure[The sampled tick mark top and baseline as a function of the laser driver's bias.]
300   {
301          \label{fig:gainscan_basetop}
# Line 275 | Line 307 | low (left), correct (centre) or too high
307          \label{fig:gainscan_range}
308          \includegraphics[width=.45\textwidth]{Figs/gainscan_range.pdf}
309   }
310 + \subfigure[A pictorial representation of a tick mark as produced by the APV25s (dotted)
311 + and as transmitted by the lasers (solid) when the laser driver's bias is too
312 + low (left), correct (centre) or too high (right), with the subsequent signal saturation.]
313 + {
314 +        \label{fig:laserscan}
315 +        \includegraphics[width=.9\textwidth]{Figs/laserscan.pdf}
316 + }
317   \caption{Plots computed during a gain scan run.}
318   \end{center}
319   \end{figure}
%
For each trigger sent to the FEDs, the tick mark is sampled twice and all other samplings fall
on the baseline. The tick mark's top samples are used to estimate the upper limit of the signal
for a given gain/bias setting pair and the baseline samples provide an estimate of the lower limit.
For each gain setting the tick mark top samples and the baseline samples are measured as a
function of the bias, as shown in Fig.~\ref{fig:gainscan_basetop}. For
each bias setting their difference is the tick mark height, shown in Fig.~\ref{fig:gainscan_range},
which represents the dynamic range as a function of the bias.\\
The best bias setting for a given gain setting is the one maximising the tick mark height
while keeping the maximum and minimum unsaturated, as pictorially represented in Fig.~\ref{fig:laserscan}.
The same measurement is done for each possible gain setting.
The best laser gain setting is the one providing an overall gain of
the optical chain as close as possible to the design value, 0.8~\cite{ref:gain}.
The overall gain is estimated from the slope of the curves of
Fig.~\ref{fig:gainscan_basetop} in their central section.
After the run is completed a set of values is proposed to the
user. Abnormal gain values may indicate a problem either on the AOH or
on the fibre and are investigated.
\item[VPSP Scan.]
This commissioning run is devoted to optimising the pedestal of
the APV25, i.e. the average output level in the absence of any signal, with
respect to the dynamic range of the FEDs. This level is managed by a
specific APV25 register, known as {\em VPSP}, which controls a voltage
setting within the deconvolution circuitry. The procedure consists of
a scan of VPSP values while acquiring data frames from the modules in the
standard way.
%%, i.e. trigger sent to the modules and FEDs in
%%``header finding'' mode.
In the final tracker operation the optimal VPSP setting corresponds to a pedestal baseline placed
around 1/3 of the available dynamic range. This is a good compromise to keep
the baseline not too close to the lower saturation value while leaving a good
range for particle signals, corresponding to $\sim 6$ MIP.
During the integration tests the only important point is to keep the baseline
away from the saturation levels; in this way the module noise measurements are
not affected by the incorrect common mode subtraction that occurs in
events with baseline saturation.
At the end of the run a set of values is proposed to the user for
approval and, if approved, written to the relevant xml file.
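The baseline placement rule can be sketched as follows; the 10-bit FED range and all names are illustrative assumptions, not the actual control software.

```python
# Hypothetical sketch of the VPSP selection rule: among the scanned VPSP
# settings, keep the one whose average pedestal sits closest to 1/3 of
# the FED dynamic range (10-bit ADC assumed).
FED_RANGE = 1024
TARGET = FED_RANGE / 3.0   # baseline target: 1/3 of the dynamic range

def best_vpsp(scan):
    """scan: dict {vpsp_setting: average pedestal in ADC counts}."""
    return min(scan, key=lambda v: abs(scan[v] - TARGET))
```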
\begin{figure}
\begin{center}
%\subfigure[Strip pedestals of a module in ADC counts vs.\ strip number.]
\subfigure[]
{
       \label{fig:saturationpedestal}
       \includegraphics[width=.45\textwidth]{Figs/saturation_pedestal.pdf}
}
\hspace{5mm}
%\subfigure[Strip noise of a module in ADC counts vs.\ strip number.]
\subfigure[]
{
       \label{fig:saturationnoise}
       \includegraphics[width=.45\textwidth]{Figs/saturation_noise.pdf}
}
\caption{Pedestal (left) and noise (right) vs.\ strip number for a 6-APV25 module.
The pedestals of the strips after strip \#{}640 are low, approaching the bottom of the
dynamic range. Their noise is therefore altered with respect to the unsaturated
channels.}
\label{fig:saturation}
\end{center}
\end{figure}
The VPSP scan is not systematically performed during the integration,
since the default VPSP setting is adequate in most
cases. Nevertheless, the optimal VPSP values change considerably within
the APV25 population and are strongly temperature dependent, and it is
rather common to have a situation in which the pedestals of a few readout
channels approach the lower edge of the dynamic range
(Fig.~\ref{fig:saturationpedestal}), resulting in a lower RMS (see
Fig.~\ref{fig:saturationnoise}). The VPSP scan allows this issue to be
fixed.
\item[Pedestal and Noise Run.]
This is the main run type for qualifying the performance during
the integration. Typically a bias of 400V is applied to the modules
under test to check for any possible overcurrent or breakdown.\\
Triggers are sent to the modules and the FEDs work in ``header finding'' mode.
All the analogue frames from the modules are collected and two analyses
are performed on these data: one online, by the TrackerOnline
software; one offline, in a way very similar to the final experiment algorithms.
%by using algorithms of the ORCA package~\cite{bib:orca}, the CMS
%reconstruction package at that time, now replaced by CMSSW.
The average value of the signal read on each strip is an estimate of
its pedestal, while the RMS is a good estimate of its noise, provided that the noise itself
is Gaussian, which is true to a first approximation. This value is often referred to as
\textit{raw noise}, as opposed to the \textit{common-mode subtracted
 noise} (or CMN). The latter is the RMS computed after having
subtracted the {\em common mode noise}, i.e. the correlated noise-like fluctuation
common to a given group of channels (typically an entire APV25).
The common mode noise subtraction method implemented in
TrackerOnline is similar to that performed by the final FEDs.\\
Because of the difference in gain between the various
optical links, noise comparison between different APV25 pairs requires a
normalisation. This procedure relies on the digital
headers, whose amplitude, being the same on each APV25, is used to
estimate the relative gain of the optical links so as to apply an
appropriate correction. In this way noise and gain are
simultaneously measured, provided that the signal is not saturated at either low or high values.
The normalisation factor is chosen so as to bring the normalised
header height to 220 ADC counts, as measured in the quality controls
during the module production. This allows the scaled measurements to be easily
compared with those done during the module production tests~\cite{ref:modtest}.\\
At the end of the run, pedestal and noise profiles and distributions
are shown to the user.
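The three quantities discussed above can be sketched as follows. This is a minimal illustration under assumed data shapes (one list of pedestal-subtracted ADC samples per event); the median-based common-mode estimate and all names are assumptions, not the actual TrackerOnline or FED code.

```python
import statistics

HEADER_REF = 220.0   # reference digital header height in ADC counts

def raw_noise(events, strip):
    """RMS of the signal on one strip over many pedestal-subtracted events."""
    return statistics.pstdev(ev[strip] for ev in events)

def cmn_noise(events, strip, group=128):
    """RMS after subtracting, event by event, the median of the strip's
    APV25 group of 128 channels (a stand-in for the FED algorithm)."""
    lo = (strip // group) * group
    residuals = [ev[strip] - statistics.median(ev[lo:lo + group])
                 for ev in events]
    return statistics.pstdev(residuals)

def normalise(noise, header_height):
    """Rescale a noise figure to the reference optical-link gain."""
    return noise * HEADER_REF / header_height
```

A strip whose fluctuations are purely common to its APV25 group has a sizeable raw noise but a vanishing CMN, which is the behaviour the subtraction is meant to expose.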
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\textwidth]{Figs/noiseprofile.pdf}
\caption{Example of the noise output for a module: the x axis represents the strip number
and the y axis the noise in ADC counts (raw, CMN and
normalised noise).}
\label{fig:noiseprofile}
\end{figure}
Figure~\ref{fig:noiseprofile} shows an example of the noise output:
the normalised raw noise and CMN and the uncalibrated CMN
for each strip are plotted against the strip index. The first 256 strips belong to the
first APV25 pair and are multiplexed to a single optical line, while the strips from 257 to 512
belong to the second APV25 pair. It can be noted here that the
raw noise without normalisation reflects the different gain of the
optical links; this is corrected by the normalisation procedure.\\
If validated by the user, the data are packed along with
possible comments and sent to the central archive system, where they are processed
again and made available on a web page.
%The system also automatically recognises where
%the modules were mounted by checking their DCU Hardware Id (which is written in the fec.xml file)
%in the construction database, this information is stored in the test table of
%the integration database and allows to build a geographical table of mounted modules with
%a link to a page containing all the tests performed for each module.\\
\end{description}

%%%%%%% addition C.G.
447 < \subsubsection{Single Module}
448 < After a module of a string was mounted, its basic functionalities were tested.
449 < A fast test on the I$^2$C communication permitted to spot possible electrical problems
450 < in the module front-end electronics in the AOH or in the mother cable. Since  the safe removal of a MC
451 < from the shell requires the dismounting of al the modules of a string, it is very important
452 < to perform this test of as soon as the first module of a string is integrated. After the
453 < I$^2$C test FecTool checks the identity of the components against the construction database.
454 < The results of the tests can then be monitored through a web page.
455 <
456 < \subsubsection{String}
457 < When the third and last module of a string is mounted the commissioning runs described in section~\ref{ref:test-description} are performed after the I$^2$C communication tests. The first run, ``Find connections'' permitted to check the full functionality of the AOH. Since the AOHs can be tested only after the modules are mounted this is the first test which can spot possibly broken fibres. It was necessary to perform this test after the integration of each string, because the subtitution of an AOH implies the dismounting of all the modules of the string mounted between the AOH and the front flange.\\
458 < After this test a ``Time Alignement'' run, a  ``Laser scan'' run and finally a ``Pedestal and Noise'' run with HV on were performed. The ``Laser scan'' run was done limiting the laser gain to the lower value which was found to be optimal for all the AOHs in the integration setup.
459 <
460 < \subsubsection{Control Ring and Redundancy}
461 < The control electronics can be fully tested only after complete assembly of a shell. This last test forsees a check of the correct operation of the ring and of the redundancy circuitry. A failure on a CCU or a control cable can be immediatly spotted as it causes the interruption of the ring; the communication with the other components is then checked using ProgramTest.\\
462 < Finally the test of redundancy circuitry is performed by-passing each CCU one by one and verifying the correct response of the ring. The test is successful only if both the primary and secondary circuits of the DOHM are working correctly and if the CCUs are connected in the right order to the DOHM ports.
The basic run types and tests described above are appropriately
combined according to the device and/or the group of devices under test.
\begin{description}
\item[Single Module.] After a module is mounted, its basic
 functionalities can be tested. In particular, a fast test of the
 I$^2$C communication permits possible
electrical problems to be spotted in the module front-end electronics,
in the AOH or in the mother cable. For the mother cable and the AOH this is of
 particular importance: an AOH can be practically tested only
 after the corresponding modules are mounted and this is the first
 test which can spot possible broken fibres; similarly for the MC, it
 is very important to perform the test as soon as possible. In
 fact, a safe removal and replacement of either an AOH or the MC is a
 difficult intervention, possibly requiring the dismounting of many
 modules already put in place. \\
 During the
I$^2$C test FecTool checks the identity of the components against
the construction database and an alarm is issued in case of mismatch.
The results of the tests can then be
monitored through a web page.
\item[String.]
When the third and last module or double sided assembly is mounted on
a string, all the commissioning runs described above are performed
just after the I$^2$C communication tests. The ``Find
connections'' run allows the accessibility of the devices to be checked.
Afterwards a ``Time Alignment'' run, a ``Laser Scan'' run and
finally a ``Pedestal and Noise'' run with bias voltage at 400V are performed. The
``Laser Scan'' run is done limiting the laser gain to the lowest value
which is found to be sufficient for the needs of the integration setup.
\item[Control Ring and Redundancy.]
The control electronics can be fully tested only after the complete
assembly of a shell. This last test foresees a check of the correct
operation of the ring and of the redundancy circuitry. A failure of a
CCU or of a control cable can be immediately spotted as it causes the
interruption of the ring; the communication with the other components
is then checked using ProgramTest. Finally the test of the redundancy
circuitry is performed by bypassing each CCU one by one and verifying
the correct response of the ring. The test is successful only if both
the primary and secondary circuits of the DOHM are working correctly
and if the CCUs are connected in the right order to the DOHM ports.
%%%%%%%% end of addition C.G.
\end{description}
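The redundancy test loop described above can be sketched schematically. The `bypass` interaction is abstracted into a hypothetical `ring_ok` callable standing in for the actual FEC control software; nothing here reflects the real API.

```python
# Schematic sketch of the redundancy test: bypass each CCU in turn and
# require that the control ring stays closed every time.
def redundancy_test(ccus, ring_ok):
    """ccus: ordered list of CCU ids, as connected to the DOHM ports.
    ring_ok(skipped): hypothetical callable returning True if the ring
    closes with `skipped` bypassed (None = nominal ring, no bypass).
    Return the list of CCUs whose bypass breaks the ring (empty = pass)."""
    if not ring_ok(None):          # the nominal ring must work first
        return list(ccus)
    return [ccu for ccu in ccus if not ring_ok(ccu)]
```

An empty result means both DOHM circuits work and the CCUs respond in the expected order; any listed CCU points at a faulty bypass path or a wiring-order mistake.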
489  
\section{Safety of operations}
The integration procedure posed many possible problems in the safety of operations,
Even in the most stressful conditions and after a few minutes of settling,
the highest measured temperature on the lasers was $\sim 40^\circ \mathrm{C}$
(while it was $48^\circ \mathrm{C}$ on the hybrid).
These are safe temperatures~\cite{ref:lasertemp},
thus the integration tests could be performed without problems.
