
Comparing UserCode/TIBTIDNotes/TIBTIDIntNote/IntegrationTests.tex (file contents):
Revision 1.1 by sguazz, Tue Jan 20 11:13:35 2009 UTC vs.
Revision 1.2 by sguazz, Mon Mar 9 15:41:05 2009 UTC

# Line 5 | Line 5 | The CMS tracker has been designed to sur
5   for maintenance. Its various components have been tested during the production
6   to meet stringent quality requirements. A few important problems have been spotted and
7   solved.\\
8 < During the TIB integration all the operations have been monitored step by step by a chain of tests
9 < aimed at a final control of the components just after the installation and at a verification of the
10 < shell overall quality and functionality. The step by step tests are of particular importance
8 > During the TIB/TID integration all the operations have been monitored step by step by a chain of tests
9 > aimed at a final check of the components just after installation, described here, and at a verification of the
10 > overall shell quality and functionality (Sec.~\ref{burnin}). The step-by-step tests are of particular importance
11   because in most cases it is very difficult and in some cases even dangerous
12   to replace a single faulty component when it is embedded in a fully equipped shell.
13  
14   \subsection{Test Setup}
15   \label{sec:TestSetup}
16 < The integration activities started well before the tracker
16 > The integration started well before the tracker
17   final data acquisition hardware and software
18 < were available to the Collaboration, and thus had to rely on prototype peripherals
19 < and a developing software freezing at the integration start up time.
20 < Several further upgrades were actually implemented in time, but they all relied on the
21 < same version of the underlying framework.
18 > were available to the Collaboration. Integration tests thus had to
19 > rely on prototype DAQ hardware and peripherals and on software versions
20 > that were frozen to ensure consistent conditions over the
21 > time span of the integration activities, except for a few minor upgrades and bug
22 > fixes.
23  
24   \subsubsection{Hardware}
25   Here a brief account of the hardware used in the DAQ for the integration
# Line 40 | Line 41 | is given (see Fig.~\ref{fig:integration
41   \label{fig:integration daq}
42   \end{figure}
43  
44 < The A/D conversion is managed by FEDs~\cite{bib:fedpci},
44 > \begin{description}
45 > \item[TSC] The Trigger Sequencer Card, or TSC~\cite{bib:specs:tsc}, generates the
46 > 40~MHz clock for the entire system as well as the triggers, either
47 > internally via software or by accepting external inputs. It has up to four
48 > electrical clock/trigger outputs, enough to drive the FEDs, and an optical clock/trigger
49 > output for the FEC.
50 > The TSC may also generate the RESET and CALIBRATION signals, which are
51 > encoded on the clock/trigger line as well.\\
52 > \item[FED] The analog-to-digital conversion is done by special PCI FEDs~\cite{bib:fedpci},
53   with electrical differential analog input, mounted on
54   PCI carrier boards and installed in an industrial PC.
55   The opto-electrical conversion of the analog signals is done externally by a 24-channel unit.
56   A setup containing 3 FEDs, with the opto-electrical converter, is able
57 < to readout 48 APV; this is equivalent to 12 single sided modules (4 complete strings)
58 < or 4 double sided modules (1 string plus 1 module). This is more than what is needed to
59 < test the strings during the module installation.\\
60 < The trigger and clock signals
61 < were provided by the Trigger Sequencer Card (or TSC,~\cite{bib:specs:tsc}). It has up to four
62 < electrical clock/trigger outputs, which were enough to drive the FEDs, and an optical clock/trigger
63 < output for the FEC.
64 < This card is used to generate a 40~MHz clock and provide it to the system and also to generate
65 < triggers, either internally via software or by accepting external inputs.
66 < The TSC may also generate the RESET and CALIBRATION signal, by coding them properly on the
67 < clock/trigger line.\\
68 < The FEC mezzanine used during the integration
69 < was laid on a PCI carrier, supporting trigger/clock optical input as fed by the TSC. Its
70 < output was also optical, and could be directly connected to DOHs on the DOHM. \\
71 < The peculiarity of a PCI FED with respect to a final VME FED, other than having electrical inputs
72 < instead of optical ones, is its timing system: a final FED is able to recognise which
73 < conversion count
74 < corresponds to a frame header coming from a module on each input channel separately,
75 < while a PCI FED has this capability (\textit{header finding}) implemented only on the first channel
76 < out of the eight available and assumes that the signals coming from the modules are synchronised.
77 < This makes the PLL-based time alignment procedure even more important in a setup with PCI FEDs.
78 < Furthermore, PCI FEDs are not able to insert a programmable delay on their inputs and thus
79 < it is important that the clock/trigger connections from the TSC to FEDs have all the same delay.
80 < Last, PCI FEDs cannot perform an online pedestal subtraction and zero suppression (they cannot
81 < run in Processed Raw data nor in Zero Suppression mode).
82 <
83 < \subsubsection{Software}
84 < The software used to carry out integration tests was essentially the CMS
85 < tracker implementation of a more general software, named xDAQ, which is the official CMS
86 < DAQ framework. This implementation is known as TrackerOnline.\\
78 < The present version of TrackerOnline makes use of a set of
79 < xml files which store the parameters needed by all the devices present on the structure
80 < to be tested. In the final implementation of the software the use of a database is foreseen.
81 <
82 < \begin{figure}[bth!]
83 < \centering
84 < \includegraphics[width=\textwidth]{Figs/fecmodulexml_2.pdf}
85 < \caption{An example of data contained in fec.xml and module.xml files for one module.
86 < Part of the data is not shown for simplicity.}
87 < \label{fig:fecmodulexml}
88 < \end{figure}
89 < The information on a given setup can be divided in two sections: one describing the
90 < readout hardware (and corresponding software) setup and another describing which part of
91 < the tracker is going to be connected to the Control System and to FEDs.\\
92 < The hardware/software setup is written in a single xml file, which was prepared once
93 < and for all at the start of the integration.
94 < The information regarding the readout tracker section
95 < is stored in two more files, commonly named fec.xml and module.xml. The first
96 < contains all data which the FEC will need to download to modules before starting the data
97 < taking, that is all the values to be written in devices' $I^2C$ registers.
98 < The second contains all other
99 < information needed by the DAQ setup to rearrange data coming from the FEDs: from an input channel
100 < based indexing to a module based one (see Fig.~\ref{fig:fecmodulexml}).\\
101 < The module.xml file contains two tables: the first joins FEDs input channel indexes
102 < with the respective modules' ring, CCU and $I^2C$ address (i.e.\ readout coordinates
103 < with Control System coordinates). Each row corresponds to an APV
104 < pair, an AOH laser, an optical fibre and a FED input.\\
105 < The second table of module.xml reassembles the information on a module basis.
106 < Here all the active modules are listed, with a row for every module.
107 < The Control System coordinates are repeated both in module.xml and fec.xml,
108 < so that they can be used as a pivot between the 3 tables.\\
109 < It is clear that these files have to be archived along with raw data taken during a run, and
110 < to do it automatically is the first task of an integration validation software.
111 < Another required task is an automatic run logging; a fast data analysis is also
112 < desirable.
113 < Last, but not least, this software should easily allow the user to perform all preliminary
114 < (commissioning) runs, adjusting modules' parameters accordingly.
115 <
116 < \begin{figure}
117 < \centering
118 < \includegraphics[width=.9\textwidth]{Figs/integration_package.pdf}
119 < \caption{Scheme of the integration software.
120 < \textit{Arrows}: a relation.
121 < \textit{Gears}: an application.
122 < \textit{Mouse:} an interactive application.
123 < \textit{Sheet}: a file.}
124 < \label{fig:integration_package}
125 < \end{figure}
126 <
127 < Figure~\ref{fig:integration_package} shows a scheme of the relations between all the software
128 < components that will be described, the local files and the remote database.
129 < \paragraph*{FecTool}
130 < FecTool is a front-end Graphics User Interface (GUI) aimed to ease the creation of the device
131 < description, i.e.\ fec.xml. This program is used to launch
132 < two standalone applications deployed along with TrackerOnline: ProgramTest and FecProfiler.
133 < The first application can test the ring functionality, the connection to all devices reporting
134 < a list of detected hardware. Also the second application is able to retrieve a
135 < list of hardware connected to CCUs, but its output is the fec.xml file needed by TrackerOnline.\\
136 < By accessing the output of these two programs, the FecTool GUI enables the user to
137 < test the functionality of a string, or of a whole ring. FecTool also checks that the found hardware
138 < corresponds to what one expects to find in tracker's modules: for every $I^2C$ address
139 < there should be either 4 or 6 APVs, one PLL, one AOH, and so on.\\
140 < The user can also input the GeoId(s) of tested string(s) before starting the test. In this case
141 < FecTool also checks that the DCU Hardware Id read from each module matches the one declared
142 < in the construction database performing an important consistency check between what is
143 < registered on the integration database and what is really present on the structure spotting
144 < possible module registration error.\\
145 < Only if this last test is passed, the user is allowed to create the fec.xml description file
146 < needed to go forth with integration tests. Hence tests proceed with the data readout from
147 < modules, which rely on TrackerOnline.
148 < \paragraph*{The integration package}
149 < The xDAQ version installed on the integration setups is very user-unfriendly, and requires
150 < an expert user: many run-specific parameters are set manually and there is no
151 < input validation.
152 <
153 < \begin{figure}
154 < \centering
155 < \includegraphics[width=0.35\textwidth]{Figs/FedGuiMain.png}
156 < \caption{Main GUI window.}
157 < \label{fig:fedgui}
158 < \end{figure}
57 > to read out 48 APVs; this is equivalent to 12 single-sided modules
58 > %(four complete strings)
59 > or four double-sided module assemblies.
60 > %(one string plus one module).
61 > These figures are
62 > perfectly suited for the tests during the integration.\\
63 > For obvious reasons the readout of the data from the APVs is not
64 > synchronous with the L1 trigger. A crucial capability of the FED is the
65 > \textit{header finding}, i.e. the automatic tagging of the
66 > analog data stream from APV pairs with respect to the idle
67 > signals at its inputs. This is possible since the APV embeds the
68 > analogue data stream within a {\em digital frame} made up of a leading
69 > digital header and a trailing tick mark. The peculiarity of the PCI FED
70 > with respect to the VME FED that will be used in the experiment, other than having electrical inputs
71 > instead of optical ones, is the timing system: the VME FED is able to
72 > perform the header finding on each input channel independently; the PCI FED has
73 > this capability only on the first channel of the eight available
74 > and assumes that all the input signals are synchronised.
75 > This makes the PLL-based time alignment procedure of crucial
76 > importance in a setup with PCI FEDs. Furthermore, PCI FEDs are not
77 > able to insert a programmable delay on their inputs and thus
78 > it is important that the clock/trigger connections from the TSC to
79 > FEDs have all the same delay. Last, PCI FEDs cannot perform an online
80 > pedestal subtraction and zero suppression (they cannot run in
81 > Processed Raw data nor in Zero Suppression mode).
82 > \item[FEC] The special FEC used during the integration is the {\em FEC
83 >  mezzanine}, also installed in a PC on a PCI carrier. It supports
84 >  the optical trigger/clock provided by the TSC and features an
85 >  optical output directly connected to DOHs on the DOHM.
86 > \end{description}
87  
160 A package interacting with TrackerOnline, capable
161 of automatically setting all relevant parameters and performing all data collection at the end
162 of a run has been written. This is a
163 finite state machine which cycle through the needed states setting run-specific parameters
164 in the xDAQ software.
88  
89 < The state machine cycles through the following states:
89 > \subsubsection{Software}
90 > The software used to carry out integration tests is {\em TrackerOnline}, the CMS
91 > tracker implementation of a more general software package,
92 > xDAQ~\cite{ref:xdaq}, the official CMS DAQ framework.
93 > In place of the database used in the
94 > experiment version, the integration version of TrackerOnline uses a set of
95 > xml files for all the configurations needed to perform a test run.
96 > %, i.e. all the parameters needed by the devices and the
97 > %software involved in the test run,
98 > %\begin{figure}[bth!]
99 > %\centering
100 > %\includegraphics[width=\textwidth]{Figs/fecmodulexml_2.pdf}
101 > %\caption{An example of data contained in fec.xml and module.xml files for one module.
102 > %Part of the data is not shown for simplicity.}
103 > %\label{fig:fecmodulexml}
104 > %\end{figure}
105 > \begin{description}
106 > \item[Configuration of the DAQ hardware and software, daq.xml] The hardware
107 >  and software configuration of the DAQ is written in a single xml file, which
108 >  reflects the setup used for the test and rarely needs to be changed
109 >  during the integration procedures.
110 > \item[Configuration of the control system, fec.xml] The most important
111 >  part of this configuration section is the set of settings that the FEC must
112 >  upload to the modules, the AOHs and in general to any configurable
113 >  $I^2C$ device before starting the data
114 > taking. Although these settings are not specifically
115 >  related to the control system, it is the duty of the control system to
116 >  write them to the devices' $I^2C$ registers and read them back for verification.
117 > \item[Configuration of the readout, module.xml] All other information needed by the DAQ to
118 > rearrange data coming from the FEDs is in module.xml, which allows each
119 > FED input channel to be mapped to an APV pair of a specific module as
120 > identified by ring, CCU and $I^2C$ addresses (i.e. the correspondence
121 > between readout coordinates and Control System coordinates); a minimal parsing sketch is given after this list.
122 > %Each row corresponds to an APV
123 > %pair, an AOH laser, an optical fibre and a FED input.\\
124 > %The second table of module.xml reassembles the information on a module basis.
125 > %Here all the active modules are listed, with a row for every module.
126 > %The Control System coordinates are repeated both in module.xml and fec.xml,
127 > %so that they can be used as a pivot between the 3 tables.\\
128 > \end{description}
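As an illustration of the role of module.xml, the following minimal sketch builds the map from FED input channels to module readout coordinates described above. The tag and attribute names (connection, fed, channel, ring, ccu, i2c) are illustrative assumptions and do not reproduce the actual TrackerOnline schema.
\begin{verbatim}
# Minimal sketch: build the FED-channel -> module map that module.xml provides.
# Tag and attribute names are hypothetical, not the real TrackerOnline schema.
import xml.etree.ElementTree as ET

def read_connection_map(path):
    """Return {(fed_id, fed_channel): (ring, ccu, i2c_address)}."""
    mapping = {}
    for conn in ET.parse(path).getroot().iter("connection"):
        key = (int(conn.get("fed")), int(conn.get("channel")))
        mapping[key] = (int(conn.get("ring")),
                        int(conn.get("ccu")),
                        int(conn.get("i2c")))
    return mapping
\end{verbatim}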
129 >
130 > The tasks performed by the integration software are the following:
131 > execution of the commissioning runs needed to optimally adjust the
132 > module parameters (preparation of fec.xml and module.xml); execution
133 > of the test runs with complete and automated logging; fast analysis
134 > for immediate feedback; archival of xml configuration files to log the
135 > test conditions; archival of the raw data.
136 >
137 >
138 > %\begin{figure}
139 > %\centering
140 > %\includegraphics[width=.9\textwidth]{Figs/integration_package.pdf}
141 > %\caption{Scheme of the integration software.
142 > %\textit{Arrows}: a relation.
143 > %\textit{Gears}: an application.
144 > %\textit{Mouse:} an interactive application.
145 > %\textit{Sheet}: a file.}
146 > %\label{fig:integration_package}
147 > %\end{figure}
148 >
149 > %Figure~\ref{fig:integration_package} shows a scheme of the relations between all the software
150 > %components that will be described, the local files and the remote database.
151 >
152 > This is achieved by using a specific set of components of TrackerOnline.
153 > \begin{description}
154 > \item[FecTool.]
155 > FecTool is a GUI-based front-end to two standalone
156 > applications deployed with TrackerOnline, FecProfiler
157 > and ProgramTest, aimed at easing the creation of the device
158 > description.
159 > FecProfiler is able to detect
160 > the devices connected to the CCUs and builds the fec.xml file needed by
161 > TrackerOnline. FecTool takes care of checking that the detected devices
162 > correspond to the expected ones, i.e., per module, 4 or 6 APVs, one
163 > PLL, one AOH, and so on. ProgramTest allows the ring functionality,
164 > i.e. the redundancy, to be tested in depth.
165 >
166 > The geographical identity of the strings under test must be
167 > entered to allow FecTool to verify the matching
168 > between the DCU Hardware Id read from each
169 > module and the one declared in the module database. This
170 > consistency check is crucial to spot possible errors in recording the
171 > location where a module is mounted during the assembly. If the check
172 > is passed, the fec.xml description file needed to proceed with the
173 > integration tests can be created.
174 > %Hence tests proceed with the data
175 > %readout from modules, which rely on .
176 > \item[The Integration Package.]
177 > TrackerOnline, as any xDAQ implementation, requires an expert user, since
178 > many run-specific parameters must be set manually and there is no input
179 > validation.
180 > %\begin{figure}
181 > %\centering
182 > %\includegraphics[width=0.35\textwidth]{Figs/FedGuiMain.png}
183 > %\caption{Main GUI window.}
184 > %\label{fig:fedgui}
185 > %\end{figure}
186 > The integration setup is made more user-friendly by a special {\em
187 >  Integration Package}, a GUI-based front end that interacts with
188 >  TrackerOnline to automatically set all relevant parameters and to harvest all data at the end
189 > of a run. The package is organized as a finite state machine by which
190 >  the user can cycle through the various states, i.e. the following integration test steps:
191   \begin{enumerate}
192 < \item TrackerOnline initialisation (only once)
193 < \item Ask for the desired run
194 < \item Launch run in TrackerOnline
195 < \item Wait for run completion (polling the number of acquired events)
196 < \item Stop data taking
197 < \item Launch fast data analysis package
198 < \item Ask user for data validation
199 < \item If acknowledged, pack all the data and log the run with proper data quality flag. If the run
200 < was a commissioning run, update fec.xml and module.xml
192 > \item TrackerOnline initialisation (only once);
193 > \item choice of the desired run;
194 > \item execution of the run via TrackerOnline;
195 > \item on run completion (i.e. after a given number of events), stop of the
196 >  data taking and execution of the fast data analysis;
197 > \item presentation of the run outcome on summary GUIs;
198 > \item on positive validation from the user, data are stored together
199 >  with run logs and data quality flags;
200 > \item in case of commissioning runs, on positive validation from the
201 >  user, fec.xml and module.xml are updated accordingly to be used from
202 >  now on.
203   \end{enumerate}
204  
205 < For every required run, the integration package shows the proper TrackerOnline output
206 < through some GUIs, so that the user can acknowledge the outcome of the run and give the approval
207 < for archiving the run data or, in case of a commissioning run, for updating the parameter set.\\
208 < The TrackerOnline software computes the new parameter set for each commissioning run
209 < (except for the ``find connection'' run, as we'll see below) and writes
210 < it locally as a fec.xml file, which is retrieved by the integration package at need.\\
211 < A main integration database, for centralized archiving purposes,
212 < was also installed. Software implementing data analysis runs automatically on the
213 < archived files producing validation outputs, accessible througth a web interface,
214 < for every tested module.
205 > %For every required run, the integration package shows the TrackerOnline output
206 > %through some GUIs, so that the user can acknowledge the outcome of the run and give the approval
207 > %for archiving the data or, in case of a commissioning run, for updating the parameter set.\\
208 > %The TrackerOnline software computes the new parameter set for each commissioning run
209 > %(except for the ``find connection'' run, as we'll see below) and writes
210 > %it locally as a fec.xml file, which is retrieved by the integration package at need.\\
211 > A main integration database has been set up for centralized archiving
212 > purposes. For later reference, if needed, the data analysis
213 > can also be run on the archived files, with validation outputs made
214 > available through a web interface.
215 > \end{description}
216  
217   \subsection{Test Description}
218   \label{ref:test-description}
192 \paragraph*{Find connections:}
193 This commissioning run is used to know which module is connected to which FED input channel.
194 This procedure consists of switching on all the lasers of all AOH one by one, while checking the
195 signal on all the FED inputs. If the difference of the signal seen on a FED channel is above
196 a given threshold, the connection between that laser and input channel is tagged and registered
197 in the module.xml as a connection table.
219  
220 < \begin{figure}
221 < \centering
201 < \includegraphics[width=0.27\textwidth]{Figs/FedGuiChannels.png}
202 < \caption{Connections displayed during the run}
203 < \label{fig:fedguichan}
204 < \end{figure}
205 <
206 < At this point of the commissioning procedures both the descriptions for FECs
207 < (i.e. tracker hardware connected to the FECs) and for FEDs are present.
208 <
209 < \paragraph*{Time alignment:}
210 < This step is used to compensate the different delays in the control and readout chain,
211 < i.e. the connections between FECs and FEEs.
212 < This also makes the APVs' sampling time synchronous provided that AOH
213 < fibres and ribbons are the same length. The latter is not important during integration
214 < quality checks, as no external signal is ever measured, but it becomes so when
215 < one tries to detect ionising particles.\\
216 < This run type is relevant because, if FEDs are to sample properly
217 < the APV signal, the clock must go to the modules synchronously, with a skew of the order of a few ns.
218 < Also the clock coming to the three FEDs must be synchronous but this is guaranteed
219 < by using cables of the same length between the clock-generating
220 < board (TSC) and the FEDs themselves.\\
221 < The time allignment run
222 < makes use of the periodic tick mark signal sent by the APVs when it is clocked:
223 < after these devices receive a reset, they produce a tick mark signal every 70 clock cycles.
224 < During this run the FEDs continously sample the signals at the full clock frequency.
225 < This means that for every DAQ cycle the output of all APVs is measured as with a 40~MSample scope.
226 < After every cycle is completed all the PLLs' delays are icreased by (25/24)~ns
227 < (the minimum delay step), and the signal readout is performed again. After 24 such cycles the full
228 < tick mark signal is measured as with a 960~MSample scope.
220 > Each run type that can be chosen by the user corresponds to a
221 > commissioning run or to a test run, as described below.
222  
223 + \begin{description}
224 + \item[Find connections.] This commissioning run is used to associate each
225 +  FED input channel to a module. The procedure, repeated in sequence
226 +  for all AOH laser drivers, consists of switching on only a given
227 +  AOH laser driver while checking the signal on all the FED inputs. If
228 +  the change of the signal seen on a FED channel is above
229 + a given threshold, the connection between that laser and input channel is tagged and stored
230 + in module.xml (a minimal sketch of this logic is given below).
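A minimal sketch of this association logic is given below, assuming that the per-channel mean signals have already been extracted from the FED data; all names and the threshold value are illustrative only.
\begin{verbatim}
# Minimal sketch of the "find connections" logic: with a single AOH laser
# switched on, the FED channel whose mean signal rises above the all-off
# reference by more than a threshold is associated to that laser.
import numpy as np

def find_connections(signal_off, signal_on_per_laser, threshold=50.0):
    """signal_off: mean signal per FED channel with all lasers off.
    signal_on_per_laser: {laser_id: mean signal per FED channel with only
    that laser on}. Returns {laser_id: fed_channel}."""
    connections = {}
    for laser, signal_on in signal_on_per_laser.items():
        diff = np.asarray(signal_on, dtype=float) - np.asarray(signal_off, dtype=float)
        channel = int(np.argmax(diff))
        if diff[channel] > threshold:
            connections[laser] = channel
    return connections
\end{verbatim}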
231 + \item[Time alignment.]
232 + This commissioning run measures the appropriate delays to be later set
233 + in the PLL delay registers.
234 + In this way the different delays in the control and
235 + readout chain are compensated, the clock arrives at the modules
236 + synchronously, with a skew of the order of a few ns, and the APV
237 + signals are properly sampled by the FEDs. This also requires the clock
238 + at all the FEDs to be synchronous, but this is guaranteed
239 + by using cables of equal length between the TSC and the FEDs.\\
240 + The time alignment run makes use of the periodic tick mark signal issued by
241 + the idle APVs every 70 clock cycles. The APV signals are sampled by the FEDs in
242 + scope mode, i.e. without waiting for a header but continuously,
243 + sampling the inputs at the full clock frequency as with a 40~MSample
244 + scope. The measurement is repeated after all the PLL delays are
245 + increased by the minimum delay step, 25/24~ns. After 24 such cycles the
246 + idle APV output, and thus the tick mark signal, is measured as with
247 + an effective 960~MSample scope.
248   \begin{figure}
249   \centering
250   \includegraphics[width=.6\textwidth]{Figs/tickmark.pdf}
# Line 235 | Line 253 | are marked. In the picture are reported
253   during the time alignment an interval of $1\,\mu\mathrm{s}$ is scanned.}
254   \label{fig:tick}
255   \end{figure}
256 < The DAQ takes the time delay between tick marks as a measurement of the difference in delays in the
257 < FEC-FEE connections and computes what delay must be set on each PLL in order to compensate this.
256 > The time differences between the various APV tick marks are a
257 > measurement of the relative delays introduced by the connections and
258 > can be used to compute the optimal delay to be set on each PLL for compensation.
259   The tick mark rising edge time $t_R$ is measured by taking the time corresponding to the highest
260   signal derivative (see Fig.~\ref{fig:tick}).
261   The best sampling point is taken to be $t_R+15\,\mathrm{ns}$, to avoid
262 < the possible overshoot. This is important also later, when measuring the analogue data frame, as
263 < it allows measuring the signal coming from each strip after transient effects due to the signal
264 < switching between strips are over.\\
265 < At the end of this procedure, the user is shown all proposed adjustments to PLL delay values.
266 < Then he can accept the outcome of the first time alignment run and, possibly,
267 < repeat it. If the time allignment has been done correctly maximum variation of two or less
268 < nanoseconds will be found. The delays are written in the correspoding xml file.
269 <
270 < \paragraph*{Laser Scan:}
271 < This run makes a scan of all AOH bias values for the four possible gain settings
272 < to determine the optimal gain and the corresponding optimal bias (see Fig.~\ref{fig:laserscan}).
273 < In this run the APV generated tick marks are sampled (a correctly done
274 < time allignment has to be done before) for all gain and bias values.
275 <
262 > the possible overshoot.
263 > %[???This is important also later, when measuring the analogue data frame, as
264 > %it allows measuring the signal coming from each strip after transient effects due to the signal
265 > %switching between strips are over.???]\\
266 > At the end of the procedure, all computed adjustments to the PLL delay
267 > values are proposed to the user. If accepted, the delays are written in
268 > fec.xml. If the setup is correctly aligned in time, a further time
269 > alignment procedure should not propose delay corrections greater than
270 > $\sim 2$~ns (a minimal numerical sketch of the scan and of the edge finding is given below).
271 > It is worth noticing that the time alignment procedure also makes all APVs
272 > sample synchronously, since in the integration setup AOH fibres
273 > and ribbons are all equal in length. This is not important during
274 > integration quality checks, as no external signal is ever measured,
275 > but it becomes so when trying to detect ionising particles.
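The following minimal numerical sketch illustrates the scan and the edge finding described above; the array layout, the names and the sign convention for the PLL corrections are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the time alignment analysis: the 24 scope-mode scans,
# taken with the PLL delay stepped by 25/24 ns, are interleaved into one
# finely sampled tick mark (effective 960 MSample scope); the rising edge is
# taken at the maximum derivative and the best sampling point is edge + 15 ns.
import numpy as np

CLOCK_NS = 25.0
N_STEPS = 24                       # fine steps per clock period (25/24 ns each)

def tick_edge_time(scans):
    """scans: shape (24, n) samples of one APV output, one row per fine step.
    Returns (t_rising, t_best_sampling) in ns."""
    scans = np.asarray(scans, dtype=float)
    fine = scans.T.reshape(-1)                      # interleave the 24 scans
    t = np.arange(fine.size) * CLOCK_NS / N_STEPS   # time axis in ns
    edge = int(np.argmax(np.diff(fine)))            # highest derivative
    return t[edge], t[edge] + 15.0

def pll_corrections(edge_times):
    """Delays to add to each PLL so that all tick mark edges line up with the
    latest one (illustrative sign convention)."""
    edge_times = np.asarray(edge_times, dtype=float)
    return edge_times.max() - edge_times
\end{verbatim}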
276 > \item[Laser Scan.] In this commissioning run a scan over all bias and
277 >  gain value pairs is performed to determine the optimal working point for
278 >  each AOH.
279 > The procedure requires a successful time alignment, since tick marks are
280 > again sampled, in this case while changing the gain and bias values.
281 > %
282   \begin{figure}[t!]
283   \begin{center}
259 \subfigure[A pictorial representation of a tick mark as produced by the APVs (dotted)
260 and as transmitted by the lasers (solid) when the laser driver's bias is too
261 low (left), correct (centre) or too high (right), with the subsequent signal saturation.]
262 {
263        \label{fig:laserscan}
264        \includegraphics[width=.9\textwidth]{Figs/laserscan.pdf}
265 }
266 \\
284   \subfigure[The sampled tick mark top and baseline as a function of the laser driver's bias.]
285   {
286          \label{fig:gainscan_basetop}
# Line 275 | Line 292 | low (left), correct (centre) or too high
292          \label{fig:gainscan_range}
293          \includegraphics[width=.45\textwidth]{Figs/gainscan_range.pdf}
294   }
295 + \subfigure[A pictorial representation of a tick mark as produced by the APVs (dotted)
296 + and as transmitted by the lasers (solid) when the laser driver's bias is too
297 + low (left), correct (centre) or too high (right), with the subsequent signal saturation.]
298 + {
299 +        \label{fig:laserscan}
300 +        \includegraphics[width=.9\textwidth]{Figs/laserscan.pdf}
301 + }
302   \caption{Plots computed during a gain scan run.}
303   \end{center}
304   \end{figure}
305 < For each trigger sent to FEDs, the tick mark's top is sampled twice and all other samplings fall
306 < on the baseline. The highest samples are used to estimate the higher limit of the signal
307 < for a given gain/bias pair and the lower values provide an estimate of the lower limit.
308 < For each gain value three plots are computed: in the first two the high and low edges of
309 < the tick mark are represented as a function of bias (Fig.~\ref{fig:gainscan_basetop})
310 < and the third, being the difference between the former, represent the dynamic range
311 < as a function of bias (Fig.~\ref{fig:gainscan_range}).\\
312 < For each laser these plots are created 4 times: one for each possible gain value. The best laser
313 < gain is computed as that providing an overall gain as close as possible to a given optimal value,
314 < the overall gain being estimated as the slope of the first two curves in their central section.
315 < The best bias value is taken as the one maximising the tick mark height keeping
316 < the maximum and minimum not saturated.\\
317 < After the run is completed, values proposed by TrackerOnline are shown to the user, who
318 < intervenes if there is any abnormal proposed gain, which may indicate a problem either on the
319 < AOH or on the fibre.
320 <
321 < \paragraph*{VPSP Scan:}
322 < This is the first run with FEDs with the header finding function active. In this run the
323 < trigger is dispatched also to modules, which send their data frames.
324 < During this run a scan of VPSP parameter is performed on the modules, and for each value
325 < their frames are acquired several times. As there is no physics
326 < signal on the detectors, the sampled signal is a measurement of the pedestal of the analog channels.
327 < The average strip pedestal is computed for every APV as a function of the VPSP parameter and
328 < at the end of the run the best VPSP pedestal is computed as that which moves the baseline
329 < to 1/3 of the dynamic range. This choice avoids setting a baseline too near the
330 < lower saturation value leaving anyway enough range for possible signals from particles
331 < ($\sim 6$ MIP equivalent).
332 < At the end of the run computed values are proposed to the user for approval and written
333 < to the xml file.
334 <
305 > %
306 > For each trigger sent to the FEDs, the tick mark's top is sampled twice and all other samplings fall
307 > on the baseline. The tick mark's top samples are used to estimate the higher limit of the signal
308 > for a given gain/bias setting pair and the baseline samples provide an estimate of the lower limit.
309 > For each gain setting the tick mark top samples and the baseline samples are measured as a
310 > function of the bias, as shown in Fig.~\ref{fig:gainscan_basetop}. For
311 > each bias setting their difference is the tick mark height, shown in Fig.~\ref{fig:gainscan_range},
312 > which represents the dynamic range as a function of the bias.\\
313 > The best bias setting for a given gain setting is taken as the one maximising the tick mark height while keeping
314 > the maximum and minimum not saturated, as pictorially represented in Fig.~\ref{fig:laserscan}.
315 > The same measurement is done for each possible gain setting.
316 > The best laser gain setting is the one providing an overall gain of
317 > the optical chain as close as possible to the design one, 0.8~\cite{ref:gain}.
318 > The overall gain is estimated from the slope of the curves of
319 > Fig.~\ref{fig:gainscan_basetop} in their central section.
320 > After the run is completed a set of values is proposed to the
321 > user. Abnormal gain values may indicate a problem either on the AOH or
322 > on the fibre and are investigated (a minimal sketch of the selection logic is given below).
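A minimal sketch of this selection logic is given below; the ADC range, the calibration from the fitted slope to the optical-chain gain, and all names are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the laser scan working-point selection: for each AOH gain
# setting the bias maximising the unsaturated tick mark height is chosen, and
# the gain setting whose optical-chain gain is closest to 0.8 is retained.
import numpy as np

ADC_MAX = 1023            # assumed FED dynamic range in ADC counts
DESIGN_GAIN = 0.8         # target overall gain of the optical chain

def best_working_point(scan, slope_to_gain=1.0):
    """scan: {gain_setting: (bias, top, base)} arrays for one AOH.
    slope_to_gain: assumed calibration from the fitted slope to the
    optical-chain gain. Returns (best_gain_setting, best_bias)."""
    candidates = {}
    for gain_setting, (bias, top, base) in scan.items():
        bias, top, base = (np.asarray(a, dtype=float) for a in (bias, top, base))
        ok = (top < ADC_MAX) & (base > 0)                # unsaturated points only
        if not ok.any():
            continue
        i_best = int(np.argmax(np.where(ok, top - base, -np.inf)))
        mid = slice(len(bias) // 4, 3 * len(bias) // 4)  # central section of the curve
        chain_gain = slope_to_gain * np.polyfit(bias[mid], top[mid], 1)[0]
        candidates[gain_setting] = (float(bias[i_best]), chain_gain)
    best = min(candidates, key=lambda g: abs(candidates[g][1] - DESIGN_GAIN))
    return best, candidates[best][0]
\end{verbatim}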
323 > \item[VPSP Scan.]
324 > This commissioning run is devoted to optimising the pedestal of
325 > the APV, i.e. the average output level in the absence of any signal, with
326 > respect to the dynamic range of the FEDs. This level is managed by a
327 > specific APV register, known as {\em VPSP}, which controls a voltage
328 > setting within the deconvolution circuitry. The procedure consists of
329 > a scan of VPSP values while acquiring data frames from modules in the
330 > standard way, i.e. trigger sent to the modules and FEDs in
331 > ``header finding'' mode.
332 > The optimal VPSP setting corresponds to a pedestal baseline placed
333 > around 1/3 of the available dynamic range, a good compromise to keep
334 > the baseline not too close to the lower saturation value while leaving a good
335 > range for particle signals, corresponding to $\sim 6$ MIP.
336 > At the end of the run a set of values is proposed to the user for
337 > approval and, if accepted, written in the relevant xml file.
338   \begin{figure}
339   \begin{center}
340   \subfigure[Strip pedestals of a module in ADC counts vs.\ strip number.]
# Line 326 | Line 353 | dynamic range, and their noise is theref
353   \label{fig:saturation}
354   \end{center}
355   \end{figure}
356 < This run is not normally performed during the integration, as a default value was set on
357 < all the APVs after a measurement on a sample. Anyway it is sometimes needed because the optimal
358 < VPSP value may change from APV to APV and is also strongly dependent on temperature.
359 < An example of this is shown in Fig.~\ref{fig:saturation}, where
360 < only a few readout channels suffer from a signal saturation, while most of the strips of a module
361 < are placed correctly inside the dynamic range.
362 < This usually happens because the pedestal values of the channels of an APV are different one
363 < from another and may have a dependency on the strip index like that shown in
364 < Fig.~\ref{fig:saturationpedestal}.
365 < In these cases when the pedestal of a strip approaches to the edge of dynamic range     the
366 < APV-AOH output is no more linear and the channel's RMS is lower, (see Fig.~\ref{fig:saturationnoise}).
367 < This is one of the most frequent problems with the pedestals and it is solved
368 < by a VPSP scan.
369 <
370 < \paragraph*{Pedestal and Noise:}
371 < This is the main run for qualifying the detector performances during
372 < the integration. The 400V bias are applied to the module under test which are
373 < also checked for any possible overcurrent or breakdown.\\
374 < Triggers are sent to the modules and FEDs are placed in header recognition mode.
348 < All the analogue frames from the modules are acquired and for each channel both the average value
349 < and the RMS are computed.
350 < Two analyses are performed on these data: one is done online by the TrackerOnline software, and
351 < another one is performed offline through procedures taken from the ORCA package (ORCA was the
352 < official reconstruction package of CMS at the time, now substituted by CMSSW) and run
353 < on the files containing the raw data as acquired by FEDs in just the same way as it would
354 < do with data coming to a Filter Farm.\\
356 > The VPSP scan is not systematically performed during the integration,
357 > since the default VPSP setting is adequate in most
358 > cases. Nevertheless, the optimal VPSP value changes considerably within
359 > the APV population and is strongly temperature dependent, and it is
360 > rather common to have a situation in which the pedestals of a few readout
361 > channels approach the lower edge of the dynamic range
362 > (Fig.~\ref{fig:saturationpedestal}), resulting in a lower RMS (see
363 > Fig.~\ref{fig:saturationnoise}). The VPSP scan allows this issue to be
364 > fixed (a minimal sketch of the optimisation is given below).
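A minimal sketch of this optimisation is given below; the FED dynamic range and the names are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the VPSP optimisation: pick, for one APV, the VPSP value
# whose average pedestal baseline is closest to 1/3 of the FED dynamic range.
import numpy as np

ADC_MAX = 1023                 # assumed FED dynamic range in ADC counts
TARGET = ADC_MAX / 3.0         # desired baseline position

def best_vpsp(vpsp_values, mean_baseline):
    """vpsp_values: scanned register settings; mean_baseline: average strip
    pedestal of the APV for each setting. Returns the optimal VPSP value."""
    vpsp_values = np.asarray(vpsp_values)
    mean_baseline = np.asarray(mean_baseline, dtype=float)
    return int(vpsp_values[np.argmin(np.abs(mean_baseline - TARGET))])
\end{verbatim}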
365 > \item[Pedestal and Noise Run.]
366 > This is the main run type for qualifying the performances during
367 > the integration. Typically a bias of 400~V is applied to the modules
368 > under test to check for any possible overcurrent or breakdown.\\
369 > Triggers are sent to the modules and the FEDs work in ``header finding'' mode.
370 > All the analogue frames from the modules are collected and two analyses
371 > are performed on these data: online, by the TrackerOnline
372 > software; offline, in a way very similar to the final experiment, by
373 > using algorithms of the ORCA package~\cite{bib:orca}, the CMS
374 > reconstruction package at that time, now replaced by CMSSW.
375   The average value of the signal read on each strip is an estimate of
376   its pedestal, while the RMS is a good estimate of its noise, provided that the noise itself
377   is Gaussian, which is true to a first approximation. This value is often referred to as
378 < \textit{raw noise}, as opposed to the \textit{common-mode subtracted noise} (or CMN). The latter
379 < can be computed after pedestals are measured, which happens after the first hundreds of frames
380 < are acquired.
381 < The common mode noise subtraction performed by ORCA and TrackerOnline is similar to that
382 < performed by the final FEDs.
383 < This subtraction
384 < eliminates the Common Mode Noise contribution to a specific event. After the Common
385 < Mode Noise subtraction the noise can be computed again as the RMS of the remaining signal on strips
386 < and this new noise measurement is called Common Mode Subtracted Noise (or just CMN).\\
387 < Because of the difference in gain between the various optical links to compare the noise
388 < on different APV pairs a renormalization is needed.
389 < To implement this a gain measuring procedure to correct
390 < noise measurements has been done. When FEDs acquire analogue frames, they store all acquired
391 < raw data, comprising the samplings on the digital header. As the digital header has
392 < the same amplitude on every APV, its measurement in terms of FED ADC counts
393 < was used as an estimate of the relative gain of optical links.
394 < This method allows a contextual measurement of noise and gain, and it is accurate,
395 < provided that there is no signal saturation both on low and high values.\\
396 < The normalisation factor was chosen so as to bring the normalised header height to 220 ADC counts,
377 < which was the value of header's height as it was read out in the module test setup
378 < of the module production line. This allowed the scaled measurements to be easily
379 < compared with those done during module production tests. Both normalised CMN noise and raw noise
380 < are shown to the user, while only uncalibrated CMN noise is plotted as a reference.
381 < Also the distribution of normalised CMN and raw strip noise is computed and shown to
382 < complete the information.
383 <
378 > \textit{raw noise}, as opposed to the \textit{common-mode subtracted
379 >  noise} (or CMN). The latter is the RMS computed after having
380 > subtracted the {\em common mode noise}, i.e. the correlated noise-like fluctuation
381 > common to a given group of channels (typically an entire APV).
382 > The common mode noise subtraction method implemented in ORCA and
383 > TrackerOnline is similar to that performed by the final FEDs.\\
384 > Noise comparison between different APV pairs requires a
385 > renormalization, because of the difference in gain between the various
386 > optical links. The normalization procedure relies on the digital
387 > headers, whose amplitude, being the same on each APV, is used to
388 > estimate the relative gain of the optical links so as to apply an
389 > appropriate correction. In this way noise and gain are
390 > simultaneously measured, provided that the signal is not saturated at either low or high values.
391 > The normalisation factor is chosen so as to bring the normalised
392 > header height to 220 ADC counts, as measured in quality controls
393 > during the module production. This allows the scaled measurements to be easily
394 > compared with those done during module production tests.\\
395 > At the end of the run, pedestal and noise profiles and distributions
396 > are shown to the user (a minimal sketch of these computations is given below).
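A minimal sketch of these computations is given below, assuming the frames of one APV are available as an (events $\times$ strips) array; the names are illustrative only.
\begin{verbatim}
# Minimal sketch of the pedestal and noise computations: per-strip pedestal
# (mean) and raw noise (RMS), common-mode subtracted noise after removing the
# per-event common shift, and normalisation of the noise by scaling the
# measured digital header height to 220 ADC counts.
import numpy as np

def pedestal_and_noise(frames, header_height):
    """frames: ADC samples with shape (n_events, n_strips) for one APV (pair).
    header_height: measured digital header amplitude in ADC counts.
    Returns (pedestal, raw_noise, cmn_noise, norm_raw_noise, norm_cmn_noise)."""
    frames = np.asarray(frames, dtype=float)
    pedestal = frames.mean(axis=0)                        # per-strip pedestal
    raw_noise = frames.std(axis=0)                        # per-strip raw noise
    residual = frames - pedestal
    common_mode = residual.mean(axis=1, keepdims=True)    # per-event common shift
    cmn_noise = (residual - common_mode).std(axis=0)      # CMN noise
    scale = 220.0 / header_height                         # gain normalisation
    return pedestal, raw_noise, cmn_noise, scale * raw_noise, scale * cmn_noise
\end{verbatim}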
397   \begin{figure}[t!]
398   \centering
399   \includegraphics[width=0.6\textwidth]{Figs/noiseprofile.pdf}
# Line 389 | Line 402 | and y axis represent the noise in ADC co
402   normalised noise).}
403   \label{fig:noiseprofile}
404   \end{figure}
405 < Figure~\ref{fig:noiseprofile} shows an example of the output
406 < at the end of a noise measurement run: the normalised raw noise and CMN and the uncalibrated CMN
405 > Figure~\ref{fig:noiseprofile} shows an example of the noise output:
406 > the normalised raw noise and CMN and the uncalibrated CMN
407   for each strip are plotted against the strip index. The first 256 strips belong to the
408   first APV pair and are multiplexed to a single optical line and the strips from 257 to 512
409   belong to the second APV pair. It can be noted here that the
410 < raw noise without normalisation suffers from a different gain of optical links
411 < and that after the normalisation procedures the noise level is the same
412 < for the two APV pairs.\\
400 < At the end of this run, once the outcomes are showed to the user, he can decide whether to
401 < store these data or to cancel the procedure. In the first case data are packed along with
410 > raw noise without normalisation suffers from the different gains of the
411 > optical links, which are corrected by the normalisation procedure.\\
412 > If validated by the user, data are packed along with
413   possible comments and sent to the central archive system, where they are processed
414 < again and made available on a web page. The system also automatically recognises where
415 < the modules where mounted by checking their DCU Hardware Id (which is written in the fec.xml file)
416 < in the construction database, this information is stored in the test table of
417 < the integration database and allows to build a geographical table of mounted modules with
418 < a link to a page containing all the tests performed for each module.\\
414 > again and made available on a web page.
415 > %The system also automatically recognises where
416 > %the modules where mounted by checking their DCU Hardware Id (which is written in the fec.xml file)
417 > %in the construction database, this information is stored in the test table of
418 > %the integration database and allows to build a geographical table of mounted modules with
419 > %a link to a page containing all the tests performed for each module.\\
420 > \end{description}
421  
422   %%%%%%% addition C.G.
423 < \subsubsection{Single Module}
424 < After a module of a string was mounted, its basic functionalities were tested.
425 < A fast test on the I$^2$C communication permitted to spot possible electrical problems
426 < in the module front-end electronics in the AOH or in the mother cable. Since  the safe removal of a MC
427 < from the shell requires the dismounting of al the modules of a string, it is very important
428 < to perform this test of as soon as the first module of a string is integrated. After the
429 < I$^2$C test FecTool checks the identity of the components against the construction database.
430 < The results of the tests can then be monitored through a web page.
431 <
432 < \subsubsection{String}
433 < When the third and last module of a string is mounted the commissioning runs described in section~\ref{ref:test-description} are performed after the I$^2$C communication tests. The first run, ``Find connections'' permitted to check the full functionality of the AOH. Since the AOHs can be tested only after the modules are mounted this is the first test which can spot possibly broken fibres. It was necessary to perform this test after the integration of each string, because the subtitution of an AOH implies the dismounting of all the modules of the string mounted between the AOH and the front flange.\\
434 < After this test a ``Time Alignement'' run, a  ``Laser scan'' run and finally a ``Pedestal and Noise'' run with HV on were performed. The ``Laser scan'' run was done limiting the laser gain to the lower value which was found to be optimal for all the AOHs in the integration setup.
435 <
436 < \subsubsection{Control Ring and Redundancy}
437 < The control electronics can be fully tested only after complete assembly of a shell. This last test forsees a check of the correct operation of the ring and of the redundancy circuitry. A failure on a CCU or a control cable can be immediatly spotted as it causes the interruption of the ring; the communication with the other components is then checked using ProgramTest.\\
438 < Finally the test of redundancy circuitry is performed by-passing each CCU one by one and verifying the correct response of the ring. The test is successful only if both the primary and secondary circuits of the DOHM are working correctly and if the CCUs are connected in the right order to the DOHM ports.
423 > The basic run types and tests described above are appropriately
424 > combined according to the devices and/or the group of devices under test.
425 > \begin{description}
426 > \item[Single Module.] After a module is mounted, its basic
427 >  functionalities can be tested. In particular, a fast test on the
428 >  I$^2$C communication permits possible
429 > electrical problems to be spotted in the module front-end electronics,
430 > in the AOH or in the mother cable. For mother cable and AOH this is of
431 >  particular importance: an AOH can in practice be tested only
432 >  after the corresponding module is mounted, and this is the first
433 >  test which can spot possibly broken fibres; similarly for the MC, it
434 >  is very important to perform the test as soon as possible. In
435 >  fact, a safe removal and replacement of either an AOH or the MC is a
436 >  difficult intervention, possibly requiring the dismounting of many
437 >  modules already put in place.\\
438 >  After the
439 > I$^2$C test FecTool checks the identity of the components with respect
440 > to the construction database. The results of the tests can then be
441 > monitored through a web page.
442 > \item[String.]
443 > When the third and last module or double-sided assembly is mounted on
444 > a string, all the commissioning runs described above are performed
445 > just after the I$^2$C communication tests. The ``Find
446 > connections'' run allows the full functionality of the AOHs to be checked.
447 > Afterwards a ``Time Alignment'' run, a ``Laser scan'' run and
448 > finally a ``Pedestal and Noise'' run with HV bias are performed. The
449 > ``Laser scan'' run is done limiting the laser gain to the lowest value,
450 > which is found to be sufficient for the needs of the integration setup.
451 > \item[Control Ring and Redundancy.]
452 > The control electronics can be fully tested only after complete
453 > assembly of a shell. This last test foresees a check of the correct
454 > operation of the ring and of the redundancy circuitry. A failure on a
455 > CCU or a control cable can be immediately spotted as it causes the
456 > interruption of the ring; the communication with the other components
457 > is then checked using ProgramTest. Finally, the test of the redundancy
458 > circuitry is performed by bypassing each CCU one by one and verifying
459 > the correct response of the ring. The test is successful only if both
460 > the primary and secondary circuits of the DOHM are working correctly
461 > and if the CCUs are connected in the right order to the DOHM ports.
462   %%%%%%%% end of addition C.G.
463 + \end{description}
464  
465   \section{Safety of operations}
466   The integration procedure posed many possible problems in the safety of operations,
