Comparing UserCode/claudioc/OSNote2010/limit.tex (file contents):
Revision 1.19 by benhoob, Fri Dec 3 15:17:37 2010 UTC vs.
Revision 1.35 by claudioc, Fri Jan 14 00:21:24 2011 UTC


%{\bf \color{red} The numbers in this Section need to be double checked.}

\subsection{Limit on number of events}
\label{sec:limnumevents}
As discussed in Section~\ref{sec:results}, we see one event
in the signal region, defined as SumJetPt$>$300~GeV and
\met/$\sqrt{\rm SumJetPt}>8.5$~GeV$^{\frac{1}{2}}$.

The background prediction from the SM Monte Carlo is 1.3 events.
%, where the uncertainty comes from
%the jet energy scale (30\%, see Section~\ref{sec:systematics}),
%the luminosity (10\%), and the lepton/trigger
%the modeling of $t\bar{t}$ in MadGraph have not been evaluated.
%The uncertainty on $pp \to \sigma(t\bar{t})$ is also not included.}.
The data-driven background predictions from the ABCD method
and the $P_T(\ell\ell)$ method are $1.3 \pm 0.8({\rm stat}) \pm 0.3({\rm syst})$
and $2.1 \pm 2.1({\rm stat}) \pm 0.6({\rm syst})$ events, respectively.

These three predictions are in good agreement with each other
and with the observation of one event in the signal region.
We calculate a Bayesian 95\% CL upper limit\cite{ref:bayes.f}
on the number of non-SM events in the signal region to be 4.1.
We have also calculated this limit using
% a profile likelihood method
% as implemented in
the cl95cms software\cite{ref:cl95cms},
and we also find 4.1.  (This is not surprising, since cl95cms
also gives Bayesian upper limits with a flat prior.)
These limits were calculated using a background prediction of $N_{BG} = 1.4 \pm 0.8$
events, the error-weighted average of the ABCD and $P_T(\ell\ell)$ background
predictions.  The upper limit is not very sensitive to the choice of
$N_{BG}$ and its uncertainty.
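The error-weighted average quoted above can be reproduced in a few lines.
This is a minimal sketch, not the analysis code: the function name is ours,
and we assume the stat and syst uncertainties of each method are combined in
quadrature before averaging.

```python
import math

def weighted_average(estimates):
    """Inverse-variance (error-weighted) average of independent
    estimates, each given as a (value, uncertainty) pair."""
    weights = [1.0 / err**2 for _, err in estimates]
    avg = sum(w * val for w, (val, _) in zip(weights, estimates)) / sum(weights)
    return avg, 1.0 / math.sqrt(sum(weights))

# Stat (+) syst in quadrature for each background method:
abcd = (1.3, math.hypot(0.8, 0.3))   # ABCD method
ptll = (2.1, math.hypot(2.1, 0.6))   # PT(ll) method
n_bg, err_bg = weighted_average([abcd, ptll])  # ~1.4 +- 0.8 events
```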

To get a feeling for the sensitivity of this search to some
popular SUSY models, we remind the reader of the number of expected
LM0 and LM1 events from Table~\ref{tab:sigcont}: $8.6 \pm 1.6$
events and $3.6 \pm 0.5$ events, respectively, where the uncertainties
are from energy scale (Section~\ref{sec:systematics}), luminosity,
and lepton efficiency.


\subsection{Outreach}
\label{sec:outreach}
Conveying additional useful information about the results of
a generic ``signature-based'' search such as the one described
in this note is a difficult issue.
Here we attempt to present our result in the most general way.

Models of new physics in the dilepton final state
can be confronted in an approximate way by simple
generator-level studies that
compare the expected number of events in 34.0~pb$^{-1}$
with our upper limit of 4.1 events.  The key ingredients
of such studies are the kinematical cuts described
in this note, the lepton efficiencies, and the detector
responses for SumJetPt and \met/$\sqrt{\rm SumJetPt}$.
The muon identification efficiency is $\approx 95\%$;
the electron identification efficiency varies from $\approx 63\%$ at
$P_T = 10$~GeV to 91\% for $P_T > 30$~GeV.  The isolation
efficiency in top events varies from $\approx 83\%$ (muons)
and $\approx 89\%$ (electrons) at $P_T=10$~GeV to
$\approx 95\%$ for $P_T>60$~GeV.
%{\bf \color{red} The following numbers were derived from Fall 10 samples. }
The average detector
responses for SumJetPt and $\met/\sqrt{\rm SumJetPt}$ are
$1.02 \pm 0.05$ and $0.94 \pm 0.05$, respectively, where
the uncertainties are from the jet energy scale.
The experimental resolutions on these quantities are 11\% and
16\%, respectively.
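A generator-level study along these lines can be sketched as follows.  This
is an illustrative recipe, not the analysis code: the toy event, the linear
efficiency interpolations between the quoted working points, and all helper
names are ours.

```python
import random

random.seed(1)  # deterministic toy

def lepton_eff(flavor, pt):
    """Rough interpolation of the ID x isolation efficiencies quoted
    in the text (our simplification of the quoted working points)."""
    frac50 = min(max((pt - 10.0) / 50.0, 0.0), 1.0)
    if flavor == "mu":
        id_eff = 0.95
        iso_eff = 0.83 + (0.95 - 0.83) * frac50
    else:  # electron
        id_eff = 0.63 + (0.91 - 0.63) * min(max((pt - 10.0) / 20.0, 0.0), 1.0)
        iso_eff = 0.89 + (0.95 - 0.89) * frac50
    return id_eff * iso_eff

def passes_signal_region(true_sumjetpt, true_met):
    """Smear the truth quantities with the quoted responses and
    resolutions, then apply the signal-region cuts."""
    sumjetpt = random.gauss(1.02 * true_sumjetpt, 0.11 * true_sumjetpt)
    ratio_true = true_met / true_sumjetpt**0.5
    ratio = random.gauss(0.94 * ratio_true, 0.16 * ratio_true)
    return sumjetpt > 300.0 and ratio > 8.5

# Toy truth-level event: average the smeared cut decision over many
# trials, then weight by the lepton efficiencies.
n_pass = sum(passes_signal_region(450.0, 250.0) for _ in range(10000))
acceptance = (n_pass / 10000.0) * lepton_eff("mu", 40.0) * lepton_eff("e", 25.0)
# expected yield = sigma x 34.0 pb^-1 x acceptance, compared with 4.1
```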

To justify the statements in the previous paragraph
about the detector responses, we plot
in Figure~\ref{fig:response} the detector responses and the
efficiency for the cuts on these quantities, for events in the
signal region.
% (SumJetPt $>$ 300 GeV and \met/$\sqrt{\rm SumJetPt} > 8.5$
% GeV$^{\frac{1}{2}}$).
%{\bf \color{red} The following numbers were derived from Fall10 samples }
We find that the average SumJetPt response
in the Monte Carlo is about 1.02, with an RMS of order 11\%, while
the response of \met/$\sqrt{\rm SumJetPt}$ is approximately 0.94 with an
RMS of 16\%.

%Using this information as well as the kinematical
%cuts described in Section~\ref{sec:eventSel} and the lepton efficiencies

\begin{figure}[tbh]
\begin{center}
\includegraphics[width=\linewidth]{selectionEffDec10.png}
\caption{\label{fig:response} Left plots: the efficiencies
as a function of the true quantities for the SumJetPt (top) and
tcMET/$\sqrt{\rm SumJetPt}$ (bottom) requirements for the signal
region.
Right plots: The average response and its RMS.
The response is defined as the ratio of the reconstructed quantity
to the true quantity in MC.  These plots are done using the LM0
Monte Carlo, but they are not expected to depend strongly on
the underlying physics.
%{\bf \color{red} These plots were made with Fall10 samples. }
}
\end{center}
\end{figure}



%%%  Nominal
% -----------------------------------------
% observed events                         1
% relative error on acceptance        0.000
% expected background                 1.400
% absolute error on background        0.770
% desired confidence level             0.95
% integration upper limit             30.00
% integration step size              0.0100
% -----------------------------------------
% Are the above correct? y
%    1  16.685     0.29375E-06
%
% limit: less than     4.112 signal events



%%%  Add 20% acceptance uncertainty based on LM0
% -----------------------------------------
% observed events                         1
% relative error on acceptance        0.200
% expected background                 1.400
% absolute error on background        0.770
% desired confidence level             0.95
% integration upper limit             30.00
% integration step size              0.0100
% -----------------------------------------
% Are the above correct? y
%    1  29.995     0.50457E-06
%
% limit: less than     4.689 signal events


\subsection{mSUGRA scan}
\label{sec:mSUGRA}
We also perform a scan of the mSUGRA parameter space, as recommended
by the SUSY group convenors\cite{ref:scan}.
The goal of the scan is to determine an exclusion region in the
$m_0$ vs. $m_{1/2}$ plane for
$\tan\beta=3$,
sign of $\mu = +$, and $A_{0}=0$~GeV.  This scan is based on events
generated with FastSim.

The first order of business is to verify that results using
FastSim and FullSim are compatible.  To this end we compare the
expected yield for the LM1 point in FullSim (3.56 $\pm$ 0.06) and
FastSim (3.29 $\pm$ 0.27), where the uncertainties are statistical only.
These two numbers are in agreement, which gives us confidence in
using FastSim for this study.

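The FullSim/FastSim agreement quoted above corresponds to about one standard
deviation.  A trivial numerical check (the function name is ours):

```python
import math

def pull(x1, e1, x2, e2):
    """Difference between two measurements in units of their
    combined statistical uncertainty."""
    return abs(x1 - x2) / math.hypot(e1, e2)

# LM1 expected yields: FullSim 3.56 +- 0.06, FastSim 3.29 +- 0.27
n_sigma = pull(3.56, 0.06, 3.29, 0.27)  # about 1 sigma: compatible
```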
The FastSim events are generated with different values of $m_0$
and $m_{1/2}$ in steps of 10~GeV.  For each point in the
$m_0$ vs. $m_{1/2}$ plane, we compute the expected number of
events at NLO.  We then calculate an upper limit $N_{UL}$
with cl95cms at each point, using the following inputs:
\begin{itemize}
\item Number of BG events = 1.40 $\pm$ 0.77
\item Luminosity uncertainty = 11\%
\item The acceptance uncertainty is calculated at each point
as the quadrature sum of
\begin{itemize}
\item The uncertainty due to JES for that point, as calculated
using the method described in Section~\ref{sec:systematics}
\item A 5\% uncertainty due to lepton efficiencies
\item An uncertainty on the NLO cross-section obtained by varying the
factorization and renormalization scale by a factor of two\cite{ref:sanjay}.
\item The PDF uncertainty on the product of cross-section and acceptance,
calculated using the method of Reference~\cite{ref:pdf}.
\end{itemize}
\item We use the ``log-normal'' model for the nuisance parameters
in cl95cms
\end{itemize}
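Schematically, the per-point inputs above combine as a quadrature sum, and
the exclusion decision compares the expected yield to $N_{UL}$.  The
component uncertainties for the example point below are hypothetical, and
the helper names are ours:

```python
import math

def acceptance_uncertainty(jes, lepton=0.05, scale=0.0, pdf=0.0):
    """Quadrature sum of the per-point acceptance uncertainties:
    JES, lepton efficiency, scale variation, and PDF."""
    return math.sqrt(jes**2 + lepton**2 + scale**2 + pdf**2)

def is_excluded(n_expected_nlo, n_ul):
    """An mSUGRA point is excluded if the expected NLO yield
    exceeds the upper limit N_UL returned by cl95cms."""
    return n_expected_nlo > n_ul

# Hypothetical grid point: 8% JES, 5% lepton, 6% scale, 4% PDF
acc_unc = acceptance_uncertainty(0.08, 0.05, 0.06, 0.04)  # ~0.12
# N_UL itself would come from cl95cms with BG = 1.40 +- 0.77 and the
# 11% luminosity uncertainty; here we only illustrate the decision:
excluded = is_excluded(6.0, 4.7)
```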

An mSUGRA point is excluded if the resulting $N_{UL}$ is smaller
than the expected number of events.  Because of the quantization
of the available MC points in the $m_0$ vs. $m_{1/2}$ plane, the
boundaries of the excluded region are also quantized.  We smooth
the boundaries using the method recommended by the SUSY
group\cite{ref:smooth}.  In addition, we show a limit
curve based on the LO cross-section, as well as the
``expected'' limit curve.  The expected limit curve was
calculated using the CLA function also available in cl95cms.
In general we find that the ``expected'' limit is very close
to the observed limit, which is not surprising since the
expected BG (1.4 $\pm$ 0.8 events) is fully consistent
with the observation (1 event).  Because of the quantization,
we find that the expected and observed limits are either
identical or differ by at most one or two grid points.
We have approximated the expected limit as the observed limit
minus 10~GeV\footnote{We show the expected limit only because
this is what is recommended by SUSY management.  We believe that
quoting the agreement between the expected BG and the
observation should be enough....}.
Finally, we note that the cross-section uncertainties due to
variations of the factorization
and renormalization scale are not included for the LO curve.
The results are shown in Figure~\ref{fig:msugra}.


\begin{figure}[tbh]
\begin{center}
\includegraphics[width=\linewidth]{exclusion_noPDF.pdf}
\caption{\label{fig:msugra}\protect Exclusion curves in the mSUGRA parameter space,
assuming $\tan\beta=3$, sign of $\mu = +$, and $A_{0}=0$~GeV.  THIS IS STILL MISSING
THE PDF UNCERTAINTIES.  WE ALSO WANT TO IMPROVE THE SMOOTHING PROCEDURE.}
\end{center}
\end{figure}


\subsubsection{Check of the nuisance parameter models}
We repeat the procedure outlined above, changing the
lognormal nuisance parameter model to a Gaussian or
gamma-function model.  The results are shown in
Figure~\ref{fig:nuisance}.  (In this case,
to avoid smoothing artifacts, we
show the raw results, without smoothing.)

\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.5\linewidth]{nuissance.png}
\caption{\label{fig:nuisance}\protect Exclusion curves in the
mSUGRA parameter space,
assuming $\tan\beta=3$, sign of $\mu = +$, and $A_{0}=0$~GeV,
using different models for the nuisance parameters.
PDF UNCERTAINTIES ARE NOT INCLUDED.}
\end{center}
\end{figure}

We find that different assumptions on the PDFs for the nuisance
parameters make very little difference to the set of excluded
points.
Following the recommendation of Reference~\cite{ref:cousins},
we use the lognormal nuisance parameter model as the default.
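This insensitivity can also be seen in a simple flat-prior Bayesian
calculation that marginalizes the background nuisance parameter with either
model.  This is our numerical sketch, not the cl95cms implementation; the
lognormal is taken to have the same median and relative width as the
Gaussian, which is an assumption on our part:

```python
import math

def upper_limit(n_obs, bg, bg_err, model, cl=0.95, s_max=30.0, ds=0.02):
    """Flat-prior Bayesian upper limit on the signal, marginalizing the
    background with a Gaussian or lognormal nuisance model.  All
    normalization constants cancel in the posterior ratio."""
    if model == "gauss":
        def prior(b):
            return math.exp(-0.5 * ((b - bg) / bg_err) ** 2) if b > 0 else 0.0
    else:  # lognormal with the same median and relative width
        sigma = math.log(1.0 + bg_err / bg)
        def prior(b):
            if b <= 0:
                return 0.0
            return math.exp(-0.5 * (math.log(b / bg) / sigma) ** 2) / b

    def likelihood(s):
        # marginalize over b with a simple Riemann sum
        total, db = 0.0, 0.02
        b = 0.5 * db
        while b < bg + 6 * bg_err:
            mu = s + b
            pois = math.exp(-mu) * mu**n_obs / math.factorial(n_obs)
            total += pois * prior(b) * db
            b += db
        return total

    grid = [likelihood((i + 0.5) * ds) for i in range(int(s_max / ds))]
    target, acc, i = cl * sum(grid), 0.0, 0
    while acc < target:
        acc += grid[i]
        i += 1
    return i * ds

ul_gauss = upper_limit(1, 1.4, 0.77, "gauss")
ul_lognormal = upper_limit(1, 1.4, 0.77, "lognormal")
# the two limits come out close to each other and to the quoted 4.1
```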


% \clearpage


\subsubsection{Effect of signal contamination}
\label{sec:contlimit}

Signal contamination could affect the limit by inflating the
background expectation.  In our case we see no evidence of signal
contamination, within statistics.
The yields in the control regions
$A$, $B$, and $C$ (Table~\ref{tab:datayield}) are just
as expected in the SM, and the check
of the $P_T(\ell \ell)$ method in the control region is
also consistent with expectations (Table~\ref{tab:victory}).
Since we have two data-driven methods, with different
signal contamination issues, giving consistent
results that are in agreement with the SM, we
argue for not making any correction to our procedure
because of signal contamination.  In some sense this would
be equivalent to using the SM background prediction, and using
the data-driven methods as confirmations of that prediction.

Nevertheless, here we explore the possible effect of
signal contamination.  The procedure suggested to us
for the ABCD method is to modify the
ABCD background prediction from $A_D \cdot C_D/B_D$ to
$(A_D-A_S) \cdot (C_D-C_S) / (B_D - B_S)$, where the
subscripts $D$ and $S$ refer to the number of observed data
events and expected SUSY events, respectively, in a given region.
We then recalculate $N_{UL}$ at each point using this modified
ABCD background estimation.  For simplicity we ignore
information from the $P_T(\ell \ell)$
background estimation.  This is conservative, since
the $P_T(\ell\ell)$ background estimation happens to
be numerically larger than the one from ABCD.

Note, however, that in some cases this procedure is
nonsensical.  For example, take LM0 as a SUSY
point.  In region $C$ we have a SM prediction of 5.1
events and $C_D = 4$ in agreement with the Standard Model,
see Table~\ref{tab:datayield}.  From the LM0 Monte Carlo,
we find $C_S = 8.6$ events.  Thus, including information
on $C_D$ and $C_S$ should {\bf strengthen} the limit, since there
is clearly a deficit of events in the $C$ region in the
LM0 hypothesis.  Instead, we now get a negative ABCD
BG prediction (which is nonsense, so we set it to zero),
and therefore a weaker limit.
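The contamination-corrected ABCD prediction, including the clamp at zero
discussed above, can be sketched as follows.  Only $C_D = 4$ and
$C_S = 8.6$ are taken from the text; the other region yields and the
function name are illustrative placeholders:

```python
def abcd_prediction(a_d, b_d, c_d, a_s=0.0, b_s=0.0, c_s=0.0):
    """ABCD background prediction A*C/B, optionally corrected for
    signal contamination by subtracting the expected SUSY yields in
    each control region.  A negative prediction is clamped to zero."""
    pred = (a_d - a_s) * (c_d - c_s) / (b_d - b_s)
    return max(pred, 0.0)

# C_D = 4 is from the text; A_D and B_D are illustrative placeholders.
a_d, b_d, c_d = 3.0, 9.0, 4.0

nominal = abcd_prediction(a_d, b_d, c_d)
# LM0 contamination: C_S = 8.6 from the text; A_S, B_S illustrative.
with_lm0 = abcd_prediction(a_d, b_d, c_d, a_s=1.0, b_s=2.0, c_s=8.6)
# (C_D - C_S) < 0 drives the prediction negative, so it is clamped to
# zero, which weakens rather than strengthens the limit.
```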
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.5\linewidth]{sigcont.png}
\caption{\label{fig:sigcont}\protect Exclusion curves in the
mSUGRA parameter space,
assuming $\tan\beta=3$, sign of $\mu = +$, and $A_{0}=0$~GeV,
with and without the effects of signal contamination.
PDF UNCERTAINTIES ARE NOT INCLUDED.}
\end{center}
\end{figure}

A comparison of the exclusion region with and without
signal contamination is shown in Figure~\ref{fig:sigcont}
(with no smoothing).  The effect of signal contamination is
small, of the same order as the quantization of the scan.


\subsubsection{mSUGRA scans with different values of $\tan\beta$}
\label{sec:tanbetascan}

For completeness, we also show the exclusion regions calculated
using $\tan\beta = 10$ (Figure~\ref{fig:msugratb10}).

\begin{figure}[tbh]
\begin{center}
\includegraphics[width=\linewidth]{exclusion_tanbeta10.pdf}
\caption{\label{fig:msugratb10}\protect Exclusion curves in the mSUGRA parameter space,
assuming $\tan\beta=10$, sign of $\mu = +$, and $A_{0}=0$~GeV.  THIS IS STILL MISSING
THE PDF UNCERTAINTIES.  WE ALSO WANT TO IMPROVE THE SMOOTHING PROCEDURE.}
\end{center}
\end{figure}