Updates from Overleaf
fedhere committed Sep 10, 2024
1 parent b6fd26c commit 7350347
Showing 15 changed files with 93 additions and 82 deletions.
2 changes: 1 addition & 1 deletion PSTN-056.tex
@@ -53,7 +53,7 @@
% \setDocCurator{The Curator of this Document}

\setDocAbstract{%
The final recommendation of the SCOC for the Survey Strategy prior to the start of LSST
We present the final planned comprehensive recommendation for the Rubin Observatory Legacy Survey of Space and Time (LSST) survey strategy: this recommendation is the product of a years-long iterative process in which community recommendations to maximize the scientific impact of LSST across domains of astrophysics were reviewed, synthesized, aggregated, and merged to define the overall plan for 10 years of LSST observations. The current recommendation builds on the Phase 1 \citep{PSTN-053} and Phase 2 \citep{PSTN-055} recommendations and, together, they define a 10-year plan for observing. Here we answer questions left open in \citetalias{PSTN-055}, refine additional survey details, and describe the scope of future activities of the SCOC.
}

% Change history defined here.
8 changes: 4 additions & 4 deletions additional.tex
@@ -1,6 +1,6 @@
\section{Additional Recommendations}\label{sec:additional}

{\bf A small change to the southern portion of the footprint improves overlap with the Euclid footprint} and determines negligible changes in science metrics.
{\bf A small change to the southern portion of the footprint improves overlap with the Euclid footprint} (see Fig.~\ref{fig:euclid-overlap}) and causes negligible changes in science metrics.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/baseline_v3_0_10yrs_euclid_overlap.png}
@@ -9,16 +9,16 @@ \section{Additional Recommendations}\label{sec:additional}
\end{overpic}%\includegraphics[width=0.45\textwidth]{figures/baseline_v3_6_10yrs_euclid_overlap.png}
%\includegraphics[width=0.45\textwidth]{figures/3.0_south.png}\includegraphics[width=0.45\textwidth]{figures/3.1_south.png}

\caption{Small changes to the southern portion of the footprint improve overlap with Euclid.}
\caption{Small changes to the southern portion of the footprint improve overlap with Euclid. \label{fig:euclid-overlap}}
\end{figure}

{\bf The airmass limits for the Near-Sun Twilight microsurvey, introduced with baseline v3.0, were increased from $X=2.5$ to $X=3.0$} in \texttt{v3.2}, corresponding with decreasing the minimum solar elongation reached for this microsurvey from 60 degrees to 45 degrees. This improves the likelihood of discovery of interior-to-earth objects, increasing the survey sensitivity to this niche of discovery space. The recovered population of objects
{\bf The airmass limits for the Near-Sun Twilight microsurvey, introduced with baseline v3.0, were increased from $X=2.5$ to $X=3.0$} in \texttt{v3.2}, corresponding to decreasing the minimum solar elongation reached for this microsurvey from 60 degrees to 45 degrees. This improves the likelihood of discovery of objects with interior-to-Earth orbits, increasing the survey sensitivity to this niche of discovery space. The recovered population of objects
interior to Venus at magnitude $H\leq20$ goes from $\sim4\%$ to $\sim40\%$ in \texttt{v3.2} and later. The impacts outside the microsurvey are negligible.
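For intuition on the airmass change, the plane-parallel approximation $X \approx \sec z$ can be inverted to see how much closer to the horizon the telescope may point. This is only a rough sketch, not the scheduler's actual airmass model:

```python
import math

def zenith_angle_deg(airmass: float) -> float:
    """Invert the plane-parallel approximation X = sec(z)."""
    return math.degrees(math.acos(1.0 / airmass))

# X = 2.5 corresponds to z ~ 66.4 deg; X = 3.0 to z ~ 70.5 deg,
# i.e. the microsurvey may point ~4 deg closer to the horizon.
for X in (2.5, 3.0):
    print(f"X = {X}: z = {zenith_angle_deg(X):.1f} deg")
```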

\section{Additional changes introduced in \texttt{v3.6} \opsim s }\label{sec:opsimchanges}
Starting with \texttt{v3.6} some important assumptions underlying the simulations were updated:
\begin{itemize}
\item Increased downtime in Y1 to reflect a more realistic transition into operations. The Y1 downtime is simulated to be maximal early on, decreasing to the level expected for the general LSST survey by the end of the first year.
\item The effect of jerk on slew time is now included in the simulations, and thus included in scheduling choices.
\item The effect of jerk on slew time is now included in the simulations, and thus included in scheduling choices. Functionally this slightly increases the overhead and decreases survey efficiency.
\end{itemize}
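To make the jerk effect concrete, here is a back-of-the-envelope sketch of a minimum-time slew model. It uses the standard long-move results for trapezoidal and jerk-limited (S-curve) velocity profiles; the numerical limits below are invented placeholders, not the Rubin mount specifications:

```python
def slew_time(distance, v_max, a_max, j_max=None):
    """Minimum time for a slew long enough to reach v_max (and a_max).

    Trapezoidal profile (no jerk limit):  t = d/v + v/a.
    Jerk-limited S-curve adds one a/j ramp: t = d/v + v/a + a/j.
    Only valid for long moves where both limits are actually reached.
    """
    t = distance / v_max + v_max / a_max
    if j_max is not None:
        t += a_max / j_max  # extra time spent ramping acceleration up/down
    return t

# Placeholder numbers (deg, deg/s, deg/s^2, deg/s^3) for illustration only:
no_jerk = slew_time(10.0, v_max=7.0, a_max=10.5)
with_jerk = slew_time(10.0, v_max=7.0, a_max=10.5, j_max=21.0)
```

With these invented limits the jerk term adds a fixed 0.5 s per slew, which is the sense in which modeling jerk "slightly increases the overhead" in the simulations.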

119 changes: 63 additions & 56 deletions answers.tex

Large diffs are not rendered by default.

20 changes: 12 additions & 8 deletions intro.tex
@@ -20,36 +20,40 @@ \section{Introduction}
Additional changes to the survey strategy are described in \autoref{sec:additional}.

The current recommendation is summarized in \autoref{sec:summary}.
%Topics that the SCOC should focus on in the next round of deliberations follow in \autoref{sec:next}.

%In \autoref{sec:process} we describe the process of interaction with the community and iterative optimization of the LSST during Operations.
Topics that the SCOC should focus on in the next round of deliberations, including the process of interaction with the community and the iterative optimization of the LSST during Operations, follow in [TBD].% \autoref{sec:next}.

Note: this document includes definitions of all acronyms used within, in \autoref{sec:acronyms}.

\clearpage
%In \autoref{sec:process} we describe .

\section{Executive Summary of the Phase 3 recommendation}

Notable updates from previous recommendations, and corresponding changes to the baseline, include: updating system throughputs, expectations for engineering time, and slew performance; tweaking the filter balance in response to throuhgput changes; refining observing choices in the Galactic Plane, Bulge, and LMC/SMC/SCP; defining the implementation plan for the ToO program; recommending single visit exposures over visits implemented in ``snaps''; investigating new rolling strategy options; refining the DDF observing plans.
Notable updates from previous recommendations, and corresponding changes to the baseline, include: updating system throughputs, expectations for engineering time (particularly in the first year of LSST, Y1), and slew performance; tweaking the filter balance in response to throughput changes; refining observing choices in the Galactic Plane, Bulge, and LMC/SMC/SCP; defining the implementation plan for the ToO program; recommending single visit exposures over visits implemented in ``snaps''; investigating new rolling strategy options; refining the DDF observing plans.

\subsection{Note on how to read the SCOC plots}

In this document you will see sky maps measuring quantities (\eg, number of visits) in healpixels. The typical sky pixelization that underlies the metric calculations the SCOC review is 128 sides healpixels (covering an area $\sim 0.2\mathrm{deg}^2$) although for particularly computationally intense MAFs this is turned down to 64 or 32.
In this document, you will see sky maps measuring quantities (\eg, number of visits) in healpixels. The typical sky pixelization underlying the metric calculations the SCOC reviews uses HEALPix with $\mathrm{nside}=128$ (each pixel covering an area of $\sim 0.2\,\mathrm{deg}^2$), although for particularly computationally intensive MAFs this is reduced to $\mathrm{nside}=64$ or $32$.
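The quoted pixel area follows directly from the HEALPix definition, in which the sphere is divided into $12\,\mathrm{nside}^2$ equal-area pixels. A quick sketch, independent of the actual MAF code:

```python
import math

def healpix_area_deg2(nside: int) -> float:
    """Area of one HEALPix pixel in square degrees: the full sphere
    divided evenly into 12 * nside**2 equal-area pixels."""
    npix = 12 * nside ** 2
    sphere_deg2 = 4 * math.pi * (180 / math.pi) ** 2  # ~41253 deg^2
    return sphere_deg2 / npix

# nside=128 gives ~0.21 deg^2 per pixel; each halving of nside
# quadruples the pixel area (nside=64 -> ~0.84, nside=32 -> ~3.36).
for nside in (128, 64, 32):
    print(f"nside={nside}: {healpix_area_deg2(nside):.3f} deg^2 per pixel")
```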

The SCOC typically reviews the outcome of MAFs across multiple \opsim\ to compare scientific performance. You will see plots in three styles:

{\bf Heatmaps}: \autoref{fig:heatmap} -
the divergent color scheme shows improvements in metrics in blue and drops in metrics in red. Note that different heat maps may show different ranges in the color scheme, but the SCOC typically considers changes of more than a few percent to be significant and less than a few percent to be in the noise. One of the \opsim\ is chosen as comparison and the corresponding column will look neutral in color.
the divergent color scheme shows improvements in metrics in blue and drops in metrics in red. Note that different heat maps may show different ranges in the color scheme, but the SCOC typically considers changes of more than a few percent to be significant and less than a few percent to be in the noise. One of the \opsim\ is chosen as a comparison and the corresponding column will look neutral in color.

\begin{figure}
\centering
\begin{overpic}[width=0.8\textwidth]{figures/v1-v34heatmap.png}
\put(50,30){\color{lsstblue}\huge DRAFT}
\end{overpic}
%\includegraphics[width=0.8\textwidth]{figures/v1-v34heatmap.png}
\caption{A heatmap produced by the Observing Strategy team for the SCOC. The plot compares the performance of selected metrics across baseline simulations form \baseline{1.X} (original survey strategy) through \baseline{3.4}. In these plots, a \opsim\ is selected as the reference and all other \opsim s' performances are shown relative to that. That is: the reference \opsim\ (\baseline{2.0} in this case) has MAF=1 for all the metrics. Blue colors indicate a positive metric value, \ie\ an improvement. Red colors indicate a performance drop with respect to the reference \opsim.}
\caption{A heatmap produced by the Observing Strategy team for the SCOC. The plot compares the performance of selected metrics across baseline simulations from \baseline{1.X} (original survey strategy) through \baseline{3.4}. In these plots, a \opsim\ is selected as the reference and all other \opsim s' performances are shown relative to that. That is: the reference \opsim\ (\baseline{2.0} in this case) has MAF=1 for all the metrics. Blue colors indicate a positive metric value, \ie\ an improvement. Red colors indicate a performance drop with respect to the reference \opsim.}
\label{fig:heatmap}
\end{figure}

\FloatBarrier
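As a concrete illustration of the normalization behind these heatmaps, here is a minimal numpy sketch; the run names and metric values are invented for illustration, not taken from any actual \opsim:

```python
import numpy as np

# Invented metric values: rows = metrics, columns = baseline opsim runs.
runs = ["v2.0", "v3.0", "v3.2"]
values = np.array([
    [100.0, 104.0,  99.0],   # e.g. a coadded-depth metric
    [ 50.0,  48.0,  51.0],   # e.g. a cadence metric
])

ref = runs.index("v2.0")                # the reference opsim column
normalized = values / values[:, [ref]]  # reference column becomes exactly 1.0
# Cells > 1 would render blue (improvement), < 1 red (performance drop).
```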

{\bf Radar plots}: \autoref{fig:radar} - when comparing small numbers of metrics and few \opsim\ we often use radar plots. Each corner of the radar plot corresponds to a metric and the colored lines inside of the plots that join each corner show metric performance. In these plots the comparison \opsim s looks like a N-gone (or N-sided circle). Where the MAF performance shows improvements compared to the comparison \opsim\ the point lies outside of the N-gone, where its a loss it sits inside. The range of performance changes, so make sure you carefully inspect the plot to see the performance scaling going in and out of the N-gone (typical values are 0.9-1.1).
{\bf Radar plots}: \autoref{fig:radar} - when comparing small numbers of metrics and few \opsim\ we often use radar plots. Each corner of the radar plot corresponds to a metric, and the colored lines inside the plot that join the corners show metric performance. In these plots the comparison \opsim\ looks like an N-gon (an N-sided polygon approximating a circle). Where the MAF performance improves compared to the comparison \opsim\ the point lies outside of the N-gon; where it is a loss, it sits inside. The axis range varies between plots, so make sure you carefully inspect each plot to see the performance scaling going in and out of the N-gon (typical values are 0.9-1.1).

\begin{figure}
\centering
@@ -58,7 +62,7 @@ \subsection{Note on how to read the SCOC plots}
\end{overpic}


\caption{A radar plot comparing performance of MAFs under different filter swapping schemes (see \autoref{sec:filterswap}). This plot compres \baseline{3.0} with \baseline{3.2}. All metrics within perform as well or better in \baseline{3.2} as shown by the orange point laying outside of the blue polygon, except \texttt{N YSO} where, however, the loss is minimal and likely not statistically significant. The range of the axis is 0.8-1.3, indicating that a point lying in the center would measure a performance 20\% worse than the reference \opsim\ and a point on the outer perimeter would indicate a 30\% improvement.}
\caption{A radar plot comparing the performance of MAFs under different filter swapping schemes (see \autoref{sec:filterswap}). This plot compares \baseline{3.0} with \baseline{3.2}. All metrics perform as well as or better in \baseline{3.2}, as shown by the orange points lying outside of the blue polygon, except \texttt{N YSO}, where the loss is minimal and likely not statistically significant. The range of the axis is 0.8-1.3, indicating that a point lying in the center would measure a performance 20\% worse than the reference \opsim\ and a point on the outer perimeter would indicate a 30\% improvement.}
\label{fig:radar}
\end{figure}
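For readers who want to reproduce this style of figure, here is a minimal matplotlib sketch of a radar plot in the spirit described above; the metric names and ratios are invented placeholders:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Invented metric names and candidate/reference ratios, for illustration only:
metrics = ["Parallax", "SNe Ia", "N YSO", "Slew time"]
reference = np.ones(len(metrics))               # the comparison opsim: a unit N-gon
candidate = np.array([1.05, 1.10, 0.97, 1.02])  # >1 falls outside, <1 inside

# One spoke per metric; repeat the first vertex to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False)
angles = np.append(angles, angles[0])

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, np.append(reference, reference[0]), label="reference")
ax.plot(angles, np.append(candidate, candidate[0]), label="candidate")
ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics)
ax.set_ylim(0.8, 1.3)  # axis range: center = 0.8, perimeter = 1.3
ax.legend()
fig.savefig("radar.png")
```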

Empty file modified lsst-texmf/bin/db2authors.py (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/generateAcronyms.py (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/generateBibfile.py (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/lsstdoc2bib.py (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/makeTablesFromGoogle.py (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/showBeamerThemes (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/skipacronyms.tex (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/validate_authors.py (mode 100755 → 100644)
Empty file modified lsst-texmf/bin/validate_bib.py (mode 100755 → 100644)
Empty file modified lsst-texmf/docker/install-base-packages.sh (mode 100755 → 100644)
