
THE BLANCO COSMOLOGY SURVEY: DATA ACQUISITION, PROCESSING, CALIBRATION, QUALITY DIAGNOSTICS, AND DATA RELEASE

S. Desai et al.

Published 2012 September 6 © 2012. The American Astronomical Society. All rights reserved.
Citation: S. Desai et al. 2012 ApJ 757 83. DOI: 10.1088/0004-637X/757/1/83


ABSTRACT

The Blanco Cosmology Survey (BCS) is a 60 night imaging survey of ∼80 deg2 of the southern sky located in two fields: (α, δ) = (5 hr, −55°) and (23 hr, −55°). The survey was carried out between 2005 and 2008 in griz bands with the Mosaic2 imager on the Blanco 4 m telescope. The primary aim of the BCS survey is to provide the data required to optically confirm and measure photometric redshifts for Sunyaev–Zel'dovich effect selected galaxy clusters from the South Pole Telescope and the Atacama Cosmology Telescope. We process and calibrate the BCS data, carrying out point-spread function-corrected model-fitting photometry for all detected objects. The median 10σ galaxy (point-source) depths over the survey in griz are approximately 23.3 (23.9), 23.4 (24.0), 23.0 (23.6), and 21.3 (22.1), respectively. The astrometric accuracy relative to the USNO-B survey is ∼45 mas. We calibrate our absolute photometry using the stellar locus in grizJ bands, and thus our absolute photometric scale derives from the Two Micron All Sky Survey, which has ∼2% accuracy. The scatter of stars about the stellar locus indicates a systematic floor in the relative stellar photometric scatter in griz that is ∼1.9%, ∼2.2%, ∼2.7%, and ∼2.7%, respectively. A simple cut in the AstrOmatic star–galaxy classifier spread_model produces a star sample with good spatial uniformity. We use the resulting photometric catalogs to calibrate photometric redshifts for the survey and demonstrate scatter δz/(1 + z) = 0.054 with an outlier fraction η < 5% to z ∼ 1. We highlight some selected science results to date and provide a full description of the released data products.


1. INTRODUCTION

Since the discovery of cosmic acceleration at the end of the last millennium (Schmidt et al. 1998; Perlmutter et al. 1999), understanding the underlying causes has remained one of the key mysteries in modern astrophysics. As the most massive collapsed structures in the universe, galaxy cluster populations and their evolution with redshift provide a powerful probe of, for example, the dark energy equation of state parameter as well as alternative gravity theories that mimic cosmic acceleration (Wang & Steinhardt 1998; Haiman et al. 2001; Holder et al. 2001). Evolution of the cluster abundance depends on a combination of the angular-diameter distance versus redshift relation and the growth rate of density perturbations. This sensitivity enables one to constrain a range of cosmological parameters, including the matter density, the sum of the neutrino masses (Ichiki & Takada 2012), the present-day amplitude of density fluctuations, and the presence of primordial non-Gaussianity in the initial density fluctuations (Dalal et al. 2008; Cunha et al. 2010). In addition, galaxy clusters provide an ideal laboratory to study galaxy evolution (e.g., Dressler 1980). Interesting studies of galaxy properties and their evolution within clusters include studies of the blue fraction and the halo occupation distribution (e.g., Butcher & Oemler 1984; Lin et al. 2003, 2006; Lin & Mohr 2004; Hansen et al. 2009; Zenteno et al. 2011).

The first large-scale attempt to identify and catalog galaxy clusters was made by Abell in 1958. He discovered galaxy clusters by looking for overdensities of galaxies in Palomar Observatory photographic plates within a radius of about 2.1 Mpc around a given cluster position (Abell 1958). Abell's catalogs contained about 4700 clusters (Abell et al. 1989). However, Abell's catalog suffered from incompleteness and contamination from projection effects as well as human bias (Biviano 2008). With the advent of CCD cameras, one could apply objective automated algorithms to look for galaxy clusters, and this has led to significant progress in cosmological as well as astrophysical studies using galaxy clusters.

In the last decade, many optical photometric surveys, such as the Sloan Digital Sky Survey (SDSS), the CFHTLS, and the Red-Sequence Cluster Survey (RCS), covering contiguous regions of the sky have discovered many new galaxy clusters spanning a broad range of masses and redshifts. The CFHTLS-W (Adami et al. 2010) has observed about 171 deg2 in griz bands with 80% completeness up to an i-band magnitude of 23. The RCS-2 (Gilbank et al. 2011) survey has covered approximately 1000 deg2 in grz bands with 10σ magnitude depths of around 23.55 in the r band. The SDSS MaxBCG catalog (Koester et al. 2007) has covered about 7500 deg2 in ugriz bands with 10σ r-band magnitude limits of about 22.35. The largest optical galaxy cluster survey in terms of area is the Northern Sky Optical Cluster Survey III, which has imaged about 11,400 deg2 up to a redshift of about 0.25 (Gal et al. 2009). The deepest optical cluster survey to date is the CFHTLS-D survey (Adami et al. 2010), which reaches 80% completeness for i-band magnitudes of 26 and has detected clusters up to a redshift of 1.5. Two upcoming photometric galaxy cluster surveys which will start around 2012 October include the Dark Energy Survey (DES), which will cover about 5000 deg2 in grizY bands with 10σ r-band limiting magnitudes of 24.8, and KIDS (de Jong et al. 2012), which will cover 1500 deg2 in ugri bands with a 10σ r-band limiting magnitude of 24.45.

One can use such surveys for cosmological studies using galaxy clusters. For example, Gladders et al. (2007) showed that a large optical galaxy cluster survey could constrain cosmological parameters using the self-calibration method (Majumdar & Mohr 2003, 2004). The first cosmological constraints using SDSS optical catalogs are described in Rozo et al. (2010).

Over the last decade, there have been several millimeter (mm)-wave cluster studies in the southern hemisphere, including ACBAR (Reichardt et al. 2009), the Atacama Cosmology Telescope (ACT) (Fowler 2004), APEX (Gonzalez et al. 2001), and the South Pole Telescope (SPT) (Ruhl et al. 2004). All of these projects have attempted to carry out galaxy cluster surveys using the Sunyaev–Zel'dovich effect (SZE). The SZE is the distortion of the cosmic microwave background (CMB) spectrum due to inverse Compton scattering of CMB photons by hot electrons in galaxy clusters (Sunyaev & Zel'dovich 1972), and it provides a promising way to discover galaxy clusters. Because the surface brightness of the SZE signature of a particular cluster is independent of redshift, SZE survey cluster samples can in principle have sensitivity over a broad range of redshifts (Birkinshaw 1999; Carlstrom et al. 2002). However, to make use of SZE-selected galaxy cluster samples, one needs a well-understood selection of galaxy clusters (sample contamination and completeness), cluster redshift estimates, and a link between the SZE signature and the cluster halo masses. It is important to note that redshift estimates cannot be obtained using SZ experiments alone, and so one needs dedicated optical surveys to follow up these galaxy clusters detected by SZ surveys.

The Blanco Cosmology Survey (BCS) is an optical photometric survey that was designed for this purpose and positioned to overlap the ACBAR, ACT, APEX, and SPT surveys in the southern hemisphere. The goal of BCS is to enable cluster cosmology by providing the data to confirm galaxy clusters from the above surveys and to measure their photometric redshifts. This was done by surveying two patches totaling ∼80 deg2 positioned so that they could be observed with good efficiency over the full night during the period October–December from Chile. The BCS observing strategy was chosen to obtain depths roughly 2 mag deeper than SDSS, so that one could estimate photometric redshifts for galaxies as faint as L* out to a redshift z = 1.

The outline of this paper is as follows: Section 2 describes the BCS, including the camera, observing strategy, and site characteristics. In Section 3, we describe in detail the processing and calibration of the data set using the Dark Energy Survey Data Management (DESDM) system. In Section 3.3, we describe the photometric characteristics of the BCS data set and present single galaxy photometric redshifts that are tuned using fields containing large numbers of spectroscopic redshifts. In this paper all magnitudes refer to AB magnitudes.

2. BCS SURVEY

BCS was a NOAO Large Survey project (2005B-0043, PI: Joseph Mohr) which was awarded 60 nights between 2005 (starting from semester 2005B) and 2008 on the Cerro Tololo Inter-American Observatory (CTIO) Blanco 4 m telescope using the Mosaic2 imager with griz bands. Because of shared nights with other programs, the data acquisition spanned 69 nights, but the final processed data set consists of only 66 nights, because two nights were entirely clouded out and the pointing solution for one night (20071105) was wrong due to observer error. We now describe the Mosaic2 imager on the Blanco telescope and then discuss the BCS observing strategy.

2.1. Mosaic2 Imager

The Mosaic2 imager is a prime focus camera on the Blanco 4 m telescope that contains eight 2048 × 4096 CCD detectors. The eight SITe 2K × 4K CCDs are read out in dual-amplifier mode, in which the two halves of each CCD are read out in parallel through separate amplifiers. The read noise is about 6–8 electrons and the readout time is about 110 s. The dark current rate is less than 1 electron pixel−1 hr−1 at 90 K. The resulting mosaic array is a square of about 5 inches on an edge. The gaps between CCDs are kept to about 0.7 mm in the row direction and 0.5 mm in the column direction. Given the fast optics at the prime focus on the Blanco, the pixels subtend 0.27 arcsec on the sky. The total field of view is 36.8 arcmin on a side, for a total solid angle per exposure of ∼0.4 deg2. More details on the Mosaic2 imager can be found in the online CTIO documentation.17

2.2. Field Selection and Multi-wavelength Coverage

The survey was divided into two fields to allow efficient use of the allocated nights between October and December. Both fields lie near δ = −55°, which allows for overlap with the SPT and other mm-wave surveys. One field is centered near α = 23.5 hr and the other near α = 5.5 hr. The 5.5 hr, −52° patch consists of a 12 × 11 array of Blanco pointings, and the 23 hr, −55° patch is a 10 × 10 array of pointings. The 5 hr field lies within the Boomerang field where the ACBAR experiment took data. The 23 hr field has been observed by the APEX, ACT, and SPT experiments. In addition to the large science fields, BCS also covers nine small fields that overlap large spectroscopic surveys, so that photometric redshifts using BCS data can be trained and tested using a sample of over 5000 galaxies with spectroscopic redshifts. BCS also surveyed standard star fields for photometric calibration. The coverage of BCS in the 5 hr and 23 hr fields is shown in Figure 1. For convenience of data processing and building catalogs, we divide the survey region into 36′ × 36′ square regions called tiles. Each tile is an 8192 × 8192 pixel portion of a tangent plane projection. These tiles are set on a grid of points separated by 34′, allowing for approximately 1′ overlaps of sky between neighboring tiles. The black vertical hatches in Figure 1 indicate the locations of tiles that passed various quality checks, and the red horizontal hatches indicate tiles that were observed and processed but failed data quality checks.


Figure 1. BCS survey footprint of co-added tiles in the 5 hr and 23 hr fields. There are 104 tiles covering ∼35 deg2 in the 23 hr field and 138 tiles covering ∼45 deg2 in the 5 hr field for a total coverage of ∼80 deg2. The black vertically hatched boxes represent tiles which have passed our quality checks. The red horizontally hatched boxes represent tiles with some data quality problems that we have not corrected.


We also secured other multi-wavelength observations overlapping parts of the BCS fields. About 14 deg2 of the 23 hr BCS field was surveyed using XMM-Newton (known as XMM-BCS survey) and results from those observations are reported elsewhere (Šuhada et al. 2012). An ∼12 deg2 region of the same field was also targeted in a Spitzer survey (S-BCS). More recently, the XMM-Newton survey has been expanded to 25 deg2, and the Spitzer survey has been expanded to 100 deg2. Most of the BCS region has been observed in the near-infrared as part of the ESO VISTA survey program (Cioni et al. 2011).

2.3. Observing Strategy

The BCS observing strategy was designed to allow us to accurately measure cluster photometric redshifts out to redshift z = 1. Because the 4000 Å break redshifts to 8000 Å by z = 1, obtaining reliable photo-z's for 0 < z < 1 requires all four photometric bands g, r, i, and z (i.e., one loses all clusters at z < 0.4 if the g band is dropped, and with the z band one can actually push beyond z = 1). The redshift at which the 4000 Å break moves beyond a particular band sets, crudely speaking, the maximum redshift for which that band is useful for cluster photo-z's; for griz this is z = 0.35, 0.7, 1.0, and 1.4, respectively. Because the central wavelength of the g band is about 4800 Å with a full width at half maximum (FWHM) of 1537 Å, it is not possible to straddle the 4000 Å break at very low redshift, and we therefore start losing sensitivity to very low redshift clusters. Although detailed studies of the sensitivity of optical cluster detection at low redshifts have not been done, our ability to estimate unbiased red-sequence redshifts for clusters is reduced below redshifts z ∼ 0.1.

We calculate our photometric limits in each band by requiring that the depth allows us to probe at least to L* at that maximum redshift with 10σ photometry. We use a Bruzual & Charlot single-burst model formed at z = 3 and evolving passively (Bruzual & Charlot 2003) to calculate the evolution of L* in the four bands (see Figure 2). We select our z-band depth to probe to L* at z = 1 rather than at z = 1.4, because of the low sensitivity of the Mosaic2 detectors in the z band. The survey was designed to reach 10σ photometric limits within a 2.2 arcsec aperture of g = 24.0, r = 23.9, i = 23.6, and z = 22.3. These limits assume an airmass of 1.3 and 0.9 arcsec median seeing in all bands. Assuming bright time for z and i and dark time for g and r, these limits require exposures of 250 s, 600 s, 1400 s, and 700 s in griz, respectively.


Figure 2. Redshift evolution of a passively evolving L* galaxy along with target 10σ photometric BCS depths in each band. The exposure times in each band were tuned so that photometric depth meets or exceeds L* out to the redshift where the 4000 Å break shifts out of that band, but also limited to z = 1 due to the low sensitivity of the Mosaic2 camera in the z band.


In all, we observed about 288 tiles spanning our survey fields. For each pointing, we typically took two exposures in g of 125 s each, two exposures in r of 300 s each, three exposures in i of 450 s each, and three exposures in z of 235 s each. A limitation of the Mosaic2 detector is its very low saturation level of around 25,000 ADU for most of the CCDs, and this forced us to take short exposures even though the readout time for each was quite high. Neighboring pointings have small overlaps, and the multiple exposures were offset by approximately half the width of an amplifier to help us tie the survey together photometrically. Having two shifted exposures also allows us to largely overcome the gaps in our survey left by the spaces between neighboring chips. In addition to this primary survey tiling, we also constructed another layer of tilings, designed to sit at the vertices of unique groups of four adjacent primary pointings. These tiles were observed using shorter exposures during poor seeing conditions on photometric nights. The 110 s readout of the Mosaic2 camera makes the efficiency of short exposures low, and so in each band we chose the minimum number of exposures allowable given the sky brightness. The total exposure per tile is 3000 s and, after including the readout time, the total observation time per tile is about 4200 s, giving us an overall efficiency of about 70%. The dome flats and bias frames were taken in the afternoon, and we did not take any twilight flats. Over the course of the survey, we acquired just over 3000 science exposures and an additional 455 photometric overlap exposures.

In addition to science exposures, on photometric nights we also observed photometric calibration fields as well as fields for calibrating our photo-z algorithms; the latter were the CNOC2, DEEP, CFRS, CDFS, SSA22, and VVDS fields. For the photometric calibration fields, we typically observed two or three fields during evening and morning twilight and a single field during the transition from the 23 hr field to the 5 hr 30 minute field. We observed in all four bands during these calibration exposures. The spectroscopic calibration fields were observed to full science depth using the same strategy as for the full survey.

2.4. Site Characteristics

The BCS survey provides a sampling of the CTIO site characteristics during a 69 night period in the October to December time frame, spanning four observing seasons. Because this is the same time frame planned for DES observations, it provides an interesting glimpse into the expected site characteristics for DES. Given that the entire Mosaic2 camera and wide-field corrector are being replaced by DECam and the new DECam wide-field corrector (Soares-Santos et al. 2011), the seeing distribution for the DES data could be significantly improved relative to the BCS seeing distribution.

The seeing distribution is shown in the top panel of Figure 8. The seeing was obtained by running the PSFEx software on all single-epoch images and using the FWHM_MEAN parameter, which is derived from elliptical Moffat fits to the non-parametric point-spread function (PSF) models. These FWHMs include the pixel footprint. The modal seeing values integrated over the survey are ≃1.0, 0.95, 0.8, and 0.95 arcsec for the griz bands, respectively. The median seeing values are 1.07, 0.99, 0.95, and 0.95 arcsec, while the lower and upper quartile seeing values are [0.96, 1.26], [0.89, 1.16], [0.84, 1.13], and [0.83, 1.11] arcsec, respectively.

The sky brightness is shown in Figure 3. The sky brightness is calculated as ZP − 2.5 log10(B), where ZP is the calculated zero point for that image and B is the sky brightness in ADU arcsec−2. The sky brightness distributions in the griz bands have modal values of approximately 22.5, 21.5, 20.5, and 18.75 mag arcsec−2, respectively, and median values of 22.3, 21.3, 20.3, and 18.7 mag arcsec−2. Moreover, almost all i- and z-band data were taken with the moon up, while almost all g- and r-band data were taken with the moon set.
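For concreteness, the conversion from a per-pixel sky level to mag arcsec−2 can be sketched as follows (a minimal example; the zero point and sky level below are made-up numbers, and the 0.27 arcsec pixel scale is taken from Section 2.1):

```python
import numpy as np

def sky_brightness_mag_per_arcsec2(zero_point, sky_adu_per_pixel, pixel_scale=0.27):
    """Convert a per-pixel sky level in ADU to mag arcsec^-2.

    B is the sky level in ADU arcsec^-2, obtained by dividing the per-pixel
    sky level by the pixel solid angle (0.27 arcsec Mosaic2 pixels).
    """
    B = sky_adu_per_pixel / pixel_scale**2
    return zero_point - 2.5 * np.log10(B)

# Hypothetical example: zero point of 25.0 mag, modal sky of 500 ADU per pixel
print(sky_brightness_mag_per_arcsec2(25.0, 500.0))
```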


Figure 3. Sky brightness distributions for all four bands averaged on a per exposure basis during the BCS survey. Typically, we observed in g and r during dark time and i and z during bright time. The brightness values are peaked at around 22.5, 21.5, 20.5, and 18.75 mag arcsec−2 in griz bands, respectively.


Given the division of the survey into a 23 hr and a 5 hr field, it was possible to obtain most of the data at relatively low airmass. Figure 4 shows the airmass distributions for each band during primary survey observations. We often obtained photometric calibration field observations over a broader range of airmasses, but we tried to restrict our primary survey observations to airmasses of <1.6. The median airmass in bands griz is 1.144, 1.147, 1.138, and 1.141, respectively.


Figure 4. Airmass distributions for BCS exposures, color coded by band and normalized by total number of exposures. The peak airmass values in griz bands are 1.144, 1.147, 1.138, and 1.141, respectively.


3. DATA PROCESSING AND CALIBRATION

The processing of BCS data is carried out using the automated DESDM system, which has been under development since Fall 2005 at the University of Illinois (Ngeow et al. 2006; Mohr et al. 2008). DESDM will be used to process, calibrate, and store data from the DES once it begins operations in 2012 October. Since 2005, the DESDM system has been validated through a series of data challenges with simulated DECam data, which enabled us to improve various steps of the pipeline. The same automated pipeline was used to analyze the BCS data. The only change to the DESDM pipeline for the BCS analysis was in the cross-talk correction code, where the routine had to be customized for the Mosaic2 camera. Processing of the BCS data presented here has been carried out on national TeraGrid resources at NCSA and on LONI supercomputers, together with dedicated workstations needed for orchestrating the jobs and hosting the database. The middleware for the data reduction pipeline is built around the Condor batch processing system. Processing each night of data takes about 300 CPU hours.

We have processed BCS data multiple times in a process of discovery where we found problems with the data that required changes to our system. Scientific results from earlier rounds of processing of BCS data have already appeared, including the optical confirmation of the first ever SZE-selected galaxy clusters (Staniszewski et al. 2009) and the discovery of a strong gravitational lensing arc using data from the first round of processing in Spring 2008 (Buckley-Geer et al. 2011). Additional galaxy cluster science arising from subsequent rounds of BCS processing has also been published (High et al. 2010; Zenteno et al. 2011; Šuhada et al. 2012). Currently, our latest processing is being used for additional SZE cluster science within SPT, continued studies of the X-BCS region, and for the follow-up of the broader XMM-XXL survey over the 23 hr field.

The BCS data were made public one year after their acquisition, as is standard policy at NOAO. This has enabled multiple independent teams to access the data and use them for their own scientific aims. The first three seasons of BCS data were processed using an independent pipeline developed at Rutgers University (Menanteau et al. 2010b; Menanteau & Hughes 2009). All four seasons of BCS data have also been processed independently using the NOAO pipeline as part of the current automated processing program, and with the PHOTPIPE analysis pipeline (Rest et al. 2005).

3.1. Detrending

In this section, we describe in detail the key steps in the DESDM pipeline used to reduce the Mosaic2 data and convert raw data products into science-ready catalogs and images. Data from every night are processed through a nightly processing or detrending pipeline. Then data from different nights in the same part of the sky are combined using the co-addition pipeline. The detrending pipeline consists of cross-talk correction; overscan, bias, and flat-field corrections; pixel-scale, illumination, and fringe corrections; astrometric calibration; and cataloging. We now describe in detail each step of the detrending pipeline.

3.1.1. Cross-talk Corrections

A common feature of multi-CCD cameras, such as the Mosaic2 imager, is cross talk among the signals from otherwise independent amplifiers or CCDs. This leads to a CCD image containing not only the flux distribution that it collected from the sky, but also a low-amplitude version of the sky flux distributions that appear in other CCDs. The cross-talk correction equation is described by

I_i = I_i^raw − Σ_{j ≠ i} α_ij I_j^raw,     (1)

where I_i denotes the cross-talk-corrected image pixel value in the ith CCD, α_ij denotes the cross-talk coefficients, and I_j^raw is the raw image pixel value in the jth CCD (or amplifier). We used cross-talk coefficients provided by NOAO throughout the survey. As part of the cross-talk correction stage, the raw image (which contains 16 extensions) is split into one single-extension file per CCD. The processing and calibration of CCD mosaics can proceed independently for each CCD after the cross-talk correction, and therefore we split the images to enable efficient staging of the data to the compute resources.
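A minimal sketch of this correction in Python, assuming the per-amplifier (or per-CCD) images have already been split out and the coefficient matrix α_ij is available; the array layout and function name are illustrative, not part of the DESDM code:

```python
import numpy as np

def crosstalk_correct(raw, alpha):
    """Apply Equation (1): subtract scaled copies of the other channels.

    raw   : array of raw images, shape (n, ny, nx), one per amplifier/CCD
    alpha : (n, n) matrix of cross-talk coefficients alpha_ij (alpha_ii = 0)
    """
    raw = np.asarray(raw, dtype=float)
    corrected = raw.copy()
    n = raw.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j:
                corrected[i] -= alpha[i, j] * raw[j]
    return corrected
```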

3.1.2. Image Detrending

Detrending is the process that removes the instrumental signatures from the images. Detrending, in this context, includes overscan correction, bias subtraction, flat fielding, pixel-scale correction, and fringe and illumination correction. Both the overscan correction and the bias correction are required to remove the bias level present in the CCD and any residual, recurrent structure in the DC bias. Overscan correction is done for all raw science and calibration images. After the cross-talk correction stage, we subtract from the raw image pixel values the median pixel value of the overscan region in each row, separately for the two amplifiers of each CCD.

The median bias frame is created using nightly bias frames taken during the late afternoon, and subtracted from the nightly data. The flat-field correction is typically derived from dome flats taken for each observing band. The input dome flat images are overscan corrected, bias corrected, and then scaled to a common mode and then median combined. The resulting flat-field correction is scaled by the inverse of the image mode, creating a correction with a mean value of about unity. For the bias correction and the flat correction, the variation among the input images is used to create an inverse variance weight map that is stored as a second extension in the correction images. The creation of correction images also requires a bad pixel map, which is an image where pixels with poor response or with high dark current are masked and excluded from the images. These bad pixel maps are created initially using bias correction and flat-field correction images to identify the troublesome pixels.

The bias and flat-field corrections are then applied to the science images to remove pixel-to-pixel sensitivity variations. These corrections are only applied to those science pixels that are not masked. In this process, each science image receives an associated inverse variance weight map that encodes the Poisson noise levels and Gaussian propagated noise from each correction step on a per pixel basis. In addition, each science image has an associated bad pixel map (short integer) where a bit is assigned to each type of masking (i.e., pixels masked from the original bad pixel map, or masked due to saturation, cosmic ray, etc.). In our data model, the science image has three extensions: image, weight map, bad pixel map. Each measured flux at the pixel level comes along with its statistical weight and a history of any masking that has been done on that pixel.
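The following sketch illustrates what one such detrending step with inverse-variance weight propagation might look like; the noise model (a Poisson term plus a fixed read noise), the parameter values, and the function name are simplifying assumptions, not the DESDM implementation:

```python
import numpy as np

def detrend(raw, overscan_level, bias, flat, gain=1.0, read_noise=7.0, bpm=None):
    """Overscan-subtract, bias-subtract, and flat-field one CCD image,
    propagating an inverse-variance weight map (illustrative noise model)."""
    sci = (raw - overscan_level - bias) / flat
    # Illustrative noise model: Poisson term plus read noise, propagated
    # through the flat-field division.
    var = (np.clip(sci * flat, 0, None) / gain + read_noise**2) / flat**2
    weight = 1.0 / var
    if bpm is not None:
        weight[bpm != 0] = 0.0   # zero weight for masked pixels
    return sci, weight
```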

The Mosaic2 imager has significant focal plane distortion, so the pixel scale varies appreciably over the field, leading to a trend in the delivered pixel brightness as a function of position even for a flat input sky. For such detectors, flattening the sky introduces a photometric non-flatness across the focal plane. Typically this pixel-scale variation is corrected during the process of remapping to a portion of a tangent plane, but in our case we prefer to do the single-epoch cataloging on images that do not suffer from correlated noise. Therefore, we apply a pixel-scale correction to account for the variation of pixel response as a function of x and y position on each CCD. We first create master template images for determining the photometric flatness correction, using astrometrically refined images from the Mosaic2 camera to calculate the solid angle of each pixel. The correction image is then normalized by its median value, providing a flat-field-like correction image that can be used to bring all pixels to a uniform flux sensitivity. To avoid reintroducing trends in the sky with this correction, we apply it only to the values of each pixel after subtraction of the modal sky value. Effectively, this correction scales only source flux while maintaining a flat sky.
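In schematic form, the correction rescales only the sky-subtracted flux (a sketch; the use of the median as a stand-in for the modal sky value and the multiplicative sense of the normalized solid-angle map are assumptions):

```python
import numpy as np

def pixel_scale_correct(image, area_map):
    """Rescale source flux by the relative pixel solid angle while keeping
    the sky flat: the correction is applied only above the sky level.

    area_map : solid-angle correction image, normalized to a median of 1
    """
    sky = np.median(image)                    # stand-in for the modal sky value
    return sky + (image - sky) * area_map
```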

Illumination and fringe corrections are derived from fully processed science observations in a particular band. These can come from a single night or be shared across nights. Usually, if there was only one exposure in a given band on a night, we use science observations from neighboring nights to create the illumination and fringe correction images. Illumination corrections are applied to all images, but fringe corrections on the Mosaic2 camera are needed only for the i and z bands. To create these correction images, we first create sky flat templates. This requires stacking all the detrended images in a band–CCD combination after first flagging all pixels contaminated by source flux. Source-contaminated pixels are determined by applying a simple threshold above background with a variable grow radius, so that all neighboring pixels of a pixel determined to contain source flux are also masked. Modal sky values are then calculated for each image using pixels that are not flagged for any reason (object pixels, hot columns, saturated, interpolated, etc.). The reduced images are then scaled to a common modal sky value, median combined, and rescaled to a unit modal value.

This science sky image then contains a combination of any illumination and fringe signatures that are common to the input images. To create the illumination correction, we adaptively smooth the science sky images with a kernel that is large in the center and grows smaller near the edges. This effectively averages out the effects of any fringing, leaving an illumination correction image behind. The fringe correction is then produced by first differencing the science sky image and the illumination correction image, leaving behind an image of the small-scale structure (i.e., fringe signature) that is common to all the science images. This fringe correction image is then scaled by the modal value of the science flat image to produce a fractional fringe correction image.
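A simplified sketch of building the two templates from masked, sky-scaled science frames; the single Gaussian smoothing stands in for the adaptive smoothing described above, and the function and parameter names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sky_templates(images, masks, smooth_sigma=100.0):
    """Build illumination and fringe templates from detrended science frames.

    images : stack of detrended images for one band/CCD, shape (n, ny, nx)
    masks  : boolean stack, True where pixels are source-contaminated or flagged
    """
    scaled = []
    for img, m in zip(images, masks):
        sky = np.median(img[~m])                # stand-in for the modal sky value
        scaled.append(np.where(m, np.nan, img / sky))
    skyflat = np.nanmedian(np.stack(scaled), axis=0)
    skyflat /= np.nanmedian(skyflat)            # rescale to a unit modal value
    # One Gaussian smoothing stands in for the adaptive smoothing in the text.
    illumination = gaussian_filter(np.nan_to_num(skyflat, nan=1.0), smooth_sigma)
    fringe = skyflat - illumination             # small-scale (fringe) residual
    return illumination, fringe
```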

The illumination correction image is applied like a flat-field correction to all previously corrected images, thereby removing any trends that are introduced by the differences in illumination of the dome flats and the flat sky. The fringe correction is applied by first scaling the correction image by the modal value of the sky in the science image and then subtracting it. The results of these two corrections are visually very impressive. The fringe effects in i and z bands are nicely removed in almost all cases. We have found some problem images where the fringe correction leaves clearly visible fringe signatures, and these are cases where only a few frames in i or z were taken on a particular night, and the use of images from neighboring nights to create the corrections was not adequate.

We expect that the residual scatter we measure could be further reduced using a star flat technique to better characterize the non-uniformities in the pupil ghost. Nevertheless, the delivered data quality from our current flattening prescription produces data that meet our data quality requirements. We note that the same prescription has been used previously to meet the data quality requirements of the SuperMACHO experiment in the processing of Mosaic2 data.

At the end of this series of image detrending steps which includes overscan, bias, flat field, pixel-scale, and illumination and fringe corrections, the pipeline creates eight images (one for each CCD) for every science exposure. These single-epoch image FITS files are called red images, and they contain three extensions: the main image, a bad pixel mask (BPM), and an inverse variance weight image. The BPM contains a short integer image where any unusable pixels have non-zero values (coded according to the source of the problem). The weight map is an inverse variance image map that tracks the noise on the pixel scale and where the weight is set to zero for all masked pixels.

3.1.3. Astrometric Calibration

Besides pointing errors, wide-field imagers exhibit instrumental distortions that generally deviate significantly from those of a pure tangential projection. In addition, the vertical gradient of atmospheric refractivity creates a small image flattening of the order of a few hundredths of a percent (corresponding to a few pixels on a large mosaic), with direction and amplitude depending on the direction of the pointing. These three contributions are modeled in the SCAMP (Bertin 2006) package that we use for astrometric calibration. SCAMP uses the TPV distortion model,18 which maps detector coordinates to true tangent plane coordinates using a polynomial expansion.

SCAMP is normally meant to be run on a large set of SExtractor catalogs extracted from overlapping exposures together with a reference catalog, in order to derive a global solution. However, since our pipeline operates on an image-by-image basis, we proceed in two steps. We first run SCAMP once on a small subset of catalogs extracted from BCS mosaic images to derive an accurate polynomial model of the distortions, in which the distortions in the R.A./decl. tangent plane are expressed as a third-degree polynomial function of the CCD x/y position. The Mosaic2 distortion map, modeled using a third-order polynomial per CCD, is shown for a BCS exposure in Figure 5. The astrometric solution computed in this first step is based on a set of overlapping catalogs from dithered exposures, which provides tighter constraints on the nonlinear distortion terms than catalogs taken individually. Using this model, we create a distortion catalog that encodes the fixed distortion pattern of the detector. We then run SCAMP on catalogs from each individual exposure (i.e., the union of the catalogs from each of the eight single-epoch detrended images), allowing only linear terms (two for small position offsets and four for the linear distortion matrix) describing the whole focal plane to vary from exposure to exposure. The solutions for the World Coordinate System (WCS), including the TPV model parameters, are then inserted back into the image headers. This approach capitalizes on the expected constancy of the instrumental distortions over time.
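The distortion model fitted in the first step amounts to a low-order 2D polynomial per CCD. The sketch below evaluates such a polynomial; it is a conceptual illustration only — the actual TPV convention stores coefficients as PV header keywords and includes a radial term, and the coefficient format here is assumed:

```python
import numpy as np

def tangent_plane_offsets(x, y, coeff_x, coeff_y):
    """Evaluate a third-order, per-CCD polynomial distortion model mapping
    pixel coordinates (x, y) to tangent-plane offsets (xi, eta).

    coeff_x, coeff_y : dicts keyed by (i, j) exponents with i + j <= 3
    (an illustrative coefficient format, not the TPV header convention).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xi = np.zeros_like(x)
    eta = np.zeros_like(x)
    for (i, j), c in coeff_x.items():
        xi += c * x**i * y**j
    for (i, j), c in coeff_y.items():
        eta += c * x**i * y**j
    return xi, eta
```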


Figure 5. Distortion map produced by SCAMP for one Mosaic2 exposure consisting of eight images. The TPV distortion model was used. The Mosaic2 distortions were modeled for each CCD by expressing the distortions along the R.A. and decl. directions each as a third-order polynomial in CCD x and y.


We use the USNO-B1 (Monet et al. 2003) catalog as the astrometric reference. For the astrometric refinement, the cataloging is done using SExtractor, with windowed barycenters used to estimate the positions of sources.

The astrometric accuracy is quite good, as can be demonstrated with the BCS co-adds. First, the accuracy is at the level of a fraction of a PSF or else significant PSF distortions would appear in the co-adds, and this is not the case. Second, we can measure the absolute accuracy relative to the calibrating catalog USNO-B by probing for systematic offsets in R.A. or decl. between our object catalogs and those from the calibration source. Figure 6 shows the distribution of median offsets within all the co-add tiles for both R.A. and decl. The mean of the histograms is 0.0104 arcsec in R.A. and 0.0084 arcsec in decl., and the corresponding rms scatter is 47 mas and 45 mas, respectively. The USNO-B catalog itself has an absolute accuracy with a characteristic uncertainty of 200 mas (Monet et al. 2003), which then clearly dominates the astrometric uncertainty of our final catalogs.
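The per-tile offsets can be measured with a simple catalog match; the sketch below uses astropy with a 2 arcsec matching window (the array inputs and the function name are illustrative):

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def median_astrometric_offsets(ra, dec, ra_ref, dec_ref, match_radius=2.0):
    """Median R.A./decl. offsets (arcsec) of a tile catalog against a
    reference catalog (e.g., USNO-B1), using a 2 arcsec matching window."""
    cat = SkyCoord(ra=ra * u.deg, dec=dec * u.deg)
    ref = SkyCoord(ra=ra_ref * u.deg, dec=dec_ref * u.deg)
    idx, sep2d, _ = cat.match_to_catalog_sky(ref)
    ok = sep2d < match_radius * u.arcsec
    dra = (cat.ra - ref.ra[idx]).wrap_at(180 * u.deg).to(u.arcsec) * np.cos(cat.dec.radian)
    ddec = (cat.dec - ref.dec[idx]).to(u.arcsec)
    return np.median(dra[ok].value), np.median(ddec[ok].value)
```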


Figure 6. Median value of the difference in R.A. and decl. for objects in BCS co-add catalogs vs. the USNO-B catalog for every tile, in arcseconds. The matching is done in a 2″ window. The histograms are peaked at ∼0.0104 and ∼0.0084 arcsec in ΔR.A. and Δdecl., respectively. The rms of the histograms in R.A. and decl. is about 0.047 and 0.045 arcsec. Note that the intrinsic accuracy of the USNO-B catalog is about 0.2 arcsec (Monet et al. 2003).


3.1.4. Single-epoch Cataloging

To catalog all objects from single-epoch images, we run SExtractor using PSF modeling and model-fitting photometry. A PSF model is derived for each CCD image using the PSFEx package (Bertin 2011). PSF variations within each CCD are modeled as an Nth degree polynomial expansion in CCD coordinates. For our application, we adopt a 26 × 26 pixel kernel and follow variations to third order. An example of variation of the FWHM of the PSF model across a single-epoch image is shown in Figure 7. The FWHM varies at the 10% level across this CCD due to both instrumental and integrated atmospheric effects.


Figure 7. Variation of the PSF model FWHM for the g band across a single-epoch image from the BCS night 20061030. Variations across the roughly 10′ × 20′ image are at the 10% level.


A new version of SExtractor (version 2.14.2) uses this PSF model to carry out PSF-corrected model-fitting photometry over each image. The code proceeds by fitting a PSF model and a galaxy model to every source in the image. The two-dimensional modeling uses a weighted χ2 that captures the goodness of fit between the observed flux distribution and the model and iterates to a minimum. The resulting model parameters are stored and “asymptotic” magnitude estimates are extracted by integrating over these models. This code has been extensively tested within the DESDM program on simulated images, but the BCS data provide the first large-scale real world test. For the BCS application, we adopt a Sérsic profile galaxy model that has an ellipticity and orientation. This model fitting is computationally intensive and slows the “lightning-fast” SExtractor down to a rate on the order of 10 objects s−1 on a single core. The SExtractor config file detection parameters are shown in Table 1.

Table 1. SExtractor Detection Parameters

Parameter Value
DETECT_TYPE CCD
DETECT_MINAREA 5
DETECT_THRESH 1.5
ANALYSIS_THRESH 1.5
FILTER Y
FILTER_NAME gauss_3.0_3x3.conv
DEBLEND_NTHRESH 32
DEBLEND_MINCONT 0.005
CLEAN Y
CLEAN_PARAM 1.0
BACKPHOTO_THICK 24.0


The advantages of model-fitting photometry on single-epoch images that have not been remapped are manifold. First, pixel-to-pixel noise correlations are not present in the data and do not have to be corrected for in estimating measurement uncertainties. Second, unbiased PSF and galaxy model-fitting photometry is available across the image, allowing one to go beyond an approximate aperture correction to aperture magnitudes often used to extract galaxy and stellar photometry. Third, there are morphological parameters that can be extracted after directly accounting for the local PSF, which allows for improvements in star–galaxy classification and the extraction of PSF-corrected galaxy shear. A more detailed description of these new SExtractor capabilities along with the results from an extensive testing program within DESDM will appear elsewhere (E. Bertin et al., in preparation).

3.1.5. Remapping

From the WCS parameters, which are computed for every reduced image, one can approximate the footprint of the CCD on the sky using frame boundaries in R.A. and decl. For the BCS survey, we have a pre-defined grid of 36′ × 36′ tangent plane tiles covering the observed fields. Based on this, for every astrometrically calibrated red image we determine which tiles it overlaps. We then use SWarp (Bertin et al. 2002) to produce background-subtracted remapped images that conform to sections of these tangent plane tiles. A particular red image can be remapped to up to four different remap images in this process. Pixels are resampled using Lanczos-3 interpolation.

Remapping also produces a pixel weight map, and we also remap the bad pixel map (using nearest-neighbor remapping). In this process, zero-weight pixels in the reduced images generically impact multiple pixels in the remap image, given the size of the interpolation kernel. These remaps are then stored for later photometric calibration and co-addition. This on-the-fly remapping is not strictly necessary, because one could in principle return to the red images at the co-addition stage, but given the PSF homogenization we perform prior to co-addition, we have found it convenient to do the remapping as we process the nightly data sets.

3.1.6. Nightly Photometric Calibration

Our initial strategy for photometric calibration involved traditional photometric calibration using the standard fields observed on photometric nights along with the image overlaps to create a common zero point across all of our tiles. In fact, within DESDM we have developed a so-called Photometric Standards Module (PSM; Tucker et al. 2007) that we use to fit for nightly photometric solutions, and then we apply those solutions to all science images and associated catalogs from that night. For BCS this involves determining the zero points of all images on photometric nights through calibration to identified non-variable standard stars from the SDSS Stripe 82 field (Smith et al. 2002).

This procedure was used for the processing and calibration of the BCS data processed in Spring 2008. But closer analysis of these data showed that we were not able to control photometric zero points to the level required to allow for cluster photometric redshifts over the full survey area. We therefore abandoned this method for BCS in favor of relative photometric calibration using common stars in overlapping red images, followed by absolute calibration using the stellar locus (described in more detail in Section 3.2.5). One problem we faced is that so-called photometric nights exhibited non-photometric behavior in the standard field observations. There was no reliable photometric monitor camera at CTIO during our survey, and so observers simply used the time-honored tradition of watching for clouds to make the call on a night being photometric. Because of our strategy for standard star observations (beginning, middle, and end of night), even those nights that exhibit consistent photometric solutions need not have been photometric over the full night. Therefore, we felt it safer to assume that no night was truly photometric and to calibrate the data using an entirely different approach.

The results from the PSM module for those nights exhibiting good photometric solutions are still useful. They have allowed us to monitor changes in the detectors and to measure the color terms in transforming our photometry onto the SDSS system. We provide a brief description of this procedure, although no science results in this paper are based on PSM-related direct photometric calibration. We expect to apply this method for absolute photometric calibration of DES data where we will indeed have an IR photometric monitoring camera on the mountain. The PSM solves for the following equation:

m_inst − m_std = a_n + b_n (stdColor − stdColor0) + k X,     (2)

where m_inst is the instrumental magnitude, m_std is the standard-star magnitude, a_n is the photometric zero point for each of the eight CCDs, b_n is the color term, stdColor is the fiducial color around which we define our standard solutions (g − r for the g and r bands, and r − i for the i and z bands), stdColor0 is a constant equal to g − r = 0.53 for the g and r bands and r − i = 0.09 for the i and z bands, k is the first-order extinction coefficient, and X is the airmass. The PSM module solves for a_n, b_n, and k for each photometric night. Using these values of a, b, and k, one can also estimate the expected zero point for every exposure by evaluating Equation (2) at the fiducial color:

ZP_n(X) = −(a_n + k X).     (3)

We applied the PSM to about 30 BCS nights that were classified as photometric. We also checked for trends in the color terms as a function of CCD number. Only the i-band color term shows some variation, and this approximate constancy of the color terms greatly simplifies the co-addition of the data, because we do not have to track which CCDs have contributed to each pixel on the sky. The color terms we have used for photometric calibration are −0.1221, −0.0123, −0.1907, and 0.0226 in griz, respectively. We also examined the band-dependent extinction coefficient k calculated using data from the photometric nights. For the ensemble of about 30 photometric solutions in each band, we find the median griz extinction coefficients at CTIO over the life of the survey to be 0.181, 0.104, 0.087, and 0.067 mag airmass−1, respectively.
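Given a nightly solution (a_n, b_n, k), the calibrated magnitude of a star follows by inverting Equation (2) as reconstructed above; this sketch assumes that sign convention and is not the PSM code itself:

```python
def calibrated_mag(m_inst, color, a_n, b_n, k, airmass, color0):
    """Invert m_inst - m_std = a_n + b_n*(color - color0) + k*X for m_std.

    color0 is the fiducial color (g-r = 0.53 or r-i = 0.09, as in the text);
    the sign convention follows the reconstructed Equation (2).
    """
    return m_inst - a_n - b_n * (color - color0) - k * airmass
```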

This completes the description of all the steps of the nightly processing or single-epoch processing that we do for BCS.

3.2. Co-addition

Once we have data processed for all of the BCS nights, we then combine data within common locations on the sky to build deeper images that we call co-adds. This process is called co-addition and is complicated because it involves combining data taken at widely separated times and under very different observing conditions. Co-addition processing is done on a tile-by-tile basis. We describe our approach below.

3.2.1. Relative Photometric Calibration

During single-epoch processing we extract instrumental magnitudes. To produce science ready catalogs, we must calculate the zero point for every image and re-calibrate the magnitudes. The photometric calibration is done in two steps. The first step is a relative zero-point calibration that uses the same object in overlapping exposures, and the second is an absolute calibration using the stellar locus.

The relative calibration is done tile by tile rather than simultaneously across the full survey. We use two different pieces of information to calculate the relative zero points. The primary constraint comes from the average magnitude differences of pairs of red images with overlapping stars. The stars are selected based on the SExtractor flags and spread_model (discussed later in Section 3.2.4) values. In cases where there are not enough overlapping stars, we use the average CCD-to-CCD zero-point differences derived from photometric nights. In previous versions of the reduction, we also used direct zero points derived from photometric nights (see Section 3.1.6) and relative sky brightnesses on pairs of CCDs. As previously mentioned, the direct photometric zero-point information is contaminated at some level. The sky brightness constraints also proved problematic for BCS: only the g- and r-band data were taken on dark nights with no moon present, while the i- and z-band data were taken with the moon up, which can introduce a gradient across the camera. To avoid a degradation of the calibration, we used neither the sky brightness constraints nor the direct photometric zero points.

We determine the zero points for all images in a tile by doing a least-squares solution using the inputs described above. For this least-squares solution there are N input images, each with an unknown zero point in the vector z. We arbitrarily fix the zero point of one image and calibrate the remaining images relative to it. We have M different constraints in the constraint vector c. The matrix A is M × N and encodes which images are involved in each constraint. The resulting system of equations is Az = c, and we use singular value decomposition to solve for the vector z. This gives the relative zero points needed to co-add the data for a particular tile.
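A minimal version of this solve, using numpy's SVD-based least-squares routine (the constraint bookkeeping and function name are illustrative; constraints from the photometric CCD-to-CCD offsets would be added as extra rows in the same way):

```python
import numpy as np

def relative_zero_points(pairs, dzp, n_images, fixed=0):
    """Solve A z = c for per-image zero points from pairwise constraints.

    pairs    : list of (i, j) image index pairs with overlapping stars
    dzp      : zero-point difference z_i - z_j implied by the mean magnitude
               offsets of the matched stars in each pair
    n_images : total number of images N; image `fixed` is pinned to zero
    """
    rows, c = [], []
    for (i, j), d in zip(pairs, dzp):
        row = np.zeros(n_images)
        row[i], row[j] = 1.0, -1.0        # constraint: z_i - z_j = d
        rows.append(row)
        c.append(d)
    # Pin one arbitrary zero point to break the overall degeneracy.
    row = np.zeros(n_images)
    row[fixed] = 1.0
    rows.append(row)
    c.append(0.0)
    A = np.array(rows)
    z, *_ = np.linalg.lstsq(A, np.array(c), rcond=None)   # SVD-based solve
    return z
```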

3.2.2. PSF Homogenization

Combining images with variable seeing generically leads to a PSF that varies discontinuously over the co-added image. This affects star–galaxy separation and contributes to variation across the image in the completeness at a given photometric depth. The PSF accuracy could be quite poor in regions where there are abrupt changes to the PSF which would translate into biases in the photometry that would be difficult to track. The main steps involved in the process of PSF homogenization include: (1) modeling the PSF using PSFEx for all remap images contributing to a co-add tile, (2) choosing the parameters of the target PSF, (3) using PSFEx to generate the homogenization kernel, and (4) carrying out the convolution to homogenize all the remap images to a common PSF.

To reduce PSF variation, we process our images to bring them to a common PSF within an image and from image to image within a co-add tile. To do this we apply position-dependent convolution kernels that are determined using power spectrum weighting functions that adjust the relative contributions of large-scale and small-scale power within an image in such a way as to bring the PSFs within and among the image samples into agreement. The position-dependent homogenization kernel is expanded on a polynomial basis in the image coordinates:

k(r; x, y) = Σ_l Y_l(x, y) k_l(r),     (4)

where Y_l(x, y) are the elements of a polynomial basis in x and y and k_l are the fitted kernel components. The target PSF is defined to be a circular Moffat function with its FWHM set to the median FWHM of the input images. We imposed a cut on the input image PSF of FWHM < 1.6 arcsec, which selects only images with relatively good seeing. Images from each band are homogenized separately. The FWHM of the target PSF for all BCS tiles is shown in the bottom panel of Figure 8.
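To illustrate the goal of homogenization (not the position-dependent, power-spectrum-weighted solution that PSFEx actually computes), a single kernel matching one PSF to the target can be sketched as a regularized Fourier ratio:

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

def homogenization_kernel(psf, target_psf, eps=1e-3):
    """Simplified (Wiener-regularized) kernel k such that psf * k ~ target_psf.

    An illustration of the intent of homogenization only; PSFEx solves for a
    position-dependent kernel with power-spectrum weighting instead.
    """
    P = fft2(psf)
    T = fft2(target_psf)
    K = T * np.conj(P) / (np.abs(P)**2 + eps)   # regularized deconvolution ratio
    return np.real(fftshift(ifft2(K)))
```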


Figure 8. FWHM of single-epoch images measured with PSFEx (top panel), along with the target PSF FWHM used for homogenizing the co-add images for the full BCS survey (bottom panel). The peak values of the target PSFs are about 1″ for the g and r bands, 0.9 arcsec for the i band, and 0.8 arcsec for the z band.


Another price of homogenization is that noise is correlated on the scale of the PSF. While the noise is already correlated to some degree through the remapping interpolation kernel, PSF homogenization characteristically affects larger angular scales than does the remapping kernel. This leads to biases in photometric and morphological uncertainties, and can also affect the initial object detection process in SExtractor. To address this within DES, we account for the noise correlations on two critical scales by producing two different weight maps. The first weight map is used to track the pixel-scale noise, and the second weight map is used to correct for the correlated noise on the scale of the PSF. The pixel-scale weight map is used by SExtractor in determining photometric and morphological uncertainties. The PSF-scale weight map is used by SExtractor in the detection process. Extensive tests within DESDM have shown this approach to be adequate to produce unbiased photometric and morphological uncertainties and to enable unbiased detection of objects within co-adds built from homogenized images. These results will be presented in detail elsewhere. For the BCS processing, we used only a single pixel-scale weight map, tuned to return the correct measurement uncertainties within SExtractor.

3.2.3. Stacking Single-epoch Images

We use SWarp to combine the PSF-homogenized images to build the co-add tile. Inputs include the relative flux scales derived from the calibration described in Section 3.2.1. We combine the homogenized remap images using the associated weight maps and BPM for each image. The values of the flux-scaled, resampled pixels from each image are then median combined to create the output image. This makes us more robust to transient features such as cosmic rays, particularly in the i and z bands where there are three overlapping images. Also, objects with saturated pixels in all single-epoch images will contain pixels that are marked as saturated in the co-add images as well. This ensures accurate flagging of objects with untrustworthy photometry during the co-add cataloging stage. The resulting co-add image is 8192 × 8192 pixels, or approximately 0.6 × 0.6 deg.
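A stripped-down version of the combine step might look as follows (a sketch only: SWarp also performs the resampling and weight-plane propagation, which are omitted here, and the function name is illustrative):

```python
import numpy as np

def coadd_median(remaps, weights, fluxscales):
    """Median-combine flux-scaled, PSF-homogenized remap images.

    remaps, weights : stacks of shape (n, ny, nx); zero weight marks bad pixels
    fluxscales      : per-image relative flux scales from the zero-point solution
    """
    scaled = np.array([im * s for im, s in zip(remaps, fluxscales)])
    scaled[np.array(weights) == 0] = np.nan        # ignore masked pixels
    coadd = np.nanmedian(scaled, axis=0)
    n_good = np.sum(np.array(weights) > 0, axis=0)  # exposures contributing per pixel
    return coadd, n_good
```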

Figure 9 shows a map of the FWHM as a function of position over one homogenized co-add image. Variations are at the level of ∼1% over the co-add, as compared to the ∼10% variations that are typical for Mosaic2 across a single CCD (see Figure 7). The constancy of the PSF as a function of position ensures that it can be modeled accurately and that the PSF-corrected model-fitting photometry is unbiased. The PSF homogenization process also circularizes the PSF. Figure 10 shows the distribution of ellipticities for the Mosaic2 single-epoch images and the co-added images (color coded by band). The single-epoch ellipticity varies up to 0.1 with a modal value around 0.02. By contrast, the ellipticity distribution of the BCS co-adds is peaked at a fraction of a percent with a median value of 0.001.


Figure 9. Variation of the PSF model FWHM for a g-band co-add image for the co-add tile BCS0516-5441. Because of the homogenization process the variations are at the level of 1% across the 36′ image.


Figure 10. Mean ellipticity calculated by PSFEx for single-epoch images (top panel) and for PSF-homogenized co-adds (bottom panel), color coded by band. Ellipticity is defined as (a − b)/(a + b), where a and b refer to the semimajor and semiminor axes, respectively. For single-epoch images, the median ellipticity for the griz bands is 0.0342, 0.0326, 0.0374, and 0.04, respectively. For co-adds, the typical values are around 0.004, 0.0024, 0.0026, and 0.0033.


3.2.4. Cataloging of Co-added Images

To catalog the objects from co-added images, we run SExtractor in dual-image mode with a common detection image across all bands. For BCS, we use the i-band image as the detection image, because it has three overlapping images so the cosmic-ray removal is good, and it is by design the deepest of the bands. We then run SExtractor with model-fitting photometry using this detection image and the co-added image in each band. This ensures that a common set of objects is cataloged across all bands. In both single-epoch and co-addition cataloging, the detection criterion was that a minimum of 5 adjacent pixels had to have flux levels at least 1.5σ above the background noise. The full SExtractor detection parameters used for both co-added and single-epoch images are shown in Table 1. In all, we catalog about 800 columns across the four bands. However, for the public data release, we have released 60 columns from SExtractor per object. The full list can be found in Table 3. Most of the parameters are described in the SExtractor manual online. There are a few additional parameters which are not yet released in the public version of SExtractor. These include model magnitudes and a new star–galaxy classifier called spread_model, which is a normalized, simplified linear discriminant between the best-fitting local PSF model (ϕ) and a slightly more extended model (G) made from the same PSF convolved with a circular exponential disk model with scale length FWHM/16 (where FWHM is the full width at half-maximum of the PSF model). It is defined by the following equation:

spread_model = (G^T W x)/(ϕ^T W x) − (G^T W ϕ)/(ϕ^T W ϕ),     (5)

where x is the image vector centered on the source and W is a weight matrix that accounts for the noise in the image pixels. The distribution of spread_model for BCS catalogs is discussed in Section 3.2.7. More details of spread_model will be described elsewhere (E. Bertin et al., in preparation).
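A direct transcription of Equation (5) for a single source postage stamp is sketched below (the weight matrix defaults to the identity, and the construction of ϕ and G from the PSF model is not shown; the function name is illustrative):

```python
import numpy as np

def spread_model(x, phi, G, W=None):
    """Evaluate the spread_model discriminant of Equation (5) for one source.

    x   : image vector (flattened postage stamp centered on the source)
    phi : best-fitting local PSF model, same shape as x
    G   : PSF convolved with a circular exponential disk (scale = FWHM/16)
    W   : optional weight matrix (identity if omitted)
    """
    x, phi, G = (np.ravel(a).astype(float) for a in (x, phi, G))
    if W is None:
        W = np.eye(x.size)
    return (G @ W @ x) / (phi @ W @ x) - (G @ W @ phi) / (phi @ W @ phi)
```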

3.2.5. Absolute Photometric Calibration

Once all objects from the co-add are cataloged (in instrumental magnitudes), we proceed to obtain the absolute photometric calibration using Stellar Locus Regression (High et al. 2009). The principle behind this is that the regularity of the stellar main sequence leads to a pre-determined line in color–color space called the stellar locus. This stellar locus is observed to be invariant over the sky, at least for fields that lie outside the Galactic plane. The constancy of the stellar locus has been used as a cross-check of the photometric calibration within the SDSS survey (Ivezić et al. 2007).

Absolute photometric calibration is done after the end of co-addition. We select star-like objects using a cut on the SExtractor spread_model parameter and magnitude error. We then match the observed stars to Two Micron All Sky Survey (2MASS) stars from the NOMAD catalog, which combines the USNO-B and 2MASS data sets and provides JHK magnitudes (Skrutskie et al. 2006). Color offsets are varied until the observed locus matches the known locus. Because the 2MASS magnitudes are calibrated with a zero-point accuracy at the ∼2% level, one can bootstrap the calibration to the other bands. The known locus is derived using the high-quality "superclean" SDSS–2MASS matched catalog from Covey et al. (2007). It consists of ∼300,000 high-quality stars with data in ugrizJHK. A median locus is calculated for each possible color combination in bins of g − i.

The fit is done in two stages. First, a three-parameter fit is done to the g − r, r − i, and i − z colors. A second fit is then done using g − r and r − J, with the shift in the g − r color fixed from the first fit. The first fit provides an accurate calibration of the colors, and the second fit sets the absolute scale. The fit is done this way because only a fraction of the stars that overlap with 2MASS are saturated in all bands. We perform the stellar locus calibration separately for the model and 3 arcsec aperture magnitudes. The model magnitude calibration is then used to calibrate the other magnitudes in the catalog (except for the 3 arcsec magnitudes). The calibration of the 3 arcsec aperture magnitudes determines a PSF-dependent aperture correction for the mag_aper_3 magnitudes only. We have found these small-aperture magnitudes to provide higher signal-to-noise colors for faint galaxies than mag_model and mag_auto.
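The color fit can be illustrated with a toy version of the locus matching. The sketch below recovers per-color zero-point offsets by minimizing the mean distance of stars to a reference locus in a single color–color plane; the locus shape and the synthetic data are hypothetical, and this is not the Stellar Locus Regression code of High et al. (2009).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical reference locus and synthetic observed stellar colors; in the
# real analysis the reference is the SDSS-2MASS median locus of Covey et al.
# (2007) and the observed colors come from the stars in a BCS co-add tile.
ref_gr = np.linspace(0.2, 1.5, 100)                 # g - r along the locus
ref_ri = 0.5 * ref_gr + 0.4 * ref_gr**2             # assumed locus shape

rng = np.random.default_rng(0)
true_offsets = np.array([0.12, -0.05])              # unknown zero-point shifts
obs_gr = ref_gr + true_offsets[0] + rng.normal(0, 0.02, ref_gr.size)
obs_ri = ref_ri + true_offsets[1] + rng.normal(0, 0.02, ref_ri.size)

def mean_locus_distance(shift):
    """Mean distance of the shifted stars to their nearest locus point."""
    d_gr, d_ri = shift
    dx = (obs_gr - d_gr)[:, None] - ref_gr[None, :]
    dy = (obs_ri - d_ri)[:, None] - ref_ri[None, :]
    return np.mean(np.min(np.hypot(dx, dy), axis=1))

best = minimize(mean_locus_distance, x0=[0.0, 0.0], method="Nelder-Mead")
print("recovered color offsets:", best.x)           # close to true_offsets
```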

An example of the stellar locus fits for one co-add tile is shown in Figure 11. Red points are the observed colors of the stars, and the blue line is the median SDSS–2MASS locus. The orthogonal scatter about the stellar locus for all three color combinations for BCS tiles is shown in Figure 12. The rms orthogonal scatter about the stellar locus in (g − r, r − i), (r − i, i − z), and (g − r, r − J) is 0.059, 0.061, and 0.075, respectively. Given the scatter and the number of stars available for calibration, we can determine the zero points in our bands with sub-percent accuracy.


Figure 11. Stellar locus in three different color–color spaces for the BCS tile BCS0510-5043. The blue line shows the expected distribution derived from studies of a large ensemble of stars within the SDSS and 2MASS surveys. Red points show model magnitudes of stars from the BCS catalogs of this tile. The stellar locus distributions allow us to calibrate the absolute photometry and to assess the quality of the photometry for each tile.


Figure 12. Stellar locus scatter for three color combinations for all tiles in the BCS survey (top panel) and the same for SDSS (bottom panel). Typical BCS scatter is in the 5%–8% range, and offsets after calibration are characteristically 1% or less. Typical scatter and offsets in the SDSS data set are smaller than in the BCS survey, reflecting the tighter requirements on photometric quality in SDSS.


3.2.6. Testing Stellar Locus Calibration in SDSS

To validate our photometric calibration algorithms, we applied exactly the same procedure to the full SDSS–2MASS catalog of Covey et al. (2007). This catalog includes noisier objects than the catalog we used to derive the median stellar locus. We selected four areas (between R.A. of 120° and 350°) and divided each into 1° × 1° patches. We match the objects to obtain 2MASS magnitudes and then apply the same calibration procedure as we did for the BCS catalogs. The rms scatter distributions for all three color combinations can be found in Figure 12 (bottom panel). The corresponding scatter for SDSS in (g − r, r − i), (r − i, i − z), and (g − r, r − J) is about 0.041, 0.035, and 0.05, respectively, roughly 1.5 times smaller than for the BCS catalogs. This is clear evidence for higher scatter in our stellar photometry as compared to the SDSS photometry. Assuming this additional source of scatter adds in quadrature with the SDSS observed scatter, we estimate the extra noise in the BCS color combinations compared to SDSS to be 0.039, 0.054, and 0.048 in (g − r, r − i), (r − i, i − z), and (g − r, r − J), respectively. Because each color combination receives contributions from each of its colors, we can estimate that the noise floors are δ(g − r) ∼ 0.027, δ(r − i) ∼ 0.038, and δ(i − z) ∼ 0.038. These then imply noise floors in the stellar photometry within the griz bands of approximately 1.9%, 2.3%, 2.7%, and 2.7%, respectively. This is in good agreement with the typical repeatability scatter seen in these bands (see Figure 14) when one considers that the g and r bands each have two overlapping exposures and the i and z bands each have three.
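The quadrature bookkeeping behind these estimates can be summarized as follows (a sketch of the argument above; the exact weighting used to solve for the individual band floors from the three color planes is not spelled out in the text): $\sigma_{\rm extra}^{2} = \sigma_{\rm BCS}^{2} - \sigma_{\rm SDSS}^{2}$ for each color–color plane, $\sigma_{\rm extra}^{2}(c_{1},c_{2}) \approx \delta^{2}(c_{1}) + \delta^{2}(c_{2})$ for its two colors $c_{1}$ and $c_{2}$, and $\delta^{2}(m_{1} - m_{2}) \approx \delta^{2}(m_{1}) + \delta^{2}(m_{2})$ for the two bands entering each color.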

3.2.7. Star–Galaxy Classification

Our current catalogs contain two star–galaxy classification parameters provided by SExtractor: class_star, which has been extensively studied, and spread_model, which has been newly developed as part of the DESDM development program. To test their performance and the magnitude range over which these measures can be used reliably, we plot the behavior of the two classifiers in the i band as a function of ${\it mag}\_{\it model}$ in Figure 13. class_star lies in the range from 0 to 1. At bright magnitudes, one can see two sequences in class_star for galaxies and stars near 0 and 1, respectively. The two sequences begin merging as bright as i = 20 and are significantly merged beyond i = 22. As described in Section 3.2.4, spread_model uses the local PSF model to quantify the differences between PSF-like objects and resolved objects. In the spread_model panel, it is clear that there is a strong stellar sequence around the value 0.0 and that galaxies exhibit more positive values. The narrow stellar sequence and the broad galaxy sequence begin merging at i = 22 in the BCS, but there is significant separation between the two distributions down to i = 23. spread_model comes with a measurement uncertainty, so it is possible, for example, to define a sample of objects that lie off the stellar sequence in a statistically significant way. For the BCS data, a good cut to select stars is ${\tt spread\_model}<0.003$. Detailed studies of this new classification tool have been carried out within the DESDM project and will be presented elsewhere.
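As an illustration of how such a cut can be applied to the released catalogs, the minimal sketch below selects point-like and extended objects using the spread_model_i and spread_modelerr_i columns of Table 3; the function names, the bright-magnitude limit, and the 3σ significance criterion for the galaxy sample are illustrative choices and not part of the released pipeline.

```python
import numpy as np

def select_stars(catalog, cut=0.003, mag_limit=22.0):
    """Point-like objects: i-band spread_model below the cut, restricted to
    magnitudes where the classifier is reliable (illustrative limit)."""
    return (catalog["spread_model_i"] < cut) & (catalog["mag_model_i"] < mag_limit)

def select_galaxies(catalog, cut=0.003, nsigma=3.0):
    """Objects that lie off the stellar sequence in a statistically
    significant way, using the spread_model uncertainty."""
    return catalog["spread_model_i"] > cut + nsigma * catalog["spread_modelerr_i"]
```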


Figure 13. Plots of spread_model (top panel) and class_star (bottom panel) as a function of i-band magnitudes for the full BCS catalog. Note that both measurements exhibit separate sequences for stars and galaxies, and that as one moves to fainter magnitudes these sequences merge. This is simply due to low signal-to-noise objects not containing enough morphological information for a reliable classification. However, note also that the new spread_model retains good capability of separating galaxies from stars to fainter magnitudes than class_star.


3.2.8. Quality Control and Science Ready Catalogs

During the processing within the DESDM system, a variety of quality checks are carried out. These include, for example, thresholding checks on the fraction of flagged pixels within an image and on the χ2 and number of stars used in the astrometric fit of each exposure. In addition, the system is set up to compare the correction images (bias, flat, illum, and fringe) against stored templates that have been fully vetted. This last facility was not used during the BCS processing.

Our experience has been that problems at any level of processing are most likely to show up in the stages of relative and absolute photometric calibration. Therefore, for the BCS processing done here we capture a range of photometric quality tests, including the number of stars used in the stellar locus calibration and the rms scatter of the calibrated data about the true stellar locus (see Figure 11). In addition, we examine the photometric repeatability for common objects within the overlapping images contributing to each tile. In Figure 14, we show an example for the g band in tile BCS0549-5043. The figure shows the magnitude difference between pairs of overlapping detections of the same objects versus their average magnitude. The scatter here includes both statistical and systematic contributions, and the envelope of scatter grows toward faint magnitudes, as expected. Outlier rejection is done on the point distribution, and all 3σ outliers are filtered out and colored red. In the top panel, we plot the mean and rms as well as the outlier fraction of these repeatability distributions as a function of magnitude. The mean and rms are listed in millimagnitudes. The statistical uncertainties of the model magnitudes are used to estimate how much of the rms must be attributed to systematics. On the bright end, where the statistical noise is very small, the systematic contribution to the rms is close to the total, which is 10 mmag in this case. As one moves toward the faint end, the statistical contribution increases and the estimated systematic contribution plays only a small role in explaining the scatter. This is the expected behavior of the photometry.
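A minimal sketch of this repeatability test, assuming the matched magnitudes of the same objects measured in two overlapping single-epoch images are already in hand, is shown below; the binning, the global 3σ clipping, and the function name are illustrative simplifications of the DESDM procedure described above.

```python
import numpy as np

def repeatability(mag1, mag2, nbins=20):
    """Binned mean and rms (in mmag) of the magnitude differences of matched
    pairs, plus the fraction of 3-sigma outliers."""
    dm = mag1 - mag2
    mean_mag = 0.5 * (mag1 + mag2)
    keep = np.abs(dm - np.median(dm)) < 3.0 * np.std(dm)   # simple 3-sigma rejection
    edges = np.linspace(mean_mag[keep].min(), mean_mag[keep].max(), nbins + 1)
    idx = np.digitize(mean_mag[keep], edges) - 1
    rows = []
    for b in range(nbins):
        sel = idx == b
        if sel.sum() > 2:
            rows.append((0.5 * (edges[b] + edges[b + 1]),
                         1e3 * dm[keep][sel].mean(),   # mean offset, mmag
                         1e3 * dm[keep][sel].std()))   # rms scatter, mmag
    return np.array(rows), 1.0 - keep.mean()
```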


Figure 14. Repeatability plots for single-epoch images for BCS tile BCS0549-5043 in the g band. The repeatability is used to test the quality of the photometry in each band and tile. The top panel shows the mean magnitude difference between different single-epoch images which cover the same region of sky, binned as a function of magnitude along with statistical and systematic errors. The bottom panel shows an unbinned representation of the same. Characteristic scatter on the bright end (i.e., the systematic floor) is 2%–3% for g and r and 3%–4% for i and z.


Repeatability plots indicate systematic contributions to the photometric errors at the 10–20 mmag levels for typical g- and r-band tiles. For i- and z-band tiles, the systematic noise is closer to 30–40 mmag. For all the BCS tiles, we have examined these repeatability and stellar locus plots to probe whether the scatter is in acceptable ranges. In cases where tiles did not meet these quality control tests, we worked on the relative and absolute photometric calibration to improve the data. In addition to these photometry tests, we examined the sky distribution of cataloged objects within each tile. In cases where large numbers of faint “junk” objects were found, we attempted to remove them by adjusting the cataloging. At present all our tiles meet these quality tests except for a handful of tiles that are marked in red in Figure 1. This includes four tiles in the 5 hr field and six tiles in the 23 hr field, corresponding to ∼4% of the 80 deg2 region. Ideally, we would reimage these regions to obtain better data.

For every BCS night, the detrending pipeline creates three main types of science image files, which we denote raw, red, and remap. The co-add pipeline produces four co-add images per tile, one for each of the four bands. Once we have calibrated co-add catalogs for all the processed tiles, we run a post-processing program to remove duplicate objects near the edges of the tiles. This is necessary because there is a 2 arcmin overlap between neighboring tiles. The program identifies pairs of sources from neighboring tiles that lie within a 0.9 arcsec radius of each other, and for each pair it keeps the object that lies farther from the edge of its tile. In this way, a single science ready catalog is prepared for each field. The 23 hr field catalog contains 1,877,088 objects, and the 5 hr field catalog contains 2,952,282 objects with i model magnitude <23.5. In the next section, we review additional tests of the data quality.
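A minimal sketch of this de-duplication for a pair of neighboring tiles is shown below; it assumes per-object distances to the tile edge are available (they are not among the released columns), and the astropy-based matching is an illustrative stand-in for the actual post-processing program.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def dedupe_tile_pair(ra1, dec1, edge1, ra2, dec2, edge2, radius_arcsec=0.9):
    """Match objects from two neighboring tiles within 0.9 arcsec and, for
    each matched pair, keep the copy that lies farther from its tile edge.
    Returns boolean keep-masks for the two tiles."""
    c1 = SkyCoord(ra1 * u.deg, dec1 * u.deg)
    c2 = SkyCoord(ra2 * u.deg, dec2 * u.deg)
    idx, sep, _ = c1.match_to_catalog_sky(c2)
    matched = sep < radius_arcsec * u.arcsec
    drop1 = matched & (edge1 <= edge2[idx])          # tile-1 copy is closer to its edge
    drop2 = np.zeros(len(ra2), dtype=bool)
    drop2[idx[matched & ~drop1]] = True              # otherwise drop the tile-2 copy
    return ~drop1, ~drop2
```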

3.3. Survey Depth

We estimate the 10σ photometric depths for galaxies using the SExtractor ${\it mag}\_{\it auto}$ errors. The depth is obtained by fitting a linear relation between the magnitude and the log of the inverse magnitude error and reading off the magnitude at which the error corresponds to a 10σ measurement. As a cross-check, we also estimated the depths using information in the weight maps, and the results were comparable.
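A minimal sketch of this depth estimate is given below; the conversion between signal-to-noise and magnitude error, σ_m ≈ 2.5/(S/N ln 10), i.e., ≈0.11 mag at 10σ, is an assumption of the sketch rather than a relation quoted above.

```python
import numpy as np

def limiting_magnitude(mag, magerr, nsigma=10.0):
    """Fit mag vs. log10(1/magerr) and return the magnitude at which the
    error corresponds to an nsigma flux measurement."""
    good = np.isfinite(mag) & np.isfinite(magerr) & (magerr > 0)
    slope, intercept = np.polyfit(np.log10(1.0 / magerr[good]), mag[good], 1)
    err_at_nsigma = 2.5 / (np.log(10.0) * nsigma)    # ~0.109 mag at 10 sigma
    return slope * np.log10(1.0 / err_at_nsigma) + intercept
```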

The distributions of depth for each band over the full survey are shown in Figure 15. The median magnitude depths for the griz bands are 23.3, 23.4, 23.0, and 21.3, respectively. These numbers are shallower than the depths we estimated using the NOAO exposure time calculator during survey planning, which were 24.0, 23.9, 23.6, and 22.3. Our originally proposed depths assume a 2.2 arcsec diameter aperture, whereas galaxies near the 10σ detection threshold are typically larger in our images. We examine the depths of 2″ aperture photometry and find median depths of 24.1, 24.1, 23.5, and 22.2 in griz, respectively. These are within 0.2 mag of our naive estimates, explaining the bulk of the difference. In addition, we know that conditions during our survey were often not photometric, and this could introduce another 0.1–0.2 mag offset. Another contribution to the difference is that the calibrated observed magnitudes include a correction for galactic extinction and reddening, whereas the estimated depths did not include extinction corrections.


Figure 15. Histogram of 10σ magnitude limits for all BCS tiles using ${\it mag}\_{\it auto}$ errors in all four bands. The median depth values for all BCS tiles are 23.3, 23.4, 23.0, and 21.3 in griz, respectively. The corresponding 10σ point-source depths are 23.9, 24.0, 23.6, and 22.1.


Corresponding 10σ point-source depths are extracted using model-fitting ${\it mag}\_{\it psf}$ uncertainties. The results in bands griz are 23.9, 24.0, 23.6, and 22.1, respectively. These are in better agreement with the small aperture photometry depths we used to estimate the exposure times for the survey.

Another way of probing the depth of the survey is to examine the number counts of sources as a function of magnitude. Figure 16 shows the log N–log S relation from the combined 5 hr and 23 hr fields using ${\it mag}\_{\it auto}$. No star–galaxy separation is carried out, because near the detection limit there is not enough morphological information to classify objects reliably. The turnover magnitudes in the counts are 24.15, 23.55, 23.25, and 22.35 in griz, respectively, and mark the onset of significant incompleteness in the catalogs. Estimates of the 50% and 90% completeness limits for a subset of the tiles appear in Zenteno et al. (2011), but we do not apply that analysis to the whole survey.
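The turnover can be located with a simple histogram of the counts; the sketch below is an illustrative implementation (the bin width is an arbitrary choice) rather than the analysis used for Figure 16.

```python
import numpy as np

def turnover_magnitude(mag_auto, bin_width=0.1):
    """Return the magnitude bin at which the number counts peak, which marks
    the onset of significant incompleteness."""
    m = mag_auto[np.isfinite(mag_auto)]
    bins = np.arange(m.min(), m.max() + bin_width, bin_width)
    counts, edges = np.histogram(m, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(counts)]
```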


Figure 16. Number counts of BCS objects for all four bands in the combined BCS fields using ${\it mag}\_{\it auto}$. The turnover magnitudes are 24.15, 23.55, 23.25, and 22.35 in griz, respectively. The corresponding median ${\it mag}\_{\it auto}$ 10σ depths in griz are 23.3, 23.4, 23.0, and 21.3, respectively.


Finally, we probe for spatial variations in photometry by examining the distribution of sources above certain flux cuts over the two survey regions. The distributions of all sources at i < 22.5 in both the 5 hr and the 23 hr fields are shown in Figure 17. Objects are excluded for tiles that did not pass our quality tests, and this produces four black squares in the 5 hr field, and six black squares in the 23 hr field. The general uniformity of this object density distribution is an indication that the absolute photometric calibration is reasonably consistent across the fields. In the 5 hr field, it is clear that one tile in the lower left does not reach the depth i = 22.5 reached by the other tiles. This defect disappears if we examine the density distribution roughly 1 mag brighter, indicating that this is a depth issue and not a photometric calibration problem. For the 23 hr field, there is a small black rectangular notch in the upper right of the field with an associated dark path. Within this tile, we have verified that too few of our i-band exposures met the seeing requirements, and that has led to an uncovered region (the notch) as well as the shadow of lower object density to the right. Again, this is a depth issue rather than a photometric issue. There is another shadowed tile visible in the lower right part of the 23 hr field, and this is also a depth issue.


Figure 17. Distribution of sources in 5 hr (top) and 23 hr fields (bottom) from the combined catalogs after a ${\it mag}\_{\it auto}$ magnitude cut (i < 22.5). The gaps show tiles which were not included in the release due to data quality problems. Some other tiles have only partial coverage or do not push to the depth of the magnitude cut with good completeness. A logarithmic scale (zscale option in ds9) is used. The uniformity of the source distribution is a demonstration of the photometric uniformity across the survey.


In Figure 18, we show similar object density plots for stars and galaxies in the 5 hr field. The stars and galaxies are separated using a spread_model cut at 0.003: all objects with values greater than this threshold are considered galaxies, and the rest are considered stars. A catalog depth cut at i < 22.5 is imposed. The stellar distribution is quite uniform across this field, indicating that spread_model performance is quite robust to variations of the PSF across a survey. Note that the shallow tile in the lower left portion of the survey exhibits edge effects, which we believe are associated with the reduced depth of this tile relative to the others. In the lower panel is the galaxy distribution. The same shallow tile shows up in the lower left portion. In addition, it is clear that the galaxy density varies as a function of position, as expected for the large-scale structure of the universe. We are quite happy with this performance. We have explored the same plots in the 23 hr field, and the results are similar. Moreover, we have explored these plots using class_star as the classifier. In that case the spatial distribution is highly inhomogeneous, indicating that class_star cannot be used to reliably separate stars and galaxies in a uniform manner across a large survey.


Figure 18. Distribution of stars (top) and galaxies (bottom) in the BCS 5 hr field with ${\it mag}\_{\it auto}$ i < 22.5, based on a spread_model cut of 0.003. The stars are uniformly distributed, and traces of large-scale structure can be seen in the galaxy density map. A logarithmic scale (zscale option in ds9) is used. We have explored similar plots with class_star, and these contain very large inhomogeneities in the stellar and galaxy distributions, indicating that spread_model offers significant advantages over class_star for object classification in large surveys.


3.4. BCS Data Release

We are publicly releasing the BCS catalogs, images, and the photo-z training fields to the astrophysical community. All public BCS data products can be downloaded from http://www.usm.uni-muenchen.de/BCS. The BCS catalogs are divided into ASCII files for the 5 hr and 23 hr fields. Separate catalogs are available for the tiles that passed our quality analysis and for the tiles that did not. Each catalog contains 63 columns, which are described in Table 3. We are also making available the co-added images for the BCS survey at the same site. These images are available in the PSF-homogenized form (used for the cataloging) and in the non-homogenized form. As in the case of the catalogs, we split the tar files by field and by whether or not the tiles passed our quality tests. These tarballs contain FITS tile-compressed images, which reduces the volume by a factor of ∼5 relative to the uncompressed co-adds.

4. PHOTOMETRIC REDSHIFTS

Initial tests of data quality are undertaken by obtaining photometric redshifts for BCS objects using an artificial neural network. Neural networks have been used to determine accurate photometric redshifts in past optical surveys (Collister et al. 2007; Oyaizu et al. 2008b). We use annz, a feed-forward multi-layer perceptron network designed for estimating photometric redshifts (Collister & Lahav 2004). The network is composed of a series of inputs, several layers of nodes, and one or more outputs. Each node applies a function to a weighted combination of the outputs of the previous layer's nodes. The weights are tuned by training the network on a representative data set with known outputs. The optimal set of weights is the one that minimizes a cost function, which reflects the difference between the known output values and the network's predictions.

The training process can result in a set of weights that are overfit to a particular training set. Furthermore, a given training run can converge to a local minimum of the cost function instead of the global minimum. In annz, the first issue is overcome by selecting the set of weights that minimizes the cost function on a separate validation set rather than on the training set itself. The second is mitigated by training a committee of several networks with randomized initial weights; the outputs of the committee members are averaged to produce the final estimate.

We train our neural network on 5820 objects with known redshifts. It is run with eight input parameters: four magnitudes (griz); three colors (g − r, r − i, and i − z); and a concentration index. ${\it Mag}\_{\it auto}$ magnitudes are used for the individual filters, ${\it mag}\_{\it aper}\_3$ magnitudes are used to determine the colors, and the i-band spread_model is used for the concentration index. Following the guidelines of Firth et al. (2003) and Collister & Lahav (2004), we use a minimally sufficient network architecture and committee size in the hope of achieving the highest quality results. We find this to be a committee of eight neural networks that each have an architecture of 8:16:16:1 (eight inputs, two hidden layers of 16 nodes each, and one output). We denote photometric and spectroscopic redshifts as zphot and zspec, respectively, and define Δz = zphot − zspec.
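For a concrete picture of such a committee, the sketch below assembles an equivalent set of 8:16:16:1 networks with scikit-learn rather than annz itself; the choice of library and activation, the built-in validation split, and the simple averaging of the committee outputs are assumptions of the sketch and not a description of the annz internals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_committee(X, z_spec, n_members=8, seed=0):
    """X has eight columns: griz mag_auto, the three mag_aper_3 colors
    (g-r, r-i, i-z), and the i-band spread_model concentration index."""
    committee = []
    for k in range(n_members):
        net = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                           max_iter=5000, early_stopping=True,  # internal validation split
                           random_state=seed + k)               # randomized initial weights
        committee.append(net.fit(X, z_spec))
    return committee

def predict_zphot(committee, X):
    preds = np.array([net.predict(X) for net in committee])
    return preds.mean(axis=0), preds.std(axis=0)   # committee mean and spread
```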

4.1. Photometry Cross-checks with SDSS

We compare our photometry with SDSS data using the spectroscopic calibration tiles that overlap SDSS and contain significant numbers of spectroscopic redshifts. As explained in Section 4.2, these spectroscopic redshifts are used to train our neural networks to obtain photometric redshifts. For the comparison, we applied color and extinction corrections to the SDSS catalogs in these tiles. The fields we consider for this purpose are the CNOC2 and DEEP2 fields centered at R.A., decl. values of (2 hr 25 minutes, +0°07′), (2 hr 29 minutes, +0°35′), (23 hr 27 minutes, +0°08′), and (23 hr 29 minutes, +0°12′).

We then do an object-by-object comparison of the colors and magnitudes of all stars from SDSS and from the BCS catalogs in these tiles. The SDSS magnitudes for objects that overlap BCS tiles extend to 23.4 in g and 21.6 in r, i, and z. We consider an object to be matched if its position agrees to within 2″. Since the number of objects in each tile that overlap with SDSS is small, we combine the results from all tiles into one plot for each magnitude or color as necessary. The magnitude comparison for all four bands (using ${\it mag}\_{\it model}$) is shown in Figure 19. The peak offset between BCS model magnitudes and SDSS magnitudes is approximately −0.06 in the g, r, and i bands and about 0.02 in the z band, while the median offset is −0.0562 in g, r, and i and 0.0087 in z.


Figure 19. Comparison between calibrated model magnitudes for stars from four BCS standard tiles (after stacking them together) and SDSS magnitudes after color and extinction corrections for the g band. The stars are chosen by requiring that ${\tt class\_star} > 0.8$ in all four bands and also SExtractor flag <5. The histograms are normalized to unity. The peak offset between BCS model magnitudes and SDSS is −0.06 in g, r, i and 0.02 in z bands.


We also do a color comparison with the same cuts for these tiles between BCS and SDSS colors using ${\it mag}\_{\it aper3}$ (the magnitude within a 3 arcsec aperture), because colors are determined from this magnitude in the photo-z estimation (Figure 20). The peak offsets in the g − r, r − i, and i − z colors are about −0.01, −0.03, and −0.02 mag, respectively. The median offset is about −0.01 for g − r and i − z and about −0.05 for r − i. The rms scatter about the median is 0.052, 0.061, and 0.081 for g − r, r − i, and i − z, respectively.


Figure 20. Difference in (g − r), (r − i), and (i − z) colors between stars from BCS tiles and SDSS using ${\it mag}\_{\it aper}3$. All cuts are the same as in Figure 19 and the histograms are normalized to unity.


4.2. Photometric Redshift Calibration

We obtain our training data set by dedicating nine of the survey pointings to fields overlapping spectroscopic surveys: CDFS, CFRS, two CNOC2 fields, SSA 22, three DEEP2 fields, and VVDS. Objects from these fields share the photometric depth and reduction pipeline of the BCS data and have known spectroscopic redshifts. Although this training set is not representative of the survey in sky position, Abdalla et al. (2011) show that limiting a neural network training set to small patches of sky does not result in biased redshifts for large surveys. The key issue is uniformity of photometry between the training and application fields.

Only objects that have reliable redshifts and photometric parameters are used to train the neural network. Objects with an i-band magnitude >22.5 or an i-band error >0.1 are removed from the training set. Objects that are unresolved in one or more bands or that have a SExtractor flag greater than 2 are removed as well. Similar cuts are made based on spectroscopic redshift errors; however, the nature of the cut varies by catalog. The DEEP2, CNOC2, and CFRS catalogs provide redshift errors for each measurement, and objects from these fields are removed if their spectroscopic redshift errors are greater than 0.01. The ACES catalog (providing coverage of the CDFS field) and the VVDS catalog assign a confidence parameter to each object; in this case, we only include objects with a confidence of 3 or 4 (see the respective surveys for definitions). Both primary and secondary targets from the VVDS survey are included. We experimented with additional cuts, but they produced more outliers, a larger scatter, or reduced the size of the training set too much.

The final training set contains 5820 objects. Table 2 breaks down the number of training objects that pass the filter criteria from each pointing. Figure 21 further breaks down these objects by redshift bin. The pointings combine to provide a consistent distribution of redshifts from 0 < z ⩽ 1.1.


Figure 21. Redshift distribution of 5820 objects from the calibration fields used to train annz. The redshift distribution is color coded by source.


Table 2. Photo-z Training Fields

Survey R.A. Decl. Redshifts
ACESa 03:32 −27:48 2846
CFRSb 22:17 00:91 65
CNOC2c 02:25 00:07 318
CNOC2c 02:26 00:43 164
SSA 22d 01:40 00:01 818
DEEP2e 02:29 00:35 226
DEEP2e 23:27 00:08 414
DEEP2e 23:29 00:12 600
VVDSf 14:00 05:00 329

Notes. aCooper et al. (2011). bLilly et al. (1995a, 1995b). cYee et al. (2000). dCowie et al. (1994). eDavis et al. (2007); Newman et al. (2012). fLe Fèvre et al. (2004, 2005).


We have released the matched catalogs of spectroscopic redshifts along with information from BCS catalogs for these fields. This would enable others to develop their own photometric redshift estimates using these data.

We evaluate the performance of annz on our data by randomly selecting half of the objects from the training set to train annz, while the other half remains for testing. One-sixth of the objects from the training half are removed to form the validation set (see above). The result provides 2910 objects with both photometric and spectroscopic redshifts.
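A minimal sketch of this split (half for training, half for testing, with one-sixth of the training half set aside for validation) is shown below; the random-number handling is an illustrative choice.

```python
import numpy as np

def split_training_set(n_objects, seed=0):
    """Return index arrays for the training, validation, and test subsets."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_objects)
    half = n_objects // 2
    test = order[:half]
    train_all = order[half:]
    n_valid = len(train_all) // 6
    return train_all[n_valid:], train_all[:n_valid], test
```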

We measure the photometric redshift performance using three metrics. The first, following Ilbert et al. (2006), is the normalized median absolute deviation, σΔz/(1 + z) = 1.48 × median(|Δz|/(1 + zspec)).

This metric is better suited for our data than the standard deviation as it is less affected by catastrophic outliers. The second is the fraction of catastrophic outliers η defined as the percentage of objects that satisfy

Equation (6)

The third metric is the net bias in redshift, averaged over all N objects and defined as zbias = (1/N)ΣΔz.

Our training set yields σΔz/(1 + z) = 0.061 with η = 7.49%. Over the entire range of redshifts there is little net bias: zbias = 0.0005. These statistics, particularly the fraction of catastrophic outliers, can be improved by culling objects based on their photometric redshift error. annz provides redshift errors that are derived from the errors of the input parameters; however, there are several other methods of determining photometric redshift errors. Oyaizu et al. (2008a) evaluate how well various methods improve zphot statistics. They show that culling objects based on redshift errors derived from magnitude errors is competitive with other methods at reducing the redshift scatter and the catastrophic outlier fraction.
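The sketch below implements the three metrics; the 1.48 normalization follows the Ilbert et al. (2006) convention, and the 0.15(1 + zspec) outlier threshold is a common choice assumed here because Equation (6) is not reproduced in this text.

```python
import numpy as np

def photoz_metrics(z_phot, z_spec, outlier_threshold=0.15):
    """Return (sigma_nmad, eta in per cent, z_bias) for a test sample."""
    dz = z_phot - z_spec
    x = dz / (1.0 + z_spec)
    sigma_nmad = 1.48 * np.median(np.abs(x))              # normalized median absolute deviation
    eta = 100.0 * np.mean(np.abs(x) > outlier_threshold)  # catastrophic outlier fraction
    z_bias = np.mean(dz)                                  # net bias
    return sigma_nmad, eta, z_bias
```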

We reanalyze our photometric redshift performance after removing objects with a zphot error ⩾0.13, based on the errors provided by annz. The performance of the culled data improves to σΔz/(1 + z) = 0.054 and η = 4.93%, while zbias increases slightly to 0.0022, which is still negligible. Figure 22 demonstrates the performance of annz in determining redshifts. For objects within the range 0.3 ≲ zspec ≲ 0.9, our photometric and spectroscopic redshifts agree with little bias. For objects with redshifts below 0.3 there is a positive bias, and for objects with redshifts beyond zspec ∼ 0.9 there is a negative bias.


Figure 22. Top panel: two-dimensional histogram of zphot vs. zspec for training set objects that have zphot error <0.13. Bin sizes are 0.015z × 0.015z. Red bins count catastrophic outliers as defined by Equation (6). Blue bins count all other objects. Five objects with zspec or zphot >1.5 are not displayed. Bottom panel: the same training set data are shown with each point representing a bin of 50 objects.


4.3. Application to the Full BCS Catalog

The 5820 objects from the training set are used to train a committee of eight annz networks, each with an architecture 8:16:16:1. This committee is used to determine redshifts and errors for every object in the BCS catalog. These are included in Columns 61 and 62 of the data release. Because we found a negligible net bias when testing our calibration set, we do not perform a bias correction to redshifts of the BCS catalog.

Many of the objects in the BCS catalog lie outside of the parameter space of the data used to train annz. While Collister & Lahav (2004) have demonstrated success using annz to determine redshifts of galaxies outside the parameter space used to train the network, this was done using a set of galaxies with a very uniform distribution of spectral types. For the generic distribution of galaxies in the BCS catalog, neural networks are unreliable in predicting redshifts outside the trained parameter space. Therefore, we indicate whether an object lies inside or outside of the parameter space of the training set with a flag (Column 63). A value of 1 means the object is within the parameter space of the training set and the redshift is reliable. A value of 0 means the object lies outside the parameter space and the redshift is unreliable. The flag is based only on the magnitude and magnitude error cuts that were made on the training set (i.e., i < 22.5, i-error <0.1, resolved in all bands). It is not based on the SExtractor flag, the star–galaxy separation criteria, or the photometric redshift errors.
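A minimal sketch of how this flag can be reproduced from the released columns is given below; the use of mag_auto for the magnitude cuts and the treatment of "resolved in all bands" as requiring a finite measured magnitude in each band are assumptions of the sketch.

```python
import numpy as np

def training_space_flag(cat):
    """Return 1 where an object satisfies the training-set cuts
    (i < 22.5, i-band error < 0.1, measured in all four bands), else 0."""
    measured = np.ones(len(cat), dtype=bool)
    for band in "griz":
        m = np.asarray(cat[f"mag_auto_{band}"], dtype=float)
        measured &= np.isfinite(m) & (m < 99.0)   # 99 is a typical non-detection sentinel
    inside = (measured
              & (np.asarray(cat["mag_auto_i"], dtype=float) < 22.5)
              & (np.asarray(cat["magerr_auto_i"], dtype=float) < 0.1))
    return inside.astype(int)
```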

We were able to obtain photo-z's for about 1,955,400 objects from the BCS catalog with i < 22. From these, there are ∼204,600 objects in the catalog that pass the star–galaxy separation criteria in all bands and lie within the training set parameter space. The redshift distribution of these BCS objects in different magnitude ranges is shown in Figure 23. The peak redshift is around zphot = 0.4 for 20 < i < 22. Out of these, there are about 200 objects with zphot > 1.0.


Figure 23. Distribution of photometric redshifts, in different i-band model magnitude ranges, for galaxies that lie within the annz training set parameter space, have zphot error <0.05(1 + zphot), and pass the star–galaxy separation test.


Many of the objects that do not pass star–galaxy separation are stars. Since annz was trained with only galaxies, stellar objects lie outside the parameter space and are therefore not necessarily assigned the nominally correct redshift of zphot = 0. In fact, only a handful of objects in the entire catalog are assigned a zphot close to zero. We investigated the performance of the photo-z's when training a network with both stars and galaxies. Using the same inputs and network architecture as above but including ∼1000 stars in the training set, annz was successful in assigning stars a redshift below 0.1 only 70% of the time. However, the redshift assignment of galaxies was not adversely affected: only 4 out of approximately 3400 galaxies were assigned a redshift less than 0.015. The fraction of catastrophic outliers as well as σΔz/(1 + z) were not significantly affected either, so long as stars are not included in the statistics. While the results of training annz with stars are not sufficient for use on the entire BCS catalog, these preliminary results show some promise. Furthermore, Collister et al. (2007) have shown better results when training an annz network specifically for star–galaxy separation.

5. CONCLUSIONS AND DISCUSSION

In this paper, we present an overview of the BCS, an ∼80 deg2 optical photometric survey in griz bands carried out with the Mosaic2 imager on the Blanco 4 m telescope between 2005 and 2008. We discuss the observing strategy within the context of our scientific goals, and we present basic observing characteristics at CTIO such as the sky brightness and the delivered image quality.

We provide a detailed description of the data processing, calibration, and quality control, which we have carried out using a development version of the DESDM system. The processing steps in going from raw exposures to science ready catalogs include image detrending and astrometric calibration; this processing is run independently on every night of observations. This is followed by image co-addition, which combines data from the same region of the sky into deeper co-add images.

The processing of real data from the Blanco telescope provides a real-world stress test of the DESDM system. Many novel algorithmic features, which will be used to process upcoming DES data, were tested on BCS data. These include PSF homogenization, cataloging using PSF-corrected model-fitting photometry, object classification using the new spread_model, absolute photometric calibration using the stellar locus, and a variety of quality control tests.

We present the characteristics of the data set, including the median-estimated 10σ galaxy photometry depth in the co-adds for bands griz, which are 23.3, 23.4, 23.0, and 21.3, respectively. The corresponding point-source 10σ depths in griz are 23.9, 24.0, 23.6, and 22.1, respectively. We measure the systematic noise floor in our photometry using photometric repeatability in single-epoch images and comparisons of the stellar locus scatter from BCS and SDSS. Both results indicate a noise floor at the ∼1.9% level in g, ∼2.2% in r, and ∼2.7% in i and z bands. This noise floor does not impact the core galaxy cluster science for which the BCS was designed. We expect that with an improved characterization of the illumination correction using the star flat technique demonstrated in the Canada–France–Hawaii Telescope Legacy Survey (Regnault et al. 2009) it would be possible to reduce this noise floor further, but given that the current floor is adequate for our science needs we have not included these corrections in our BCS processing.

Our absolute photometric calibration is obtained using the stellar locus together with the 2MASS J-band photometry. We can calibrate our zero points to the stellar locus at better than ∼1% (statistical), and so our overall photometric uniformity is driven by the ∼2% accuracy of the 2MASS survey (e.g., Skrutskie et al. 2006). We demonstrate that our photometric zero-point calibration is quite uniform by examining star and galaxy counts across the survey. We also demonstrate that with spread_model it is possible to carry out uniform star–galaxy separation even across a large extragalactic survey.

As an additional data quality test, we present photometric redshifts derived from a neural network trained on a sample of objects with spectroscopic redshifts that we targeted during the BCS survey. The performance of our four-band griz photometric redshifts is evaluated using a calibration set of over 5000 galaxies with measured spectroscopic redshifts. We find good performance with a characteristic scatter of σΔz/(1 + z) = 0.054 and an outlier fraction of η = 4.93%. Finally, we provide a summary of the output data products from our co-added images and catalogs along with information on how to download them.

Finally, the BCS data have been used for a range of scientific pursuits, which we briefly summarize and reference here to allow the reader to seek additional information as needed. Within the SPT survey, the first four SZE-selected clusters were optically confirmed with redshift estimates using BCS data (Staniszewski et al. 2009), and detailed studies of the galaxy populations of these clusters were reported in Zenteno et al. (2011). The total number of SPT cluster candidates with signal-to-noise ratio >4.5 in the BCS footprint is 15 (Reichardt et al. 2012); among these, 10 have been confirmed with the BCS data, and the remaining 5 have redshift lower limits between 1 and 1.5 (Song et al. 2012). These clusters and their BCS-derived redshifts have figured prominently in SPT publications to date (Staniszewski et al. 2009; Vanderlinde et al. 2010; High et al. 2010; Andersson et al. 2011; Williamson et al. 2011; Reichardt et al. 2012; Song et al. 2012). The BCS data enabled the serendipitous discovery of a strong-lensing arc of a galaxy at z = 0.9057 lensed by a massive galaxy cluster at a redshift of z = 0.3838 (Buckley-Geer et al. 2011). Additional automated searches for strong-lensing arcs have also been carried out, and further analysis of the BCS data for weak lensing is in progress.

A sample of about 105 galaxy clusters was found using the first three seasons of BCS data using an independent processing (Menanteau et al. 2009, 2010b), and the BCS data were also used for optical confirmation of ACT clusters (Menanteau et al. 2010a). Other studies include estimates of weak-lensing cluster masses (McInnes et al. 2009) and a search for QSO candidates using r-band data (Jimenez et al. 2009).

We used the BCS data to measure photometric redshifts of about 46 X-ray-selected clusters in the XMM-BCS survey (Šuhada et al. 2012). This X-ray-selected sample is currently being used in combination with SPT data to explore the low-mass cluster population and its SZE properties (J. Liu et al., in preparation). In addition, these BCS data are also being used in the analysis of the larger XMM-XXL survey in the 23 hr field (M. Pierre 2012, private communication).

The BCS data continue to provide an important data set for SPT. Recently, the data were used to trace the galaxy populations and were correlated against the SPT CMB-lensing maps (van Engelen et al. 2012), demonstrating correlations significant at the 4σ–5σ level in both BCS fields (Bleem et al. 2012). The BCS data will provide a valuable optical data set for combination with a 100 deg2 Spitzer survey over the same region (S. A. Stanford 2012, private communication), a 100 deg2 Herschel survey (J. Carlstrom 2012, private communication), and they will overlap one of the deep mm-wave fields being targeted by SPT-pol (J. Carlstrom 2012, private communication) until the DES data are available.

We acknowledge Len Cowie for providing us spectroscopic redshifts for objects from the SSA 22 field. The Munich group acknowledges the support of the Excellence Cluster Universe and from the program TR33: The Dark Universe, both of which are funded by the Deutsche Forschungs Gemeinschaft. We acknowledge support from the National Science Foundation (NSF) through grants NSF AST 05-07688, NSF AST 07-08539, NSF AST 07-15036, and NSF AST 08-13534. We acknowledge the support of the University of Illinois where this project was begun. This paper includes data gathered with the Blanco 4 m telescope, located at the Cerro Tololo Inter-American Observatory in Chile, which is part of the U.S. National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA), under contract with the NSF.

Facility: Blanco (MOSAIC) - Cerro Tololo Inter-American Observatory's 4 meter Blanco Telescope

APPENDIX: BCS CATALOG DESCRIPTION

We created ASCII catalog files by combining the catalogs of the individual tiles and removing duplicates. The description of each column in the BCS catalog is provided in Table 3.

Table 3. Details of BCS Catalogs

Column Parameter Units Definition
1 tilename Name of tile
2 objectid ID from DESDM database table coadd_objects
3 RA deg Right ascension
4 DEC deg Declination
5 mag_model_g AB mag Model magnitude (g)
6 magerr_model_g AB mag Error in model magnitude (g)
7 mag_auto_g AB mag Kron magnitude (g)
8 magerr_auto_g AB mag Error in Kron magnitude (g)
9 mag_psf_g AB mag PSF magnitude (g)
10 magerr_psf_g AB mag Error in PSF magnitude (g)
11 mag_petro_g AB mag Petrosian magnitude (g)
12 magerr_petro_g AB mag Error in Petrosian magnitude (g)
13 mag_aper3_g AB mag Magnitude in 3 arcsec aperture (g)
14 magerr_aper3_g AB mag Magnitude error in 3 arcsec aperture (g)
15 flags_g SExtractor flag (g)
16 class_star_g SExtractor star/galaxy separator
17 spread_model_g Difference in PSF and Sérsic magnitude (g)
18 spread_modelerr_g Error in spread_model (g)
19 mag_model_r AB mag Model magnitude (r)
20 magerr_model_r AB mag Error in model magnitude (r)
21 mag_auto_r AB mag Kron magnitude (r)
22 magerr_auto_r AB mag Error in Kron magnitude (r)
23 mag_psf_r AB mag PSF magnitude (r)
24 magerr_psf_r AB mag Error in PSF magnitude (r)
25 mag_petro_r AB mag Petrosian magnitude (r)
26 magerr_petro_r AB mag Error in Petrosian magnitude (r)
27 mag_aper3_r AB mag Magnitude in 3 arcsec aperture (r)
28 magerr_aper3_r AB mag Magnitude error in 3 arcsec aperture (r)
29 flags_r SExtractor flag (r)
30 class_star_r SExtractor star/galaxy separator
31 spread_model_r Difference in PSF and Sérsic magnitude (r)
32 spread_modelerr_r Error in spread_model (r)
33 mag_model_i AB mag Model magnitude (i)
34 magerr_model_i AB mag Error in model magnitude (i)
35 mag_auto_i AB mag Kron magnitude (i)
36 magerr_auto_i AB mag Error in Kron magnitude (i)
37 mag_psf_i AB mag PSF magnitude (i)
38 magerr_psf_i AB mag Error in PSF magnitude (i)
39 mag_petro_i AB mag Petrosian magnitude (i)
40 magerr_petro_i AB mag Error in Petrosian magnitude (i)
41 mag_aper3_i AB mag Magnitude in 3 arcsec aperture (i)
42 magerr_aper3_i AB mag Magnitude error in 3 arcsec aperture (i)
43 flags_i SExtractor flag (i)
44 class_star_i SExtractor star/galaxy separator
45 spread_model_i Difference in PSF and Sérsic magnitude (i)
46 spread_modelerr_i Error in spread_model (i)
47 mag_model_z AB mag Model magnitude (z)
48 magerr_model_z AB mag Error in model magnitude (z)
49 mag_auto_z AB mag Kron magnitude (z)
50 magerr_auto_z AB mag Error in Kron magnitude (z)
51 mag_psf_z AB mag PSF magnitude (z)
52 magerr_psf_z AB mag Error in PSF magnitude (z)
53 mag_petro_z AB mag Petrosian magnitude (z)
54 magerr_petro_z AB mag Error in Petrosian magnitude (z)
55 mag_aper3_z AB mag Magnitude in 3 arcsec aperture (z)
56 magerr_aper3_z AB mag Magnitude error in 3 arcsec aperture (z)
57 flags_z SExtractor flag (z)
58 class_star_z SExtractor star/galaxy separator
59 spread_model_z Difference in PSF and Sérsic magnitude (z)
60 spread_modelerr_z Error in spread_model (z)
61 z_phot Photometric redshift
62 z_phot_err Photometric redshift error
63 z_phot_flag Within annz training set parameter space

Notes. Explanation and contents of catalogs in the BCS survey release. More details on some of the parameters can be found in the SExtractor manual. The magnitudes are corrected for galactic extinction.

