2016-04-04T08:34:00
20
CALIFA (Calar Alto Legacy Integral Field
spectroscopy Area) survey DR3
Sánchez, F.; The CALIFA collaboration
3
disk-galaxies
interstellar-medium
spectroscopy
Calar Alto Observatory
PMAS/PPAK at 3.5m Telescope
2016A&A...594A..36S
Optical
Survey
The Calar Alto Legacy Integral Field Area (CALIFA) survey provides
spatially resolved spectroscopic information for 667 galaxies, mainly
within the local universe (0.005 < z < 0.03).
CALIFA data were obtained using the PPAK integral field unit (IFU) with a
hexagonal field-of-view of 1.3 square arcmin and a 100% covering factor,
achieved by adopting a three-pointing dithering scheme. Data were taken in two
setups: V500 (6 Å bin size, 646 galaxies) and V1200 (2.3 Å bin size, 484
galaxies). A final product ("COMBO") combining both data sets, covering
3700-7500 Å at 6 Å bin size, is made available for 484 galaxies.
CALIFA is a legacy survey, intended for the community. This is the (final)
Data Release 3.
CALIFA asks you to acknowledge:
"This study uses data provided by the Calar Alto Legacy Integral Field Area
(CALIFA) survey (http://califa.caha.es/)."
"Based on observations collected at the Centro Astronómico Hispano Alemán
(CAHA) at Calar Alto, operated jointly by the Max-Planck-Institut für
Astronomie and the Instituto de Astrofísica de Andalucía (CSIC)."
and to cite both :bibcode:`2014A&A...569A...1W` and
:bibcode:`2012A&A...538A...8S`.
3
it's a hint of having Halpha emission. As you see, I use two continuum
points, one at each side of the line.
Then, I would ask for all spectra in the cubes having this condition.
Solution
--------
There are several ways to solve this problem using VO tools and
services, but we will, somewhat unorthodoxly, use TAP, a protocol to
exchange SQL (actually, ADQL_) queries and query results with servers.
There are several clients that can do this; if you have no other
preferences, use TOPCAT_. There, open VO/TAP, push the pin in the upper
left corner of the window (to keep the window open after sending off a
query), and in the TAP URL field below, enter http://dc.g-vo.org/tap.
Hit "Enter query", and you are in a dialog that lets you inspect a
server's table metadata (look for tables in califadr3) and enter
queries.
The targets of the CALIFA survey and their redshifts are in the
`califadr3.objects`_ table. So, to compute the wavelengths in question you
could say::
select
target_name,
califaid,
6563*(1+redshift) as lha,
6540*(1+redshift) as lri,
6700*(1+redshift) as lle
from califadr3.objects (1)
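If you want to check the arithmetic of query (1) outside of ADQL, the redshift scaling is easy to reproduce; here is a short Python sketch (the variable names are mine, not part of the service):

```python
# Rest wavelengths (in Angstrom) used in query (1): Halpha and the
# two continuum points on either side of it.
REST = {"lha": 6563.0, "lri": 6540.0, "lle": 6700.0}

def observed_wavelengths(redshift):
    # Observed wavelength = rest wavelength * (1 + z)
    return {name: lam * (1 + redshift) for name, lam in REST.items()}

# For a galaxy at z = 0.01, in the middle of the CALIFA redshift range:
print(observed_wavelengths(0.01))
```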
The wavelength/flux pairs, on the other hand, are in the tables
`califadr3.fluxv500`_ (or v1200) and `califadr3.fluxposv500`_; in the former,
flux is given over lambda, CALIFA id (a small integer; let's use 909,
corresponding to UGC 12519, as an example further down), and pixel coordinates;
in the latter, over celestial positions.
Since we cannot usefully compare floats for equality in general, and
in particular not here, to retrieve fluxes for a single wavelength you
need to restrict lambda to a sufficiently wide interval. How wide the
interval needs to be you can figure out by determining the spacing of
the samples. For the V500 data set, this could look like this::
select distinct top 2 lambda
from califadr3.fluxv500
order by lambda (2)
(This is fast although there are dozens of millions of lambdas in this table,
because there is an index on lambda, and there are not many distinct values.)
If you try it, you'll see that we have steps of two Ångström, so you'll
want the interval to be something like 2.2 Ångström.
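To convince yourself that a 2.2 Ångström window on a 2 Ångström grid always catches at least one sample, you can play this through in a few lines of Python (the grid below is made up to match the spacing we just measured; the actual V500 coverage differs):

```python
# A toy wavelength grid with the 2 Angstrom spacing found with query (2);
# only the spacing matters here, not the exact range.
grid = [3700 + 2 * i for i in range(1901)]   # 3700 .. 7500 Angstrom

def samples_near(lam_obs, half_width=1.1):
    # All grid wavelengths within +/- half_width of the target wavelength
    return [lam for lam in grid if abs(lam - lam_obs) <= half_width]

# A 2.2 Angstrom interval on a 2 Angstrom grid contains one or two samples:
for z in (0.005, 0.012, 0.03):
    hits = samples_near(6563 * (1 + z))
    assert 1 <= len(hits) <= 2
```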
To obtain fluxes while having the redshift available -- as needed
here -- you need to join the objects table with the flux tables; here, you can
use a "NATURAL" join, which means that all identically named columns are
used for joining::
select top 5000 target_name, xindex, yindex, lambda, flux
from
califadr3.fluxv500
natural join califadr3.objects
where
lambda between 6563*(1+redshift)-1.1 and 6563*(1+redshift)+1.1
and target_name='UGC12519' (3)
The "top 5000" is necessary here to retrieve all fluxes: to protect your
nerves, the server inserts a "top 2000" unless you order something
else. The restriction on the target name makes sure we don't retrieve
the fluxes near our wavelengths for all objects in the CALIFA tables.
This table has pixel coordinates. The physical positions for each pixel are
in a table called `califadr3.spectra`_. To get them into your table,
you'll need another join::
select top 5000 target_name, raj2000, dej2000, lambda, flux
from
califadr3.fluxv500
natural join califadr3.objects
natural join califadr3.spectra
where
lambda between 6563*(1+redshift)-1.1 and 6563*(1+redshift)+1.1
and target_name='UGC12519' (4)
At this point you could retrieve the flux maps at the three wavelengths
and use, for instance, TOPCAT's crossmatch functionality to do the match
client-side. However, let's assume we want to keep processing
server-side, e.g., because we want to run this on a lot of objects and
don't want to download all the layers.
To retrieve fluxes at multiple wavelengths, you can query like this::
select top 5000 target_name, raj2000, dej2000, lambda, flux
from
califadr3.fluxv500
natural join califadr3.objects
natural join califadr3.spectra
where (
lambda between 6563*(1+redshift)-1.1 and 6563*(1+redshift)+1.1
or lambda between 6540*(1+redshift)-1.1 and 6540*(1+redshift)+1.1
or lambda between 6700*(1+redshift)-1.1 and 6700*(1+redshift)+1.1)
and target_name='UGC12519' (5)
The problem with this is that you cannot compute the criterion for
strong Halpha emission from this, as the three fluxes are in three
different rows. When you are in that situation, SQL offers grouping --
this means that rows sharing a criterion end up in a bag; you can then
use "aggregate functions" on these. Note that, as with most things in the
relational world, the items in the bag are not sorted.
As a general word of advice: In the SQL world, thinking in array terms
is typically going to lead to complicated and slow queries. Try
thinking in terms of sets and matters will become manageable.
ADQL's aggregate functions are a bit limited, but for this case they're
just enough -- just compare the maximum flux to the average flux::
select top 5000 califaid, xindex, yindex
from
califadr3.fluxv500
natural join califadr3.objects
where (
lambda between 6563*(1+redshift)-1.1 and 6563*(1+redshift)+1.1
or lambda between 6540*(1+redshift)-1.1 and 6540*(1+redshift)+1.1
or lambda between 6700*(1+redshift)-1.1 and 6700*(1+redshift)+1.1)
and target_name='UGC12519'
group by califaid, xindex, yindex
having (max(flux)/avg(flux)>3) (6)
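The GROUP BY/HAVING logic of query (6) is perhaps easier to see spelled out procedurally; here it is with made-up fluxes for two pixels (six samples each, as our three 2.2 Å windows will typically yield one or two samples apiece):

```python
# Rows as (xindex, yindex, lambda, flux), as query (5) would return them;
# the numbers are invented for illustration.
rows = [
    # pixel (10, 12): a strong line at the redshifted Halpha wavelength
    (10, 12, 6611.0, 2.0), (10, 12, 6613.0, 2.1),
    (10, 12, 6634.0, 30.0), (10, 12, 6636.0, 2.3),
    (10, 12, 6749.0, 2.1), (10, 12, 6751.0, 2.2),
    # pixel (11, 12): featureless continuum
    (11, 12, 6611.0, 2.0), (11, 12, 6613.0, 2.1),
    (11, 12, 6634.0, 2.2), (11, 12, 6636.0, 2.1),
    (11, 12, 6749.0, 2.0), (11, 12, 6751.0, 2.2),
]

# GROUP BY xindex, yindex: throw the fluxes of each pixel into one bag
by_pixel = {}
for x, y, lam, flux in rows:
    by_pixel.setdefault((x, y), []).append(flux)

# HAVING max(flux)/avg(flux) > 3: keep pixels dominated by one sample
spots = [pix for pix, fluxes in by_pixel.items()
         if max(fluxes) / (sum(fluxes) / len(fluxes)) > 3]
print(spots)   # only the pixel with the emission line survives
```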
We group on pixel coordinates here as that's much more robust (and also
somewhat faster) than going by the floating point ra/dec pairs. To turn the
pixel coordinates into positions you can do something with, again do a join
with califadr3.spectra as in (4), except that this time you turn the entire
query so far into a subquery::
select raj2000, dej2000 from (
select califaid, xindex, yindex
from
califadr3.fluxv500
natural join califadr3.objects
where (
lambda between 6563*(1+redshift)-1.1 and 6563*(1+redshift)+1.1
or lambda between 6540*(1+redshift)-1.1 and 6540*(1+redshift)+1.1
or lambda between 6700*(1+redshift)-1.1 and 6700*(1+redshift)+1.1)
and target_name='UGC12519'
group by califaid, xindex, yindex
having (max(flux)/avg(flux)>3)) as spots
natural join califadr3.spectra (7)
Finally, if you want to have all such "interesting" points in CALIFA,
drop the constraint on the object name::
select raj2000, dej2000 from (
select califaid, xindex, yindex
from
califadr3.fluxv500
natural join califadr3.objects
where (
lambda between 6563*(1+redshift)-1.1 and 6563*(1+redshift)+1.1
or lambda between 6540*(1+redshift)-1.1 and 6540*(1+redshift)+1.1
or lambda between 6700*(1+redshift)-1.1 and 6700*(1+redshift)+1.1)
group by califaid, xindex, yindex
having (max(flux)/avg(flux)>3)) as spots
natural join califadr3.spectra (8)
This is a fairly long-running query, which will time out on you when
you enter it through the "synchronous" TAP endpoint that TOPCAT uses by
default (because it's simple and has little overhead). To use the
"async" endpoint, uncheck "Synchronous" in TOPCAT's TAP dialog (or do
the equivalent thing in another client). With this, you can turn off
your computer, take it somewhere else, and resume operations when you
get back; in this case, the job should be done in 20 minutes or so,
depending on server load and similar factors.
Once you have the data, try a few of the nice VO features you get. If
you used TOPCAT to query, the result is in a table. With this, start
Aladin_, in TOPCAT, select Interop/Send Table To/Aladin and watch your
matches in Aladin. Hit, e.g., "Optical" in Aladin, and you can zoom in
on the "interesting" spots and see them overplotted on sky images.
You could now load the corresponding spectrum into a spectral analysis
tool like, say, Splat. To do that, in Aladin click on one point, go to
the load dialog and there the "all VO" tab. Optionally, go to "Detailed
List", hit "Uncheck all" and just check the "CALIFA Spectra" service
(this is going to speed up your queries significantly).
Then start Splat_, in the Aladin load dialog set the search radius to
0.01' (i.e., 0.6 arcsec) and submit. You should see both the V1200 and
the V500 spectra -- right click on the one you want to see, select "Open
with", "Splat", and work with your spectrum there.
If you don't want to get Splat, TOPCAT will do as well, although it has
much less built-in knowledge about spectra -- in that case, open
TOPCAT's VO/SSA dialog, in the list of SSA services select "califa ssa",
make sure "Accept Sky Positions" is checked, in "Diameter" again say
something like 0.6 arcsec. If you hover over a point in Aladin, you'll
get your positions filled in, and if you hit "Ok", the spectrum will be
retrieved. If you, again, use the pushpin to make TOPCAT keep the
window open, you have a quick way of downloading spectra that look
interesting to you.
.. _topcat: http://www.star.bris.ac.uk/~mbt/topcat/
.. _adql: http://www.g-vo.org/adql
.. _aladin: http://aladin.u-strasbg.fr/aladin.gml
.. _splat: http://star-www.dur.ac.uk/~pdraper/splat/splat-vo/
.. _califadr3.spectra: /__system__/dc_tables/show/tableinfo/califadr3.spectra
.. _califadr3.objects: /__system__/dc_tables/show/tableinfo/califadr3.objects
.. _califadr3.fluxposv500: /__system__/dc_tables/show/tableinfo/califadr3.fluxposv500
.. _califadr3.fluxv500: /__system__/dc_tables/show/tableinfo/califadr3.fluxv500
]]>
common stuff for the flux tables; set the setup attribute
to v500 or v1200.
2147483647
True
True
Flux and errors versus position for CALIFA setup
\setup. Positions are pixel indices into the CALIFA cubes. The
associated positions are in califadr3.spectra; use
"JOIN califadr3.spectra USING (califaid, xindex, yindex)" to join
that table (or use the fluxpos tables).
QC parameters come as mean, max, and rms
//scs#pgs-pos-index
sdl
ssa_pubDID
Metadata for individual spectra. Note that the spectra result
from reducing a complex dithering scheme and are not independent
from one another.
//ssap#hcd
Common items for the joins between the flux tables and spectra.
See fluxitems, and again give a setup attribute.
True
true
Data cubes of positions and fluxes in the optical for a sample of
galaxies, obtained by the CALIFA project in the \setup setup.
Note that due to the dithering
scheme, the points here do not actually correspond to raw measurements
but instead represent a reduction of several measurements.
CREATE VIEW \\curtable (\\colNames) AS (SELECT
\\colNames FROM \\schema.fluxv500 JOIN \\schema.spectra
USING (califaid, xindex, yindex))
Metadata for the CALIFA data cubes as delivered by the project.
dl
obs_publisher_did
//products#table
//obscore#publish
application/x-votable+xml;content=Datalink
Datalink
Each flag may be NULL (undefined), 0 (good quality), 1 (warning:
minor issues that do not significantly affect the quality), or
2 (bad: significant issues affecting the quality)
The CALIFA id is
* <1000 for objects from the diameter-selected mother
sample (cf. 2014A&A...569A...1W)
* 1000 .. 1999 for the dwarf galaxy extension (34 observed)
* 2000 .. 2999 for companions of main sample galaxies (29 observed)
* 3000 .. 3999 for the early-type galaxy extension (36 observed)
* 4000 .. 4999 for galaxies observed during pilot studies (3 included)
* 5000 .. 5999 for the supernova environments extension (14 observed)
* 7001 and 8000 for two solitary galaxies
* 9000 .. 9999 for the compact early-type galaxy extension (17 included)
The califa cubes come in three setups:
* V1200 – R about 1650, covering 340 .. 484 nm, with significant
vignetting outside of 365 .. 462 nm; exposure time 1800 s per pointing
(3 pointings per object).
* V500 – R about 850, covering 374.5 .. 750 nm, with significant
vignetting outside of 424 .. 714 nm; exposure time 900 s per pointing
(3 pointings per object).
* COMBO – V500 data combined with V1200 data in the overlap region.
These have better S/N there. The unvignetted wavelength range
for these is 370-714 nm.
2.64824e-19 5.44232e-19
55357.9 57378.2
6/43,50,56,131,545,552,554,556,568,573,620-621,626-627,721,733,738,907-909,948-949,975,995,1701,2206,2228-2229,2248,2251,2254,2272-2274,2276,2290,2641,4053,4167,4178,4279,4342,4445,4454,4472,4477,4507,4511,4526,4647,4654-4655,4667,4688,4696,4704,4712,4719,4731,4753-4754,4772,4799,4801,4806-4807,4813,4829,4840,4843-4844,4850,4912,4950,4957,4993,5023-5024,5028,5030,5075,5115,5148,5241,5247,5298,5332,5399,5482,5505,5530,5533,5579,5586,5591,5593,5596,5648,5695,5701,5704,5757,5862,5889,5942,5951,5975,5980,6000,6011,6045-6046,6074,6087,6093,6096,6116,6162,6212,6214,6216,6240-6241,6261,6270,6342,6362,6366,6411,6448,6465,6501,6532,6541,6547,6554,6607,6630,6977,7198,7204,7214,7308-7309,7312,7340,7386,7442,7501,7559-7560,7572,7584,7589,7591,7696,7783,7822,7824,7834,8045,8164,8195,8297,8327,8331,8341,8366,8376,8458,8477,8479-8480,8492,8498,8522,8536,8552,8557,8562-8563,8570,8589,8605,8621,8646,8656,8673,8705,8719,8728,8743,8752,8754,8825,8883,8898,8900,8919,8921-8922,8973,8984,8992,9098,9101,9124,9126,9133,9141,9187,9190,9216,9220,9381,9468,9470-9471,9622,9628,9630,9695,9718,9798,9812,9859,9873-9875,9882-9883,9885,9891,9946,9974,9991,10025,10111,10113,10144,10182-10183,10189,10200,10203,10213,10220-10221,10280,10286,10379-10381,10398,10400-10401,10414,10422,10434,10448,10485,10501,10511-10512,10515,10518-10520,10523-10524,10530,10540,10560,10566,10570,10577,10583,10586,10588,10594-10595,10609,10612,10621,10627-10629,10637,10661-10662,10669,10674,10705,10711,10719,10721,10727,10736,10739-10740,10752,10773,10778,10846,10921,10987,11010,11026,11044,11070,11111-11112,11164,11173,11196,11272,11290,11314,11325-11326,11341-11342,11369,11372,11374,11377,11383,11387,11392,11396,11418,11433,11435,11443,11451,11525,11544,11761,11777,11789,11793,11800-11801,11804,11832,11905,11944,12306,12378,12399,12532,12709,13316,13380-13381,13426,13529,15002,15004,16007,16696,16702,16787-16788,16793,17233-17234,17237-17238,17350,17392,17398,17486,17492,17499,17502,17509,17511,17516,17520,17571,17578,1
7661-17663,17667,17672,17745,17747-17748,17751,17758,17761,17767,17773,17819,17912,17998,18008,18069,18071,18081,18173,18241-18242,18252,18295,18300-18302,18381,18383,18523,18775,18779,18781,18790,18800,18838,18854,18931-18932,18934-18935,18941,19027,19073,19108,19186,19227,19260,19267,19325,19387,19417,19432,19458,19487,19489,19508,19515,19536-19538,19593,19618,19633,19642,19678,19696,19749,19848,19939-19940,19954,19981,19983-19984,19991,19996-19997,20009,20020,20023,20031-20033,20040,20193,20254,20279,20417-20418,20429,21857,22353,22356,22361,22364,22651-22652,22654-22655,22737,22740-22742,22826,23051,23072,23076,23182,23204,23206,23208,23273,23337,23353,23356,23401,23405,24055,26102,26108,26125,26318,26451,26495,26551,27133,27259,27368,27559,27567,27608,27719,27748,27830,27862,27912,27914-27915,28112,28213,28263-28264,28294,28310,28322,28365,28430,28458,28491,28497,28513,28522,28579,28606,28617,28639,29973,31478,31546,31628,31642-31643,31688-31689,31736-31737,32462,32471,32478,32482,32555,32559,32682,36604,36733,36765,36793,36797,36805,36847,36857,36861,48880,49084,49121,49130,49150
Object data for DR3 sample.
The photometric and derived quantities are from growth curve
analysis of the SDSS images for galaxies from the mother sample
(califaid<1000), from SDSS DR7/12 photometry otherwise.
Position J2000 "raj2000" "dej2000"
Redshift OPTICAL "redshift"
//ssap#sdm-instance
A spectrum from the CALIFA project
CALIFA dithers their observations from three pointings and several
fibers (in a way I don't quite understand). Several values are
obtained by manipulating sequences of these individual observations.
The headers all start with PPAK, then Pn(Fm).
This apply computes such values.
import operator
_SUB_OBS_KEYS = ( # V1200 keys
"P1F1 P1F2 P1F3 P2F1 P2F2 P2F3 P3F1 P3F2 P3F3".split()+
# V500 keys
"P1 P2 P3".split())
def getForAllPointings(prefixes, header, keyword):
res = []
for prefix in prefixes:
for subKey in _SUB_OBS_KEYS:
key = "%sPPAK %s %s"%(prefix, subKey, keyword)
if key in header:
res.append(header[key])
return res
prefixes = {
"V500": ["V500 "],
"V1200": ["V1200 "],
"COMB": ["V500 ", "V1200 "]}[@setup]
mjds = getForAllPointings(prefixes, vars, "MJD_OBS"
)+getForAllPointings(prefixes, vars, "MJD-OBS")
if not mjds:
base.ui.notifyWarning("No pointings found for "+\inputRelativePath)
vars["meanDateObs"] = vars["t_min"] = \
vars["t_max"] = vars["cumulativeExposure"] = None
else:
vars["meanDateObs"] = sum(mjds)/len(mjds)
vars["t_min"] = min(mjds)
vars["t_max"] = max(mjds)
vars["cumulativeExposure"] = sum(
getForAllPointings(prefixes, vars, "EXPTIME"))
The califa cubes have totally free-form object names.
This stream normalises them using a huge translation table.
The result is a key cleaned_object.
"cleaned_object"
True
"califa/res/dr3namemap"
@OBJECT.split("_")[-1]
previews3/spectra
datadr3/*.rscube.fits
@accref = @obsId
@pubDID = getStandardPubDID(@accref)
yield row
50000
"application/x-votable+xml"
makeAbsoluteURL(
"\rdId/sdl/dlget?ID="+urllib.parse.quote(@pubDID))
"califadr3.spectra"
@accref
\standardPreviewPath
"image/png"
@CRVAL3+(1-@CRPIX3)*@CDELT3
@CRVAL3+(@NAXIS3-@CRPIX3)*@CDELT3
@specHi-@specLo
(@specHi+@specLo)/2
@RA
@Dec
@CALIFAID
@RA
0.0003
"OPTICAL"
@meanDateObs
3
@Dec
"CALIFA %s %s-%s-%s"%(
@cleaned_object, @setup, @raInd, @decInd)
@NAXIS3
getStandardPubDID(@prodtblAccref)
@MED_VEL/3e5
@computedSNR if @computedSNR==@computedSNR else None
@specExt*1e-10
@specMid*1e-10
@specLo*1e-10
@specHi*1e-10
@cleaned_object
"Galaxy"
@cumulativeExposure
Common stuff for importing the fluxpos tables for the two setups.
You'll need to give the setup parameter (v500 or v1200).
make_fluxpos\setup
datadr3/V500/*.rscube.fits
datadr3/COMB/*.rscube.fits
datadr3/V1200/*.rscube.fits
cubePreviews
datadr3/*.rscube.fits
"\schema.cubes"
\dlMetaURI{dl}
'application/x-votable+xml;content=datalink'
10000
\standardPreviewPath
"image/jpeg"
datetime.datetime(2016, 4, 10)
if "V1200" in \inputRelativePath:
@setup = "V1200"
@em_xel = 1701
elif "V500" in \inputRelativePath:
@setup = "V500"
@em_xel = 1877
elif "COMB" in \inputRelativePath:
@setup = "COMB"
@em_xel = 1901
else:
@setup = "UNKNOWN"
from gavo.utils import pyfits
def nanToNone(val):
if val!=val:
return None
return val
def fitsTableToDict(relPath, setup):
rd = parent.parent.table.rd
hdus = pyfits.open(rd.getAbsPath(relPath))
fitsTable = hdus[1].data
names = [n.lower() for n in fitsTable.dtype.names]
return dict(((d["califaid"], setup), d)
for d in (
dict(zip(names, map(nanToNone, row)))
for row in fitsTable))
@utils.memoized
def getQCFlags():
res = fitsTableToDict(
"datadr3/QCflags_std_V500_DR3.fits", "V500")
res.update(
fitsTableToDict("datadr3/QCflags_std_V1200_DR3.fits",
"V1200"))
res.update(
fitsTableToDict("datadr3/QCflags_std_COMB_DR3.fits",
"COMB"))
return res
@utils.memoized
def getQCParams():
res = fitsTableToDict(
"datadr3/QCpars_std_V500_DR3.fits", "V500")
res.update(
fitsTableToDict(
"datadr3/QCpars_std_V1200_DR3.fits", "V1200"))
res.update(
fitsTableToDict(
"datadr3/QCpars_std_COMB_DR3.fits", "COMB"))
return res
vars.update(getQCFlags()[@CALIFAID, @setup])
if @setup!="COMB":
vars.update(getQCParams()[@CALIFAID, @setup])
# assume CRVAL1, CRVAL2 are approximate center and make a circle
# from it
vars["roughCircle"] = pgsphere.SCircle(
pgsphere.SPoint.fromDegrees(@CRVAL1, @CRVAL2),
1/60.*math.pi/180)
vars["s_ra"] = @CRVAL1
vars["s_dec"] = @CRVAL2
\standardPubDID
for col in context.getById("cubes"):
if col.name.startswith("flag_"):
yield {"item": col.name}
from gavo.utils import pyfits
rawdicts = {}
filesRead = 0
global filesRead
table = pyfits.open(self.sourceToken)[1].data
colNames = [c.name.lower() for c in table.columns.columns]
if "classnum" in self.sourceToken:
colNames[-3:] = ["num_"+n for n in colNames[-3:]]
if "Mstar" in self.sourceToken:
colNames[-15:] = ["abs_"+n for n in colNames[-15:]]
rows = [dict(zip(colNames, tuple)) for tuple in table]
idCol = "CALIFAID"
if not idCol in rows[0]:
idCol = "califaid"
for row in rows:
rawdicts.setdefault(row[idCol], {}).update(row)
filesRead += 1
if filesRead==9: # that many I expect
for rawdict in rawdicts.values():
yield rawdict
target_name: dbname,
raj2000: ra,
dej2000: de,
axis_ratio: ba,
magu: u,
magg: g,
magr: r,
magi: i,
magz: z,
err_magu: u,
err_magg: g,
err_magr: r,
err_magi: i,
err_magz: z,
redshift: bestz,
maj_axis: isoa_r
if not @vmax_nocorr>0:
@vmax_nocorr = None
if not @vmax_denscorr>0:
@vmax_denscorr = None
if @minmerg=='M':
@mergesig = ' (M)'
elif @merg=='M':
@mergesig = ' (m)'
elif @minmerg=='M':
@mergesig = ' (i)'
else:
@mergesig = ''
CALIFA DR3 tables
from gavo.utils import pyfits
from gavo.utils import fitstools
parts = re.match("(.*)-(\d+)-(\d+)$",
self.sourceToken["accref"]).groups()
xInd, yInd = int(parts[1]), int(parts[2])
hdus = pyfits.open(
os.path.join(base.getConfig("inputsDir"),
parts[0]+".rscube.fits"))
# see section 4 of the paper
values, errors, errWeights, valid = hdus[:4]
fluxes = values.data[:,yInd, xInd]
fluxErrors = errors.data[:,yInd, xInd]
isNull = valid.data[:, yInd, xInd]
lambdaAxis = fitstools.WCSAxis.fromHeader(hdus[0].header, 3,
forceSeparable=True)
for ind, (flux, error, skip) in enumerate(
zip(fluxes, fluxErrors, isNull)):
if not skip:
yield {"flux": flux, "spectral": lambdaAxis.pix0ToPhys(ind),
"error": error}
CALIFA Cube Datalink Service
CALIFA cubes can be cut out along RA, DEC, and spectral axes.
CIRCLE and POLYGON cutouts yield bounding boxes. Also note that the
coverage of CALIFA cubes is hexagonal in space. This explains
the empty area when cutting out :genparam:`CIRCLE(225.5202 1.8486 0.001)`
:genparam:`BAND(366e-9 370e-9)` on
:dl-id:`ivo://org.gavo.dc/~?califa/datadr3/V1200/UGC9661.V1200.rscube.fits`.
accref = getAccrefFromStandardPubDID(descriptor.pubDID)
setups = {
"V500": "larger coverage lower resolution",
"COMB": "larger coverage lower resolution high-SNR",
"V1200": "smaller coverage higher resolution"}
for srcSetup in setups:
if srcSetup in accref:
break
else:
raise NotImplementedError("Unknown setup")
for destSetup in setups:
if srcSetup==destSetup:
continue
newAccref = accref.replace(srcSetup, destSetup)
if os.path.exists(
os.path.join(base.getConfig("inputsDir"), newAccref)):
yield descriptor.makeLink(
makeAbsoluteURL("califa/q3/dl/dlmeta?ID=%s"%(
urllib.parse.quote(getStandardPubDID(newAccref)))),
contentType=base.votableType+";content=datalink",
description="This cube, %s"%setups[destSetup],
contentQualifier="#cube")
"%s/datadr2/%%s.V1200.rscube.fits"%rd.resdir
"%s/datadr2/%%s.V500.rscube.fits"%rd.resdir
"%s/data/V1200/reduced_v1.3c/%%s.V1200.rscube.fits"%rd.resdir
"%s/data/V500/reduced_v1.3c/%%s.V500.rscube.fits"%rd.resdir
objId = os.path.basename(descriptor.pubDID).split(".")[0]
for template, descFrag, dlid in [
(dr2v1200template, "2 medium resolution (V1200)",
"califa/q2#dl"),
(dr2v500template, "2 low resolution (V500)",
"califa/q2#dl"),
(dr1v1200template, "1 medium resolution (V1200)",
"califa/q#dl"),
(dr1v500template, "1 low resolution (V500)",
"califa/q#dl")]:
path = os.path.join(base.getConfig("inputsDir"), template%objId)
if os.path.exists(path):
destLink = base.resolveCrossId(dlid).getURL("dlmeta"
)+"?ID="+urllib.parse.quote(getStandardPubDID(path))
yield descriptor.makeLink(destLink,
description="This cube in Data Release "+descFrag,
contentType="application/x-votable+xml;content=datalink",
contentLength=10000,
contentQualifier="#cube")
'califa/data'
DLFITSProductDescriptor
'#cube'
CALIFA Spectral Datalink Service
"\rdId#spectra"
if descriptor.pubDID is None:
return
cubeDID = "-".join(descriptor.pubDID.split("-")[:-2])+".rscube.fits"
yield descriptor.makeLink(
rd.getById("dl").getURL("dlmeta")+"?ID="+urllib.parse.quote(cubeDID),
description="Full data cube this spectrum was taken from",
contentType="application/x-votable+xml;content=datalink",
contentLength=10000,
contentQualifier="#cube")
"\rdId#makePartialSpectrum"
Split spectra from the CALIFA DR3 cubes. This service serves one spectrum
each per pixel in each cube where there is at least one valid
spaxel. Where both V500 and COMB data is available, COMB spectra
are served. WARNING: The individual spectra are not independent.
Also, error estimates over wide spectral ranges based on the
error estimates served here are unreliable.
auto
califa ssa
CALIFA DR3
survey
POS=286.4171792%2C63.92494278&SIZE=0.001
cutout
http://califa.caha.es/
GAVO CALIFA Cubes
s/ssap.xml
row = self.getFirstVOTableRow()
self.assertEqual(row['ssa_pubDID'],
'ivo://org.gavo.dc/~?califa/datadr3/COMB/UGC12519.COMB-75-47')
self.assertAlmostEqual(row['ssa_specstart'], 3.701e-07)
self.assertAlmostEqual(row['ssa_dateObs'], 55830.9910027067)
self.assertAlmostEqual(row['dej2000'], 15.9571125164442)
self.assertTrue(row['accref'].endswith(
'califa/datadr3/COMB/UGC12519.COMB-75-47'))
self.assertAlmostEqual(row['ssa_timeExt'], 2700.0)
self.assertEqual(row['ssa_dstitle'],
'CALIFA UGC12519 COMB-75-47')
/getproduct/califa/datadr3/COMB/UGC12519.COMB-67-11
row = self.getFirstVOTableRow(rejectExtras=False)
self.assertAlmostEqual(row['spectral'], 5017.0)
self.assertAlmostEqual(row['flux'], 0.00326692801900208)
self.assertAlmostEqual(row['error'], 0.0051887105)
/getproduct/califa/datadr3/V1200/UGC12519.V1200-01-38
row = self.getFirstVOTableRow(rejectExtras=False)
self.assertAlmostEqual(row['spectral'], 3650.0)
self.assertAlmostEqual(row['flux'], -0.08278895169496536)
self.assertAlmostEqual(row['error'], 0.4010688)
sdl/dlmeta
rows = self.getVOTableRows()
rowsPassed = 0
for row in rows:
if row["semantics"]=="#progenitor":
self.assertEqual(row["access_url"].split("/q3/")[1],
"dl/dlmeta?ID=ivo%3A//org.gavo.dc/~%3Fcalifa/datadr3/COMB/UGC12519.COMB.rscube.fits")
rowsPassed += 1
self.assertEqual(rowsPassed, 1)
sdl/dlmeta
self.assertXpath("v:RESOURCE[@type='meta']/v:GROUP[@name='inputParams']"
"/v:PARAM[@name='BAND']/v:VALUES/v:MIN",
{"value": EqualingRE("3.65\\d*e-07")})
self.assertXpath("v:RESOURCE[@type='meta']/v:GROUP[@name='inputParams']"
"/v:PARAM[@name='BAND']/v:VALUES/v:MAX",
{"value": EqualingRE("4.8\\d*e-07")})
dl/dlmeta
from gavo import votable
import io
data, metadata = votable.load(io.BytesIO(self.data))
rows = list(sorted(metadata.iterDicts(data), key=lambda d:
(d["semantics"], d["access_url"])))
bySemantics = {}
for row in rows:
if row["error_message"]:
self.fail("Fault declared in Datalink: %s"%row["error_message"])
if row["semantics"] in ("#counterpart", "#old-version", "#this"):
self.assertEqual(row["content_qualifier"], "#cube")
bySemantics.setdefault(
(row["access_url"] and row["access_url"].split("/", 3)[-1],
row["semantics"]), []
).append(row)
row = bySemantics[(u'califa/q3/dl/dlmeta?ID=ivo%3A//org.gavo.dc/~%3Fcalifa/datadr3/COMB/UGC12519.COMB.rscube.fits',
"#counterpart")][0]
self.assertEqual(row["content_type"],
'application/x-votable+xml;content=datalink')
self.assertEqual(row["description"],
'This cube, larger coverage lower resolution high-SNR')
row = bySemantics[(u'califa/q3/dl/dlmeta?ID=ivo%3A//org.gavo.dc/~%3Fcalifa/datadr3/V1200/UGC12519.V1200.rscube.fits',
"#counterpart")][0]
self.assertEqual(row["content_type"],
'application/x-votable+xml;content=datalink')
self.assertEqual(row["description"],
'This cube, smaller coverage higher resolution')
row = bySemantics[(None, u'#proc')][0]
self.assertNotEqual(row["service_def"], None)
row = bySemantics[(None, u'#proc')][0]
self.assertNotEqual(row["service_def"], None)
row = bySemantics[(u'califa/q3/dl/dlget?ID=ivo%3A//org.gavo.dc/~%3Fcalifa/datadr3/V500/UGC12519.V500.rscube.fits', u'#this')][0]
self.assertEqual(row["content_type"], "image/fits")
self.assertEqual(row["description"], "The full dataset.")
for r in rows:
self.assertEqual(r["ID"],
"ivo://org.gavo.dc/~?califa/datadr3/V500/UGC12519.V500.rscube.fits")
self.assertXpath(
"//v:RESOURCE[2]/v:GROUP[@name='inputParams']/v:PARAM[@name='DEC']", {
"ucd": "pos.eq.dec",
"unit": "deg",
"xtype": "interval",
"arraysize": "2"})
self.assertXpath(
"//v:RESOURCE[2]/v:GROUP[@name='inputParams']/v:PARAM[@name='RA']"
"/v:DESCRIPTION", {
None: "The longitude coordinate"})
self.assertXpath(
"//v:RESOURCE[2]/v:GROUP[@name='inputParams']/v:PARAM[@name='BAND']"
"/v:VALUES/v:MIN", {
"value": "3.749e-07"})
dl/dlget
self.assertHasStrings(
"BITPIX = -32",
"NAXIS2 = 5",
"CDELT3 = 2.0",
"H79")
dl/dlget
self.assertHasStrings(
"BITPIX = -32",
"NAXIS1 = 15",
"NAXIS3 = 7",
b"\\x0a\\x48\\x3c\\xad\\x75\\xab")
dl/dlmeta
oldVersions, onBigServer = [], False
for row in self.getVOTableRows():
if row["semantics"]=="#old-version":
oldVersions.append(row["access_url"].split("/califa/")[1])
self.assertEqual(row["content_qualifier"], "#cube")
if not row["access_url"].startswith("http://local"
) and not row["access_url"].startswith("http://victor"):
onBigServer = True
self.assertTrue('q2/dl/dlmeta?ID=ivo%3A//org.gavo.dc/~'
'%3Fcalifa/datadr2/UGC12519.V1200.rscube.fits'
in oldVersions, "Califa DR2 missing")
if onBigServer:
self.assertEqual(len(oldVersions), 2,
"Califa old data data missing")
dl/dlmeta
self.assertXpath(
"//v:RESOURCE[2]/v:GROUP/v:PARAM[@name='BAND']", {
"xtype": "interval",
"ucd": "em.wl"})
dl/dlasync
self.assertHTTPStatus(303)
self.pointNextToLocation("/phase")
self.assertHeader("location",
EqualingRE(".*/califa/q3/dl/dlasync/.*"))
overridden in previous test
self.assertHTTPStatus(303)
self.assertHeader("location",
EqualingRE(".*/califa/q3/dl/dlasync/.*"))
self.pointNextToLocation()
overridden in the previous test
EXECUTING")
except AssertionError:
self.assertHasStrings("COMPLETED")
else:
import time; time.sleep(2) # with a little bit of luck,
# this might be enough for the job to finish, and we won't need
# to do the evil loop in the next test (that will poison our
# test counts and is evil in other ways, too)
self.followUp.url.content_ = self.url.httpURL
]]>
overridden in the previous test
EXECUTING" in self.data:
import time
time.sleep(2)
self.followUp = self
else:
self.followUp = self.realFollowUp
self.assertHasStrings("COMPLETED")
self.followUp.url.content_ = self.url.httpURL+"/results/result"
]]>
overridden by previous test
self.assertHasStrings("SIMPLE = T",
"NAXIS1 = 15",
"NAXIS3 = 177",
b"\\xbd\\xd6\\x38\\x4c\\x3d\\xb5\\xdc\\xd9")