This directory contains three datasets: one created by Svea ca. 2010
(q.rd), another done by Jonas in 2018 (q2.rd), and a third one by Jonas
from 2021 (q3.rd).

TODO: create relationships between the three datasets as q3 goes live.

All three are lists of candidates for "positional" lensing.

eDR3 update
===========

As before:

* dachs imp q3 build_highpm (where "high proper motion" is now not to be
  taken too literally; this does a seqscan through edr3 and hence takes
  a while)
* dachs imp q3 build_rawcands -- the result is plc3.rawcands.
* then some magic happens at Jonas' end (TODO: figure it out), yielding
  data3/amlensing.fits.
* I run the import.

Jonas' new data
===============

Jonas created his list as follows:

* dachs imp q2 build_highpm; this yields a table plc2.highpm of high
  proper motion stars from Gaia DR2, together with the areas we should
  check for stars (because they might be lensed; see the region sketch
  at the end of this file).
* dachs imp q2 build_rawcands; this actually checks whether suitable
  stars are within the regions of interest established in step 1.  This
  results in a table plc2.rawcands.
* Jonas pulled the result and did processing on his box, which
  eventually included running creation/amlensing.py.
* That script adds some quality information obtained from a full DR2
  table (which is in data/hpms.vot; see the paper for how it came to
  be), and then computes the impact geometries to figure out which
  events might actually be observable (a sketch of that computation is
  at the end of this file).
* This gave data/amlensing2.fits, which we then ingest.

Svea's old data
===============

The final list is created in three steps:

* Select candidate lens stars using ``gavo imp q import_highpm``; this
  runs SQL scripts that pull the data into a table plc.highpm in a
  common format.
* Select candidate lens-lensed pairs by crossmatching with PPMXL, with
  some tricks.  Use ``gavo imp q match``; data is left in plc.rawcands.
* Compute predicted encounters, etc., for the object pairs in
  plc.rawcands using ``gavo imp q compute``.  What's actually executed
  is defined in res/makecands.py.

To make the tests run on a development machine that doesn't have the
full source catalogs, transfer a dump::

  alnilam:      dachs dump create trans.dump amlensing/q#data
  dev machine:  dachs dump load trans.dump
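
Illustrative sketches
=====================

Neither of the following snippets is the actual pipeline code (that
lives in the RDs, creation/amlensing.py, and res/makecands.py); they are
rough Python sketches of the ideas referenced above, with made-up names,
epochs, and margins.

First, one possible way to derive the search region around a high proper
motion lens: propagate it linearly between two epochs and take a circle
covering the swept path plus a margin::

  import math

  def search_region(ra_deg, dec_deg, pmra_masyr, pmdec_masyr,
          epoch_start=2015.5, epoch_end=2066.0, margin_arcsec=10.0):
      """Return (center_ra, center_dec, radius_deg) of a circle that
      contains the lens path from epoch_start to epoch_end.

      pmra_masyr is assumed to include the cos(dec) factor, as in Gaia;
      the tangent-plane approximation breaks down near the poles.
      """
      years = epoch_end - epoch_start
      # end-point offsets in degrees (1 deg = 3.6e6 mas)
      d_ra = pmra_masyr*years/3.6e6/math.cos(math.radians(dec_deg))
      d_dec = pmdec_masyr*years/3.6e6
      # circle centred on the middle of the path
      half_path = math.hypot(pmra_masyr, pmdec_masyr)*years/3.6e6/2
      return (ra_deg + d_ra/2, dec_deg + d_dec/2,
          half_path + margin_arcsec/3600.)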
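
Second, the closest-approach ("impact") geometry for a lens/background
pair, assuming linear relative motion on the tangent plane and ignoring
parallax; units and argument names are again assumptions of this
sketch::

  import math

  def closest_approach(d_ra_mas, d_dec_mas, mu_ra_masyr, mu_dec_masyr, t0):
      """Return (epoch, separation_mas) of the closest approach.

      d_*: offset background minus lens at epoch t0, in mas (the RA
      offset already multiplied by cos(dec)); mu_*: relative proper
      motion (background minus lens) in mas/yr.
      """
      mu2 = mu_ra_masyr**2 + mu_dec_masyr**2
      if mu2 == 0:
          return t0, math.hypot(d_ra_mas, d_dec_mas)
      # the separation vector is d + mu*(t-t0); it is shortest where its
      # derivative vanishes, i.e. at dt = -(d . mu)/|mu|^2
      dt = -(d_ra_mas*mu_ra_masyr + d_dec_mas*mu_dec_masyr)/mu2
      sep = math.hypot(d_ra_mas + mu_ra_masyr*dt,
          d_dec_mas + mu_dec_masyr*dt)
      return t0 + dt, sep

For instance, an offset of (400, -290) mas at epoch 2015.5 with a
relative proper motion of (-80, 60) mas/yr gives a closest approach of
about 8 mas around epoch 2020.4.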