The eSTAR Prototype
The prototype network consisted of a number of autonomous telescopes, with associated rapid data-reduction pipelines, connected together using Globus middleware. Intelligent agents carried out resource discovery, submitted observing requests, and analysed the reduced data returned by the telescope nodes. The agents were capable of carrying out data-mining and cross-correlation tasks using online catalogues and databases and, if necessary, of requesting follow-up observations from the telescope nodes.
For the prototype we made use of off-the-shelf hardware: Meade LX200 and ETX telescopes with SBIG and Apogee CCD cameras served as telescope nodes. However, the telescope control and agent software was bespoke, written by the project to test an infrastructure that could then be applied to larger-scale projects.
Prototype Agent Software
The prototype Intelligent Agent (IA) software was designed for the long-term monitoring of dwarf novae. It cross-correlates a point-source catalogue returned by a telescope node with USNO-A2 (in real time) in an attempt to find candidate variable stars in the observed field. The IA then data mines SIMBAD for known variables matching the candidate stars.
If a candidate turns out to be a dwarf nova of interest to the user, the IA will request further follow-up observations from the network of telescope nodes.
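As a rough illustration, the cross-correlation step can be sketched as below (in Python, although the original eSTAR classes were written in Perl). The field names, the 3-arcsecond matching radius, and the simple rule of flagging unmatched sources as candidates are all assumptions for the sketch, not the project's actual algorithm:

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two (RA, Dec) positions."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp to guard against rounding slightly outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def cross_correlate(sources, catalogue, radius_deg=3.0 / 3600.0):
    """Pair each detected source with catalogue entries within radius_deg;
    sources with no catalogue counterpart are returned as candidates."""
    matched, candidates = [], []
    for src in sources:
        hits = [cat for cat in catalogue
                if angular_separation(src["ra"], src["dec"],
                                      cat["ra"], cat["dec"]) <= radius_deg]
        (matched if hits else candidates).append(src)
    return matched, candidates
```

A real implementation would also compare magnitudes between epochs to pick out variability, and would use a spatially indexed search rather than this quadratic scan.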
We have been pleasantly surprised by the speed of the cross-correlation stage, which normally completes in four to ten seconds, including retrieving data from three different online catalogues.
While the prototype agent was relatively simple, it was deliberately developed in a modular fashion to maximise code reuse, and could be trivially re-engineered to handle very different science goals.
The prototype project made use of Globus IO to transport RTML documents between the agents and the telescope nodes, with SSL encryption providing secure data transport. For the next-generation work, where we deployed the eSTAR software onto research-class telescopes, we discarded this approach and re-engineered the agent software to use a SOAP-based transport scheme.
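To make the transport change concrete, the sketch below shows one way an RTML document might be wrapped in a SOAP envelope for delivery. This is purely illustrative: the `handle_rtml` element name, the namespace layout, and the RTML fragment itself are placeholders, not the actual eSTAR interface.

```python
from xml.sax.saxutils import escape

def wrap_rtml_in_soap(rtml_document):
    """Embed an RTML payload, escaped as a string, in a minimal SOAP 1.1 envelope."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">\n'
        '  <soap:Body>\n'
        '    <handle_rtml>\n'
        f'      <document>{escape(rtml_document)}</document>\n'
        '    </handle_rtml>\n'
        '  </soap:Body>\n'
        '</soap:Envelope>\n'
    )

# Placeholder observing request, not a complete RTML document
rtml = '<RTML version="2.2" mode="request"><Telescope/></RTML>'
envelope = wrap_rtml_in_soap(rtml)
```

The practical advantage over raw Globus IO sockets is that a SOAP envelope like this can be posted over plain HTTPS, so any web-service-capable telescope node can accept requests without Globus libraries installed.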
Several testbed clients were written by the project to test the feasibility of the data-mining and data-correlation algorithms we were implementing, and were released as stand-alone pieces of software. The two main testbeds were known as the Field Correlation Client (FCC) and the Data Mining Client (DMC).
The FCC and DMC served as testbeds for the low-level and middleware Perl classes and wrappers underpinning the intelligent agent software. The DMC allowed the user to data mine various online archives and catalogues (such as SIMBAD), collating the available information for a specific target or for all targets near a specified RA and Dec. The FCC went further, allowing the user to cross-correlate a point-source catalogue with other catalogues, such as USNO-A2, queried online in real time by the client.
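The DMC's collation step can be sketched as follows (again in Python rather than the project's Perl). The catalogue names, record fields, and first-catalogue-wins merge policy are assumptions for the sketch, not the real archive responses or DMC logic:

```python
def collate(target_name, responses):
    """Merge per-catalogue result dicts for one target, recording which
    catalogue supplied each field and which catalogues knew the target."""
    summary = {"target": target_name, "found_in": [], "fields": {}}
    for catalogue, record in responses.items():
        if record is None:
            continue  # this catalogue had no entry for the target
        summary["found_in"].append(catalogue)
        for key, value in record.items():
            # First catalogue to supply a field wins; keep its provenance
            summary["fields"].setdefault(key, (value, catalogue))
    return summary
```

Keeping the provenance of each field alongside its value lets the agent later prefer, say, a SIMBAD object type over a positional-catalogue guess when deciding whether a candidate is a known variable.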