Traditional radio astronomy instrumentation relies on custom-built designs specialized for each science application, while traditional high-performance computing (HPC) uses general-purpose clusters and tools to parallelize each algorithm across a cluster. In real-time radio astronomy processing, a CPU/GPU cluster alone is insufficient: the high-bandwidth data received from a single antenna is typically digitized and initially processed on FPGAs, since it is infeasible to move the raw data stream into a single server. Choosing which platform to use for each part of an instrument is a growing challenge. With instrument specifications and platforms constantly changing as technology progresses, the design space for these instruments is unstable and often unpredictable. Furthermore, the astronomers designing these instruments may not be technology experts, and assessing tradeoffs between computing architectures such as FPGAs, GPUs, and ASICs, and determining how to partition an instrument across them, can prove difficult.

In this work, I present Optimal Rearrangement of Cluster-based Astronomy Signal Processing (ORCAS), a tool that automatically determines how to optimally partition a radio astronomy instrument across different types of hardware, based on a high-level description of the instrument and a set of benchmarks. In ORCAS, each function in the high-level instrument description is profiled on each candidate architecture. The architectural mapping is then performed by an optimization technique called integer linear programming (ILP): the ILP takes the function models and the cost model as input and uses them to determine which architecture is best for every function in the instrument. ORCAS judges optimality by a cost function and generates an instrument design that minimizes total monetary cost, power utilization, or another user-defined cost.
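The mapping problem described above can be illustrated with a toy sketch. All names and numbers below are hypothetical, not the actual ORCAS interface: each instrument function has a benchmark-derived cost on each architecture, and the goal is an assignment minimizing total cost. A real implementation would use an ILP solver; exhaustive search suffices for this tiny example.

```python
# Toy sketch of architecture mapping, assuming hypothetical per-function
# benchmark costs (units could be dollars, watts, or any user-defined cost).
from itertools import product

cost = {
    "channelize": {"FPGA": 3.0, "GPU": 7.0},
    "correlate":  {"FPGA": 9.0, "GPU": 4.0},
    "beamform":   {"FPGA": 6.0, "GPU": 5.0},
}

def best_mapping(cost):
    """Return (total_cost, {function: architecture}) minimizing total cost."""
    funcs = list(cost)
    arch_choices = [list(cost[f]) for f in funcs]
    best = None
    for assignment in product(*arch_choices):
        total = sum(cost[f][a] for f, a in zip(funcs, assignment))
        if best is None or total < best[0]:
            best = (total, dict(zip(funcs, assignment)))
    return best

total, mapping = best_mapping(cost)
print(total, mapping)
# -> 12.0 {'channelize': 'FPGA', 'correlate': 'GPU', 'beamform': 'GPU'}
```

An ILP formulation would instead introduce a binary variable per (function, architecture) pair, constrain each function to exactly one architecture, and minimize the same weighted sum, which scales far better than enumeration.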



