Description

Large Language Models (LLMs) have been shown to be highly effective at in-context learning: given a prompt, the model learns from the examples in the prompt and completes the sequence without any additional gradient steps or fine-tuning. In this project, we investigated the ability of Transformer models to perform in-context learning on linear dynamical systems. We first experimented with Transformers trained on a single system, where the evaluation task was to filter noise from trajectories sampled from that same system. We then experimented with Transformers trained on multiple systems of the same type, where the task was to perform simultaneous system identification and filtering. This is still very much a work in progress, and I hope to continue working on it in the coming weeks.
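The filtering task described above has a classical baseline: the Kalman filter, which is optimal for a known linear dynamical system with Gaussian noise. The sketch below shows what "filtering noise on trajectories" means concretely. It is a minimal illustration, not the project's actual setup; the system matrices `A`, `C` and the noise covariances `Q`, `R` are toy values assumed here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D stable linear dynamical system (illustrative assumption):
#   x_{t+1} = A x_t + w_t,   y_t = C x_t + v_t
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.eye(2)
Q = 0.01 * np.eye(2)  # process-noise covariance
R = 0.10 * np.eye(2)  # observation-noise covariance

def simulate(T):
    """Sample a length-T trajectory of latent states and noisy observations."""
    x = rng.normal(size=2)
    xs, ys = [], []
    for _ in range(T):
        x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
        y = C @ x + rng.multivariate_normal(np.zeros(2), R)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

def kalman_filter(ys):
    """Classical Kalman filter with known (A, C, Q, R): the baseline
    against which an in-context-learning Transformer would be compared."""
    x_hat = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for y in ys:
        # Predict step
        x_hat = A @ x_hat
        P = A @ P @ A.T + Q
        # Update step
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (y - C @ x_hat)
        P = (np.eye(2) - K @ C) @ P
        estimates.append(x_hat)
    return np.array(estimates)

xs, ys = simulate(200)
est = kalman_filter(ys)
mse_filter = np.mean((est - xs) ** 2)  # filtered-estimate error
mse_raw = np.mean((ys - xs) ** 2)      # raw-observation error
```

In the single-system setting, the Transformer sees a prefix of noisy observations in its prompt and is evaluated on how close its denoised predictions come to this Kalman baseline; in the multi-system setting, it must implicitly identify which system generated the prompt before filtering.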
