RAG in Finance: Cutting-Edge or Yesterday’s News?
In the early days of commercial LLMs, a model's capabilities were defined by its architecture and the training data it was built on. When you asked a chatbot a question, it would famously reply that its knowledge only extended to 2021. Chatbots could also process and remember only so much information in a single prompt: you had to be specific, clear, and consistent in your prompting if you wanted the same qualities in the output. Even then, what an LLM could generate was limited by the information it had access to.