Open
Description
The model should learn that if it is given text along with the question, it should find the answer within that text;
otherwise it should attempt to answer the question from its own knowledge.
This will lead to an alignment problem where the model attempts to answer questions it is not confident about.
We will need to think of a way to align the model later so that, when it is not confident in its answer, it says so, which triggers a
lookup/search process for scholarly articles related to the topic/question. A rough sketch of the intended routing is below.
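
The following is a minimal sketch of that routing, not an implementation: every function name, the confidence score, and the threshold value are placeholder assumptions standing in for the eventual model and search components.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Answer:
    text: str
    confidence: float  # hypothetical score in [0, 1] reported by the model


# Placeholder stubs for the eventual components.
def answer_from_context(question: str, context: str) -> Answer:
    """Extractive answering: find the answer span within the provided text."""
    return Answer(text="<span extracted from context>", confidence=0.9)


def answer_from_knowledge(question: str) -> Answer:
    """Closed-book answering from the model's own knowledge."""
    return Answer(text="<model's best guess>", confidence=0.4)


def scholarly_lookup(question: str) -> str:
    """Stand-in for the future search step over scholarly articles."""
    return "<passage retrieved from a scholarly source>"


CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff; would need to be tuned


def answer(question: str, context: Optional[str] = None) -> Answer:
    # If the question arrives with supporting text, answer from that text.
    if context is not None:
        return answer_from_context(question, context)

    # Otherwise try to answer directly, but fall back to a scholarly
    # lookup when the model is not confident in its own answer.
    candidate = answer_from_knowledge(question)
    if candidate.confidence < CONFIDENCE_THRESHOLD:
        retrieved = scholarly_lookup(question)
        return answer_from_context(question, retrieved)
    return candidate
```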
Metadata
Assignees
Labels
No labels