GitHub repo: github.com/VectifyAI/PageIndex
Is there a plan to allow local hosting with Ollama, Pinokio, or LM Studio?
Can this system explain its reasoning, and so explain its answer?
Yes. The explanation and the reasons for each section's relevance can be included in the search results and reflected in the answer.
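A minimal sketch of what this could look like, assuming an LLM is asked to select relevant nodes from a PageIndex-style table of contents and to justify each pick. The prompt format, the `reason` field, and the helper names here are illustrative assumptions, not the project's actual API; a canned model response stands in for a live call.

```python
import json

def build_selection_prompt(question, toc_nodes):
    # Hypothetical prompt: list the tree's nodes and ask the model to
    # return relevant ones along with a short justification for each.
    toc = "\n".join(f"[{n['id']}] {n['title']}" for n in toc_nodes)
    return (
        "Given this table of contents:\n" + toc +
        f"\n\nQuestion: {question}\n"
        'Return JSON: [{"id": ..., "reason": ...}] listing the relevant '
        "sections and why each one is relevant."
    )

def parse_selection(llm_output):
    # The model's stated reasons ride along with each selected node,
    # so they can be surfaced in the final answer.
    return {item["id"]: item["reason"] for item in json.loads(llm_output)}

# Example with a canned model response instead of a live call:
nodes = [{"id": "2.1", "title": "Fee Schedule"},
         {"id": "3.4", "title": "Termination Clauses"}]
prompt = build_selection_prompt("What fees apply on early exit?", nodes)
reasons = parse_selection(
    '[{"id": "2.1", "reason": "lists applicable fees"},'
    ' {"id": "3.4", "reason": "covers early termination terms"}]'
)
print(reasons["2.1"])  # → lists applicable fees
```

The point of the sketch is just that the justification is generated at retrieval time and carried alongside each hit, so the final answer can cite it.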
Looking through the repo and reading the docs, an LLM appears to be part of the implementation. LLMs cannot explain their reasoning, so if an LLM is involved, can the system as a whole really explain its reasoning, given that part of it is a black box? Reasoning can be explained up to the point where the LLM comes into play, and again afterwards for whatever is done with the LLM's output, but not for the step in between?
Can you explain your reasoning?
Makes perfect sense. Looking forward to trying this.
Great work, mate!