This is a beta module of the NID System that empowers its users to leverage the capabilities of generative AI. The feature uses a fully localized Large Language Model and vector embedding stores to generate answers to users' queries based on information available in Austria-Forum and the NID library. This creates a cutting-edge yet secure environment that encodes the semantic meaning and context of text, allowing LLMs to understand context and judge similarity when answering query prompts.
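The document does not name the embedding model or vector store NID-GPT uses, so purely as an illustration of the retrieval idea, the following sketch embeds a few sample passages and ranks them by cosine similarity against a query. It assumes the sentence-transformers library; the model name `all-MiniLM-L6-v2` and the sample texts are placeholders, not NID-GPT's actual components.

```python
# Minimal sketch of semantic retrieval: embed passages, then rank them
# by cosine similarity against a query. Model and texts are placeholders;
# NID-GPT's actual embedding scheme is still being evaluated.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model

passages = [
    "Sample passage about the NID library system.",
    "Sample passage about the Austria-Forum repository.",
    "Sample passage about vector stores and similarity search.",
]
query = "What is the NID library system?"

# Encode texts into dense vectors that capture semantic meaning.
doc_vecs = model.encode(passages, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, the dot product equals cosine similarity.
scores = doc_vecs @ query_vec
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {passages[idx]}")
```

Because the vectors carry semantic meaning rather than keywords, a query can match a passage even when they share no words; that is the property the retrieval layer relies on.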
At the moment we are testing various embedding schemes and LLMs for optimal results. We are also exploring how effectively the system performs on standard CPU servers, and the value that GPU-based NID hosting infrastructure would add.
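One simple way to compare CPU and GPU throughput for the embedding stage is to time batch encoding on each available device. The sketch below again assumes sentence-transformers (plus PyTorch for device detection); the model name and batch size are placeholders, not NID-GPT's settings.

```python
# Rough timing comparison of embedding throughput on CPU vs. GPU.
# Model name and batch size are placeholders for illustration only.
import time
import torch
from sentence_transformers import SentenceTransformer

texts = [f"sample passage number {i}" for i in range(256)]
devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])

for device in devices:
    model = SentenceTransformer("all-MiniLM-L6-v2", device=device)
    start = time.perf_counter()
    model.encode(texts, batch_size=32, show_progress_bar=False)
    elapsed = time.perf_counter() - start
    print(f"{device}: {len(texts) / elapsed:.1f} texts/sec")
```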
For now, only a limited set of document sources from NID is being added to the NID-GPT vector store; once the module matures, the functionality will be extended to the larger dataset available in the NID and Austria-Forum repositories.
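This document does not say which vector store NID-GPT uses. Purely to illustrate what the ingestion step looks like, the sketch below adds a small batch of documents to a local Chroma collection; the collection name, metadata fields, and sample documents are all invented for the example.

```python
# Illustrative ingestion into a local vector store (Chroma used here as
# a stand-in; NID-GPT's actual store is not specified in this document).
import chromadb

client = chromadb.Client()  # in-memory client for demonstration
collection = client.get_or_create_collection("nid_docs")  # invented name

docs = [
    "First sample document from the NID library.",
    "Second sample document from the Austria-Forum repository.",
]
collection.add(
    documents=docs,
    metadatas=[{"source": "NID"}, {"source": "Austria-Forum"}],
    ids=["doc-1", "doc-2"],
)

# A query embeds the question with the collection's default embedding
# function and returns the nearest stored documents.
results = collection.query(query_texts=["What is in the NID library?"],
                           n_results=2)
print(results["documents"])
```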
Searching
- To ask a question, type it in the question field, for example:
  What is the NID library system?
- Hit Enter on the keyboard or click the Go button
- Wait (please be patient!) while the LLM consumes the prompt and prepares the answer. Currently, the system's processing is offloaded to the local CPU and a modest onboard GPU; in the future, a more powerful GPU cluster will improve the system's response time and capabilities.
- Once done, it will print the answer and the 4 sources it used as context from your documents. You can then ask another question without re-running the script; just wait for the prompt again (the sketch after this list illustrates the loop).
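To make the flow behind these steps concrete, here is a hedged sketch of a retrieve-then-answer loop: fetch the 4 most similar chunks, stuff them into the prompt, and print the answer together with its sources. The `vector_store` and `llm` objects, and their `similarity_search` and `generate` methods, are hypothetical stand-ins; the real NID-GPT components are not exposed in this document.

```python
# Hypothetical sketch of the question-answering loop described above.
# `vector_store` and `llm` are stand-ins, not the actual NID-GPT objects.

def answer_loop(vector_store, llm):
    while True:  # ask another question without re-running the script
        question = input("Question: ").strip()
        if not question:
            continue
        # Retrieve the 4 most similar chunks to use as context.
        chunks = vector_store.similarity_search(question, k=4)
        context = "\n\n".join(chunk.text for chunk in chunks)
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )
        print(llm.generate(prompt))
        # Show which documents supplied the context, as the UI does.
        for chunk in chunks:
            print("Source:", chunk.source)
```

Grounding the prompt in retrieved context is what ties the answer back to NID and Austria-Forum documents, though, as the note below explains, it does not eliminate hallucinations.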
NOTE: GPT/RAG systems cannot guarantee correct answers; AI hallucinations can cause LLMs to generate responses that present false or misleading information as fact.