
Local LLM integration with LLM Internal Inference

Hello everyone,

I’m new to the XWiki community and I’m interested in LLM integration.

I don’t quite understand where the connection to the local LLM is made if you don’t want to use OpenAI. I understand that, when configuring the extension in the LLM Application, you are not supposed to enter a URL, but I don’t see how the connection is established in that case.
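
To make my question concrete, my current mental model is that the extension would end up talking to a locally hosted server exposing the OpenAI-compatible chat/completions API, roughly like the sketch below. The URL, port and model name are just placeholders for something like Ollama or LocalAI, not the extension’s actual configuration:

```python
import requests

# Placeholder local endpoint (e.g. Ollama or LocalAI exposing the
# OpenAI-compatible API); URL and model name are assumptions, not
# values taken from the LLM Application.
BASE_URL = "http://localhost:11434/v1"

response = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "mistral",
        "messages": [
            {"role": "user", "content": "Summarize this wiki page for me."}
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

Is that roughly what happens under the hood, or does “internal inference” mean the model runs inside XWiki itself without any such endpoint?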

What’s more, I don’t quite understand the embedding models. Are they there to increase the relevance of our prompt in the UI when using the LLM chosen in the chat? Like a shadow LLM that makes sure the user and the model understand each other?
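
My rough guess is that the embedding model is used to find wiki content related to the question and feed it to the chat model, something like this (pure pseudocode on my side; the helper names are made up, not part of the extension):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_relevant(question_vec: np.ndarray, page_vecs: list) -> int:
    """Index of the wiki page whose embedding is closest to the question.

    question_vec / page_vecs would come from whatever embedding model is
    configured; this is only an assumption about how they are used.
    """
    scores = [cosine_similarity(question_vec, v) for v in page_vecs]
    return int(np.argmax(scores))
```

Is that the idea, or do the embedding models serve another purpose here?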

Finally, my last question: can we use a model that doesn’t follow the OpenAI chat/completions format if we configure a Chat Request Filter?
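
What I imagine such a filter doing, very roughly, is translating the chat-style request into whatever the target model expects, for example flattening the messages into a single prompt for a completion-only model. This is just a sketch of my assumption, not the actual Chat Request Filter API:

```python
def chat_to_plain_prompt(messages: list) -> str:
    """Flatten OpenAI-style chat messages into one prompt string,
    which is what a completion-only model would expect."""
    lines = [f"{msg['role'].upper()}: {msg['content']}" for msg in messages]
    lines.append("ASSISTANT:")
    return "\n".join(lines)

prompt = chat_to_plain_prompt([
    {"role": "system", "content": "You answer questions about the wiki."},
    {"role": "user", "content": "Where is the admin section?"},
])
```

Is that the kind of translation a Chat Request Filter is meant for?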

Thanks in advance for your answers!
