Use Azure OpenAI chat-completion models (such as gpt-35-turbo, the model behind ChatGPT) to enrich your documents with summaries, table descriptions (based on the table schema), content classification, data extraction, and more. You design and configure the prompts, including whatever document metadata or content is needed to query for what you want.
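As a sketch of what prompt design can look like, the helper below assembles a chat-completion message list from a task instruction plus selected document metadata and content. The function name, roles, and truncation limit are illustrative assumptions, not part of the product configuration.

```python
def build_enrichment_messages(task, metadata, content, max_chars=4000):
    """Assemble chat-completion messages for a document-enrichment prompt.

    task     -- instruction, e.g. "Summarize this document in two sentences."
    metadata -- dict of document fields you chose to expose to the model
    content  -- extracted document text (truncated to max_chars)
    """
    context = "\n".join(f"{key}: {value}" for key, value in metadata.items())
    return [
        {"role": "system", "content": "You enrich documents for a search index."},
        {"role": "user", "content": f"{task}\n\n{context}\n\n{content[:max_chars]}"},
    ]

messages = build_enrichment_messages(
    "Summarize this document in two sentences.",
    {"title": "Q3 report", "author": "Finance"},
    "Revenue grew 12% quarter over quarter...",
)
```

The resulting list can be passed as the `messages` payload of an Azure OpenAI chat-completions request; the deployment name and endpoint come from your own Azure resource.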
You'll need to configure two workflow components in the following order:
Use Azure OpenAI models to generate vector embeddings of the content extracted from the documents.
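At query time, the generated embeddings are typically compared by cosine similarity. A minimal, dependency-free sketch of that comparison (the function name is ours, and real embedding vectors have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal vectors score 0.0.
same = cosine_similarity([1.0, 0.0], [1.0, 0.0])
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```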
You'll need to configure two workflow components in the following order:
Deploy on-premises Python models (BERT, MiniLM, T5, GTR-T5) to generate vector embeddings of the content extracted from the documents.
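Local embedding models are usually fed text in batches. The sketch below shows a simple batching helper; the commented `sentence-transformers` call and the `all-MiniLM-L6-v2` checkpoint name are assumptions about one common way to run a MiniLM model locally, not the product's own API.

```python
def batched(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

texts = ["segment one", "segment two", "segment three"]
batches = list(batched(texts, 2))

# With the sentence-transformers package installed, embedding each batch
# locally could look like this (model name is an illustrative assumption):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
#   vectors = model.encode(texts)
```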
You'll need to configure two workflow components in the following order:
Split the text from the documents into smaller segments; these segments can later be used for vector embedding generation.
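A minimal sketch of fixed-size splitting with overlap, a common strategy so that context straddling a segment boundary is not lost; the function name and parameter defaults are illustrative, not the component's actual configuration keys:

```python
def split_text(text, max_chars=500, overlap=50):
    """Split text into overlapping segments of at most max_chars characters."""
    segments = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        segments.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so adjacent segments share a window of text.
        start = end - overlap
    return segments

segments = split_text("a" * 1200, max_chars=500, overlap=50)
```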
You need to configure the following component:
Available from Aspire v5.2.