feat: added new notebooks of agents fast start #1175

Open · wants to merge 1 commit into main
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "StkY5oHGU-iN"
},
"source": [
"# LLMWare Model Exploration\n",
"\n",
"## This is the 'entrypoint' example that provides a general introduction to llmware models.\n",
"\n",
"This notebook provides an introduction to LLMWare Agentic AI models and demonstrates their usage."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"id": "KyaEnPzOVTJe"
},
"outputs": [],
"source": [
"# install dependencies\n",
"!pip3 install llmware"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mcOxXgs1XTjD"
},
"source": [
"If you have any dependency install issues, please review the README, docs link, or raise an Issue.\n",
"\n",
"The second script `\"welcome_to_llmware.sh\"` will install all of the dependencies.\n",
"\n",
"If using Windows, then use the `\"welcome_to_llmware_windows.sh\"` script."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "n4aKjcEiVjYE"
},
"outputs": [],
"source": [
"# Import Library\n",
"from llmware.models import ModelCatalog"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ePtRGBIlZEkP"
},
"source": [
"## GETTING STARTED WITH AGENTIC AI\n",
"All LLMWare models are accessible through the ModelCatalog, and accessing any model generally takes two steps:\n",
"\n",
"- Step 1 - load the model - pulls from the global repo the first time, then automatically caches locally\n",
"- Step 2 - use the model with inference or a function call"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"id": "D0xL5WOgVlGX"
},
"outputs": [],
"source": [
"# 'Standard' Models use 'inference' and take a general text input and provide a general text output\n",
"\n",
"model = ModelCatalog().load_model(\"bling-answer-tool\")\n",
"response = model.inference(\"My son is 21 years old.\\nHow old is my son?\")\n",
"\n",
"print(\"\\nresponse: \", response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"id": "1AkSZ3Z_VqWt"
},
"outputs": [],
"source": [
"# Optional parameters can improve results\n",
"model = ModelCatalog().load_model(\"bling-phi-3-gguf\", temperature=0.0, sample=False, max_output=200)\n",
"response = model.inference(prompt,add_context=text_passage)\n",
"\n",
"print(\"\\nresponse: \", response)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OuNEktB-aPVw"
},
"source": [
"## Models we have and support\n",
"Inference models can also be integrated into Prompts - which provide advanced handling for knowledge retrieval, managing source information, and fact-checking\n",
"- we do **include other popular models** such as `phi-3`, `qwen-2`, `yi`, `llama-3`, `mistral`\n",
"- it is easy to extend the model catalog to **include other 3rd party models**, including `ollama` and `lm studio`.\n",
"- we do **support** `open ai`, `anthropic`, `cohere` and `google api` models as well."
]
},
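{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged sketch of the 3rd-party extension point: an `ollama` model can be registered in the catalog and then loaded like any other model. The `register_ollama_model` signature, and the assumption of a local ollama server with `llama3` already pulled, may differ by llmware version - please verify against the llmware docs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: register a locally-served ollama model in the catalog\n",
"# assumes an ollama server is running locally with 'llama3' already pulled\n",
"ModelCatalog().register_ollama_model(model_name=\"llama3\")\n",
"\n",
"# once registered, it loads and runs like any other catalog model\n",
"model = ModelCatalog().load_model(\"llama3\")\n",
"response = model.inference(\"What is the capital of France?\")\n",
"print(\"\\nresponse: \", response)"
]
},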
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"id": "erEHenbjaYqi"
},
"outputs": [],
"source": [
"all_generative_models = ModelCatalog().list_generative_local_models()\n",
"print(\"\\n\\nModel Catalog - load model with ModelCatalog().load_model(model_name)\")\n",
" model_family = model[\"model_family\"]\n",
"\n",
" print(\"model: \", i, model)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tLCuxZcYdTHn"
},
"source": [
"## Slim Models\n",
"Slim models are 'Function Calling' models that perform a specialized task and output Python dictionaries\n",
"- by design, slim models are specialists that **perform single function**.\n",
"- by design, slim models generally **do not require any specific** `'prompt instructions'`, but will often accept a `\"parameter\"` which is passed to the function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"id": "1ZS2wo8zdDOd"
},
"outputs": [],
"source": [
"model = ModelCatalog().load_model(\"slim-sentiment-tool\")\n",
"response = model.function_call(\"That was the worst earnings call ever - what a disaster.\")\n",
"print(\"\\nresponse: \", response)\n",
"print(\"llm_response: \", response['llm_response'])\n",
"print(\"sentiment: \", response['llm_response']['sentiment'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ohp-shGjkDkz"
},
"outputs": [],
"source": [
"# here is one of the slim models applied against a common earnings extract\n",
"\n",
"response = model.function_call(text_passage,function=\"extract\",params=[\"revenue\"])\n",
"\n",
"print(\"\\nextract response: \", response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "w13LLW_OdxCm"
},
"outputs": [],
"source": [
"# Function calling models generally come with a test set that is a great way to learn how they work\n",
"# please note that each test can take a few minutes with 20-40 test questions\n",
"ModelCatalog().tool_test_run(\"slim-summary-tool\")\n",
"ModelCatalog().tool_test_run(\"slim-xsum-tool\")\n",
"ModelCatalog().tool_test_run(\"slim-boolean-tool\")"
]
},
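{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged sketch of using one of those tools directly: the boolean tool answers a yes/no question about a passage, optionally with an explanation. The exact `params` phrasing here is an assumption - the test runs above show the canonical patterns."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: direct use of the boolean tool - a yes/no question with '(explain)' appended\n",
"# the params phrasing is an assumption - compare with the tool_test_run output above\n",
"model = ModelCatalog().load_model(\"slim-boolean-tool\")\n",
"text_passage = \"The company expects revenue to grow 10% next quarter.\"\n",
"response = model.function_call(text_passage, params=[\"does the company expect revenue growth? (explain)\"])\n",
"print(\"\\nboolean response: \", response)"
]
},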
{
"cell_type": "markdown",
"metadata": {
"id": "kOPly8bfdnan"
},
"source": [
"## Agentic AI\n",
"Function calling models can be integrated into Agent processes, which orchestrate workflows comprising multiple models and steps - most of our use cases use function calling models in that context\n",
"\n",
"## Last note:\n",
"Most of the models are packaged as `\"gguf\"`, usually identified as GGUFGenerativeModel, or with `'-gguf'` or `'-tool'` at the end of their name. These models are optimized to run most efficiently on a CPU-based laptop (especially macOS). You can also try the standard PyTorch versions of these models, which should yield virtually identical results, but will be slower."
]
},
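{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch of that agent pattern - assuming the `LLMfx` agent class in `llmware.agents` with `load_work` / `load_tool` helpers (please verify the API against the llmware docs for your version):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: orchestrating multiple function-calling tools over one text with an agent\n",
"# assumes llmware.agents.LLMfx with load_work / load_tool / response_list - verify against the docs\n",
"from llmware.agents import LLMfx\n",
"\n",
"agent = LLMfx()\n",
"agent.load_work(\"That was a great quarter - revenue up 15% and strong customer growth.\")\n",
"agent.load_tool(\"sentiment\")\n",
"agent.load_tool(\"topics\")\n",
"\n",
"# each tool call runs its function-calling model against the loaded work\n",
"agent.sentiment()\n",
"agent.topics()\n",
"\n",
"# review the accumulated responses from each tool call\n",
"for entry in agent.response_list:\n",
"    print(\"agent response: \", entry)"
]
},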
{
"cell_type": "markdown",
"metadata": {
"id": "rvLVgWYMe6RO"
},
"source": [
"## The journey has just begun!\n",
"Enjoyed this? It is just one example of our models. Please check out our other Agentic AI examples, which cover every model in detail, here: https://github.com/llmware-ai/llmware/tree/main/fast_start/agents\n",
"\n",
"If you are also interested in RAG, please see our RAG examples here: https://github.com/llmware-ai/llmware/tree/main/fast_start/rag\n",
"\n",
"If you liked it, then please **star our repo https://github.com/llmware-ai/llmware** ⭐\n",
"\n",
"Questions? Join our **Discord server: https://discord.gg/GN49aWx2H3** 🫂"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}