NVIDIA

This will help you get started with NVIDIA models. For detailed documentation of all NVIDIA features and configurations head to the API reference.

Overview

The langchain-nvidia-ai-endpoints package contains LangChain integrations for building applications with models on the NVIDIA NIM inference microservice. These models are optimized by NVIDIA to deliver the best performance on NVIDIA accelerated infrastructure and are deployed as NIMs: easy-to-use, prebuilt containers that can be deployed anywhere with a single command on NVIDIA accelerated infrastructure.

NVIDIA hosted deployments of NIMs are available to test on the NVIDIA API catalog. After testing, NIMs can be exported from NVIDIA's API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, giving enterprises ownership and full control of their IP and AI applications.

NIMs are packaged as container images on a per-model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
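Once a NIM is running on your own infrastructure, the same client class can be pointed at it instead of the hosted API catalog. Below is a minimal sketch, assuming a NIM serving an OpenAI-compatible endpoint on localhost port 8000; the base_url and model name are illustrative, not prescriptive:

from langchain_nvidia_ai_endpoints import NVIDIA

# Point the client at a self-hosted NIM instead of the hosted API catalog.
# Both the URL and the model name below are placeholders; substitute the
# address and model of the NIM you actually deployed.
llm = NVIDIA(base_url="http://localhost:8000/v1", model="bigcode/starcoder2-15b")
print(llm.invoke("# Function that reverses a string in Rust:"))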

This example goes over how to use LangChain to interact with NVIDIA-supported models via the NVIDIA class.

For more information on accessing the LLMs available through this API, check out the NVIDIA documentation.

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| --- | --- | --- | --- | --- | --- | --- |
| NVIDIA | langchain_nvidia_ai_endpoints | ✅ | beta | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |

Setup

To get started:

  1. Create a free account with NVIDIA, which hosts NVIDIA AI Foundation models.

  2. Click on your model of choice.

  3. Under Input select the Python tab, and click Get API Key. Then click Generate Key.

  4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.

Credentials

import getpass
import os

if not os.getenv("NVIDIA_API_KEY"):
    # Note: the API key should start with "nvapi-"
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter your NVIDIA API key: ")

Installation

The LangChain NVIDIA AI Endpoints integration lives in the langchain_nvidia_ai_endpoints package:

%pip install --upgrade --quiet langchain-nvidia-ai-endpoints

Instantiation

See LLM for full functionality.

from langchain_nvidia_ai_endpoints import NVIDIA

llm = NVIDIA().bind(max_tokens=256)
llm
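You can also pin a specific model at construction time rather than relying on the default. A minimal sketch; the model name here is illustrative, and you can list the models actually available to you with get_available_models() (see Supported models below):

# The model name is a placeholder; pick one from NVIDIA.get_available_models().
llm = NVIDIA(model="bigcode/starcoder2-15b").bind(max_tokens=256)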

Invocation

prompt = "# Function that does quicksort written in Rust without comments:"
print(llm.invoke(prompt))

Stream, Batch, and Async

These models natively support streaming, and, as is the case with all LangChain LLMs, they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.

for chunk in llm.stream(prompt):
    print(chunk, end="", flush=True)

llm.batch([prompt])

await llm.ainvoke(prompt)

async for chunk in llm.astream(prompt):
    print(chunk, end="", flush=True)

await llm.abatch([prompt])

async for chunk in llm.astream_log(prompt):
    print(chunk)

response = llm.invoke(
    "X_train, y_train, X_test, y_test = train_test_split(X, y, test_size=0.1) #Train a logistic regression model, predict the labels on the test set and compute the accuracy score"
)
print(response)
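The await and async for examples above assume an environment with a running event loop, such as a Jupyter notebook. In a plain Python script, you would wrap them in a coroutine and run it with asyncio.run, roughly like this (reusing the llm and prompt defined above):

import asyncio

async def main() -> None:
    # Async single invocation
    print(await llm.ainvoke(prompt))
    # Async token-level streaming
    async for chunk in llm.astream(prompt):
        print(chunk, end="", flush=True)

asyncio.run(main())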

Supported models

Querying available_models will give you all of the models available with your API credentials.

NVIDIA.get_available_models()
# llm.get_available_models()
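For instance, to print only the model identifiers, you can iterate over the returned entries; in current releases of the package each entry exposes an id attribute, but check your installed version if this differs:

# Each entry describes one model; `id` is the name to pass to NVIDIA(model=...).
for model in NVIDIA.get_available_models():
    print(model.id)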

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
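Because the chain is itself a runnable, the streaming and async methods shown earlier work on it unchanged. For example:

for chunk in chain.stream(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
):
    print(chunk, end="", flush=True)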
API Reference: ChatPromptTemplate

API reference

For detailed documentation of all NVIDIA features and configurations head to the API reference: https://python.langchain.com/api_reference/nvidia_ai_endpoints/llms/langchain_nvidia_ai_endpoints.llms.NVIDIA.html

