JupyterLab 3 reached its end of maintenance date on May 15, 2024. As a result, we will not backport new features to the v1 branch supporting JupyterLab 3. Fixes for critical issues will still be backported until December 31, 2024. If you are still using JupyterLab 3, we strongly encourage you to upgrade to JupyterLab 4 as soon as possible. For more information, see JupyterLab 3 end of maintenance on the Jupyter Blog.
The jupyter_ai_magics package, which provides only the IPython magics, does not depend on JupyterLab or jupyter_ai. You can install jupyter_ai_magics without installing jupyterlab or jupyter_ai. If you have both jupyter_ai_magics and jupyter_ai installed, you should have the same version of each, to avoid errors.
Jupyter AI internally uses Pydantic v1 and should work with either Pydantic version 1 or version 2. For compatibility, developers using Pydantic v2 should import classes from the pydantic.v1 package. See the LangChain Pydantic migration plan for advice on how to use v1 and avoid mixing v1 and v2 classes in your code.
Jupyter AI supports a wide range of model providers and models. To use Jupyter AI with a particular provider, you must install its Python packages and set its API key (or other credentials) in your environment or in the chat interface.
The environment variable names shown above are also the names of the settings keys used when setting up the chat interface. If multiple variables are listed for a provider, all of them must be specified.
The first time you open the chat interface, Jupyter AI will ask you which models you want to use as a language model and as an embedding model. Once you have made your selections, the UI may display text boxes for one or more settings keys.
An embedding model is used when learning and asking about local data. These models can transform your data, including documents and source code files, into vectors that can help Jupyter AI compose prompts to language models.
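As a toy illustration of why those vectors are useful, retrieval can compare them with a similarity measure such as cosine similarity. This sketch is conceptual only: the vectors below are made up, and a real embedding model would produce much longer ones.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend vectors that an embedding model might return for a stored
# document chunk and for the user's question.
doc_vector = [0.12, 0.98, 0.05]
query_vector = [0.10, 0.95, 0.07]

score = cosine_similarity(doc_vector, query_vector)
# Higher scores mean the chunk is more relevant to the question, so the
# best-matching chunks can be included when composing the prompt.
```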
To compose a message, type it in the text box at the bottom of the chat interface and press ENTER to send it. You can press SHIFT+ENTER to add a new line. (These are the default keybindings; you can change them in the chat settings pane.) Once you have sent a message, you should see a response from Jupyternaut, the Jupyter AI chatbot.
The chat backend remembers the last two exchanges in your conversation and passes them to the language model. You can ask follow-up questions without repeating information from your previous conversations. Here is an example of a chat conversation with a follow-up question:
Jupyter AI supports language models hosted on SageMaker endpoints that use JSON schemas. The first step is to authenticate with AWS via the boto3 SDK and have the credentials stored in the default profile. Guidance on how to do this can be found in the boto3 documentation.
Request schema: The JSON object the endpoint expects, with the prompt being substituted into any value that matches the string literal "&lt;prompt&gt;". In this example, the request schema {"text_inputs":"&lt;prompt&gt;"} generates a JSON object with the prompt stored under the text_inputs key.
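The substitution can be pictured with a short sketch. Note that fill_schema below is a hypothetical helper written for illustration, not part of Jupyter AI's API, and the schema and prompt text are made-up examples.

```python
import json

def fill_schema(schema, prompt, placeholder="<prompt>"):
    """Return a copy of the schema with every value equal to the
    placeholder string replaced by the actual prompt text."""
    if isinstance(schema, dict):
        return {k: fill_schema(v, prompt, placeholder) for k, v in schema.items()}
    if isinstance(schema, list):
        return [fill_schema(v, prompt, placeholder) for v in schema]
    if schema == placeholder:
        return prompt
    return schema

# A request schema as it might be supplied in the settings.
request_schema = json.loads('{"text_inputs": "<prompt>"}')

# The prompt is substituted wherever the placeholder appears.
body = fill_schema(request_schema, "Translate 'hello' to French.")
# body == {"text_inputs": "Translate 'hello' to French."}
```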
Note that each model comes with its own license, and that users are themselves responsible for verifying that their usage complies with the license. You can find licensing details on the GPT4All official site.
, where &lt;model-url&gt; should be substituted with the corresponding URL hosting the model binary (within the double quotes). After restarting the server, the GPT4All models installed in the previous step should be available to use in the chat interface.
To get started, follow the instructions on the Ollama website to set up Ollama and download the models locally. To select a model, enter the model name in the settings panel, for example deepseek-coder-v2.
Especially if your prompt is detailed, it may take several minutes to generate your notebook. During this time, you can still use JupyterLab and Jupyter AI as you would normally. Do not shut your JupyterLab instance down while Jupyter AI is working.
Using the /learn command, you can teach Jupyter AI about local data so that Jupyternaut can include it when answering your questions. This local data is embedded using the embedding model you selected in the settings panel.
By default, /learn will not read directories named node_modules, lib, or build, and will not read hidden files or hidden directories, where the file or directory name starts with a period (.). To force /learn to read all supported file types in all directories, use the -a or --all-files option.
The /learn command also supports downloading and processing papers from the arXiv repository. You will need to install the arxiv Python package for this feature to work; run pip install arxiv to install it.
Use the /export command to export the chat history from the current session to a markdown file named chat_history-YYYY-MM-DD-HH-mm-ss.md. You can also specify a filename using /export &lt;file_name&gt;. Each export will include the entire chat history up to that point in the session.
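The default filename encodes the export time. The naming scheme can be reproduced with the standard datetime module; this is a sketch of the pattern, not Jupyter AI's own code, and the timestamp below is a fixed example value.

```python
from datetime import datetime

# Build a filename matching the chat_history-YYYY-MM-DD-HH-mm-ss.md pattern.
timestamp = datetime(2024, 5, 15, 13, 45, 30)  # fixed example time
filename = f"chat_history-{timestamp.strftime('%Y-%m-%d-%H-%M-%S')}.md"
# filename == "chat_history-2024-05-15-13-45-30.md"
```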
The /fix command can be used to fix any code cell with an error output in a Jupyter notebook file. To start, type /fix into the chat input. Jupyter AI will then prompt you to select a cell with error output before sending the request.
After this, the Send button to the right of the chat input will be enabled, and you can use your mouse or keyboard to send /fix to Jupyternaut. The code cell and its associated error output are included in the message automatically. When complete, Jupyternaut will reply with suggested code that should fix the error. You can use the action toolbar under each code block to quickly replace the contents of the failing cell.
Jupyter AI can also be used in notebooks via Jupyter AI magics. This section provides guidance on how to use Jupyter AI magics effectively. The examples in this section are based on the Jupyter AI example notebooks.
Once the extension has loaded, you can run %%ai cell magic commands and %ai line magic commands. Run %%ai help or %ai help for help with syntax. You can also pass --help as an argument to any line magic command (for example, %ai list --help) to learn about what the command does and how to use it.
The %%ai cell magic allows you to invoke a language model of your choice with a given prompt. The model is identified with a global model ID, which is a string with the syntax &lt;provider-id&gt;:&lt;local-model-id&gt;, where &lt;provider-id&gt; is the ID of the provider and &lt;local-model-id&gt; is the ID of the model scoped to that provider. The prompt begins on the second line of the cell.
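Conceptually, a global model ID splits at the first colon. The helper below is a sketch written for illustration, not how Jupyter AI parses IDs internally, and the example ID is only a plausible one.

```python
def parse_global_model_id(model_id: str) -> tuple:
    """Split 'provider-id:local-model-id' at the FIRST colon, so local
    model IDs that themselves contain colons stay intact."""
    provider_id, _, local_model_id = model_id.partition(":")
    return provider_id, local_model_id

provider, model = parse_global_model_id("anthropic:claude-2")
# provider == "anthropic", model == "claude-2"
```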
Jupyter AI also includes multiple subcommands, which may be invoked via the %ai line magic. Jupyter AI uses subcommands to provide additional utilities in notebooks while keeping the same concise syntax for invoking a language model.
Optionally, you can specify a provider ID as a positional argument to %ai list to get all models provided by one provider. For example, %ai list openai will display only models provided by the openai provider.
If your model ID is associated with only one provider, you can omit the provider ID and the colon from the first line. For example, because ai21 is the only provider of the j2-jumbo-instruct model, you can either give the full provider and model,
By default, Jupyter AI assumes that a model will output markdown, so the output of an %%ai command will be formatted as markdown by default. You can override this using the -f or --format argument to your magic command. Valid formats include:
Please review any code that a generative AI model produces before you run it or distribute it. The code that you get in response to a prompt may have negative side effects and may include calls to nonexistent (hallucinated) APIs.
Using curly brace syntax, you can include variables and other Python expressions in your prompt. This lets you execute a prompt using code that the IPython kernel knows about, but that is not in the current cell.
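The effect is similar to Python's own string formatting against the kernel's namespace. This is only a rough sketch of the idea, with made-up variable names; Jupyter AI's actual interpolation is implemented differently.

```python
# Variables the kernel already knows about, as if defined in earlier cells.
namespace = {"poem_topic": "the sea", "line_count": 4}

# A prompt containing curly-brace expressions, as you might type in an
# %%ai cell; the values are substituted before the prompt is sent.
prompt_template = "Write a {line_count}-line poem about {poem_topic}."
prompt = prompt_template.format(**namespace)
# prompt == "Write a 4-line poem about the sea."
```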
You can use the special In and Out lists with interpolation syntax to explain code located elsewhere in a Jupyter notebook. For example, if you run the following code in a cell, and its input is assigned to In[11]:
Jupyter AI also adds the special Err list, which uses the same indexes as In and Out. For example, if you run code in In[3] that produces an error, that error is captured in Err[3] so that you can request an explanation using a prompt such as:
Jupyter AI supports language models hosted on SageMaker endpoints that use JSON schemas. Authenticate with AWS via the boto3 SDK and have the credentials stored in the default profile. Guidance on how to do this can be found in the boto3 documentation.
All SageMaker endpoint requests require you to specify the --region-name, --request-schema, and --response-path options. The example below presumes that you have deployed a model called jumpstart-dft-hf-text2text-flan-t5-xl.
The --request-schema parameter is the JSON object the endpoint expects as input, with the prompt being substituted into any value that matches the string literal "&lt;prompt&gt;". For example, the request schema {"text_inputs":"&lt;prompt&gt;"} will submit a JSON object with the prompt stored under the text_inputs key.
This configuration allows specifying arbitrary parameters that are unpacked and passed to the provider class. This is useful for passing model tuning parameters that affect the model's response generation. This is also an appropriate place to pass in custom attributes required by certain providers/models.
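The unpacking works like Python keyword-argument expansion. ExampleProvider below is a made-up stand-in for a real provider class, and the parameter names are illustrative only:

```python
class ExampleProvider:
    """Stand-in for a provider class; real provider classes receive
    configured parameters as keyword arguments in the same way."""
    def __init__(self, temperature=1.0, max_tokens=None, **custom_attrs):
        self.temperature = temperature
        self.max_tokens = max_tokens
        # Unknown keys are collected here, like custom provider attributes.
        self.custom_attrs = custom_attrs

# Parameters from the configuration are unpacked into the constructor.
model_parameters = {"temperature": 0.2, "max_tokens": 512, "top_p": 0.9}
provider = ExampleProvider(**model_parameters)
```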
The second option is to drop it in a location that JupyterLab scans for configuration files. In this case, the file should be named jupyter_jupyter_ai_config.json. You can find these paths by running the jupyter --paths command and picking one of the paths from the config section.
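For illustration, such a file might look like the following. Treat the exact keys, the model ID, and the parameter name here as placeholders based on a typical setup; consult the configuration reference for the exact schema your version expects.

```json
{
    "AiExtension": {
        "model_parameters": {
            "anthropic:claude-2": {
                "temperature": 0.2
            }
        }
    }
}
```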