Setup AI Project and perform Chat Completion from VS Code

Exercise 1: Setup AI Project and perform Chat Completion from VS Code

* Focus on setting up the environment to build AI Agents.

* Steps involved:

- Configure an AI Project in Azure AI Foundry.
- Deploy a Large Language Model (LLM).
- Deploy Embedding Models for vector operations.
- Connect Visual Studio Code to the AI Project.

* Run a chat completion call to verify the setup.

Task 1 : Setting up the AI Project in Azure AI Foundry

Step 1 : On the Azure Portal page, in the Search resources box at the top of the portal, enter Azure AI Foundry (1), and then select Azure AI Foundry (2) under Services.




Step 2 : In the left navigation pane for the AI Foundry, select AI Hubs (1). On the AI Hubs page, click on Create (2) and select Hub (3) from the drop-down.




Step 3 : On the Create an Azure AI hub pane enter the following details under Basics (1) :


Subscription : Leave default subscription (2)

Resource Group : AgenticAI (3)

Region : eastus (4)

Name : ai-foundry-hub-1951715 (5)

Connect AI Services incl. OpenAI : Click on Create New (6).

Connect AI Services incl. OpenAI : Provide a name my-ai-service-1951715 (7).

Click on Save (8), followed by Next: Storage (9).





Step 4 : Click on the Review + Create tab, followed by Create.







Step 5 : Wait for the deployment to complete, then click on Go to resource.




Step 6 : On the Overview pane, click on Launch Azure AI Foundry. This navigates you to the Azure AI Foundry portal.


Step 7 : Scroll down and click on + New project on the Hub Overview.


Step 8 : Provide the project name as ai-foundry-project-1951715 (1), then click on Create (2).


Step 9 : Once the project is created, scroll down and copy the Project connection string, then paste it into Notepad or a secure location, as it will be required for upcoming tasks.




Task 2 : Deploying an LLM and embedding models


* The task involves deploying two models within an Azure AI Foundry project.

- Large Language Model (LLM) for AI-driven applications.
- Embedding Model for vector-based search and similarity operations.
- These deployed models will be used in upcoming labs.
- The embedding model will provide vector search capabilities in the project.

Step 1 : In your AI Foundry project, navigate to the My assets (1) section, then select Models + endpoints (2). Click Deploy model (3), and choose Deploy base model (4) to proceed.



Step 2 : On the Select a model window, search for gpt-4o (1), select gpt-4o (2), and then select Confirm (3).



Step 3 : On Deploy model gpt-4o window, select Customize.


Deployment Name: gpt-4o (1)

Deployment type: Global Standard (2)

Change the Model version to 2024-08-06 (Default) (3)

Change the Tokens per Minute Rate Limit to 200K (4)

Click on Connect and Deploy (5)



Step 4 : Click on Models + endpoints (1); there you can see the deployed gpt-4o (2) model.



Step 5 : Navigate back to the Azure Portal, search for Azure OpenAI (1), and select the Azure OpenAI (2) resource.


Step 6 : On the AI Foundry | Azure OpenAI page, select + Create (1), then select Azure OpenAI (2) to create an Azure OpenAI resource.


Step 7 : On Create Azure OpenAI page, provide the following settings and click on Next (6):

Subscription : Keep the default subscription (1)

Resource group : AgenticAI (2)

Region : East US (3)

Name : my-openai-service1951715 (4)

Pricing tier : Standard S0 (5)




Step 8 : Click on Next until the Review + submit tab appears.

Step 9 : On the Review + submit page, click on Create.



Step 10 : Wait until the deployment succeeds, then select Go to resource.


Step 11 : On the my-openai-service1951715 resource page, select Go to Azure AI Foundry portal.


Step 12 : In your AI Foundry project, navigate to the Shared resources section, then select Deployments (1). Click Deploy model (2), and choose Deploy base model (3) to proceed.




Step 13 : On the Select a model window, search for text-embedding-3-large (1), then select text-embedding-3-large (2) and select Confirm (3).


Step 14 : On Deploy model text-embedding-3-large window,

Deployment type: Select Standard (1)

Tokens per Minute Rate Limit: 120K (2)

Select Deploy (3) to deploy the model.



Step 15 : Click on Deployments (1); there you can see the deployed text-embedding-3-large (2) model.



Task 3: Install dependencies, create a virtual environment, and create an environment variables file

- Install the required dependencies. 
- Set up a virtual environment for an isolated development setup. 
- Create an environment variables (.env) file. 
- This setup ensures a controlled and consistent development environment. It also allows for secure management of configuration and sensitive settings for the AI project.

Step 1 : Open VS Code.

Step 2 : Click on File (1), then Open Folder.


Step 3 : Navigate to C:\LabFiles\Day-2-Azure-AI-Agents (1), select the azure-ai-agents-labs (2) folder and then click on Select folder (3).


Step 4 : Click on Yes, I trust the authors.


Step 5 : Click on the ellipsis (...) (1), then Terminal (2), and then New Terminal (3).


Step 6 : Make sure you are in the azure-ai-agents-labs project directory. Run the below PowerShell commands to create and activate your virtual environment:
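The commands themselves were not captured in the original post; a typical sketch for this step, assuming Python is on the PATH and the environment is named .venv, is:

```powershell
# Create a virtual environment named .venv in the project directory
python -m venv .venv

# Activate the virtual environment (PowerShell)
.venv\Scripts\Activate.ps1
```

If activation is blocked by the execution policy, running PowerShell with `Set-ExecutionPolicy -Scope CurrentUser RemoteSigned` first is a common workaround.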



Step 7 : Run the below PowerShell command. This installs all the required packages:
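The install command was not captured; assuming the lab repository ships a standard requirements.txt (the file name is an assumption), it likely is:

```powershell
# Install all packages listed in the lab's requirements file
pip install -r requirements.txt
```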



Step 8 :  



Step 9 : Run the below command to log into your Azure account.
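The command itself is missing from the capture; assuming the Azure CLI is installed, it is most likely:

```powershell
# Sign in to Azure; this opens a browser window for authentication
az login
```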


Step 10 : Select the odl_user_1951715@sandboxailabs1004.onmicrosoft.com user account to authorize.



Step 11 : Once the authorization is completed, navigate back to Visual Studio Code.


Step 12 : Open the Sample.env file and provide the necessary environment variables.


Navigate to the Azure AI Foundry portal, click on the gpt-4o (2) model in the Models + endpoints (1) section under My assets, then copy the Target URI (1) and Key (2) from the Endpoint details in the right pane and paste them into a notepad.



Step 13 : On the Sample.env file,

AIPROJECT_CONNECTION_STRING: Provide Project connection string value you have copied in step 9 of Task 1

CHAT_MODEL_ENDPOINT: Provide the Target URI of gpt-4o model you have copied in the previous step

CHAT_MODEL_API_KEY: Provide the Key value of gpt-4o model you have copied in the previous step

CHAT_MODEL: gpt-4o
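Based on the variables listed above, the completed Sample.env file should look roughly like this (placeholder values shown; substitute the values you copied earlier):

```
AIPROJECT_CONNECTION_STRING="<your-project-connection-string>"
CHAT_MODEL_ENDPOINT="<your-gpt-4o-target-uri>"
CHAT_MODEL_API_KEY="<your-gpt-4o-key>"
CHAT_MODEL="gpt-4o"
```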


Step 14 : Save changes to the Sample.env file.

Step 15 : Run the below PowerShell command. This creates your .env file:
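The exact command is not shown in the capture; a likely equivalent, assuming the .env file is simply a copy of the populated Sample.env, is:

```powershell
# Copy the populated Sample.env to the .env file the notebooks load
Copy-Item Sample.env .env
```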


Step 16 : Open the Lab 1 - Project Setup.ipynb file. This notebook guides you through setting up an AI Project in Azure AI Foundry, deploying an LLM and embedding models, and configuring VS Code connectivity. It also includes a simple chat completion API call to verify the setup. Running it ensures that your environment is correctly configured for developing AI-powered applications.

Lab 1 - Project Setup.ipynb 

Step 17 : Select the Select kernel (1) setting available in the top right corner and select Install/enable selected extensions (python+jupyter) (2).


Step 18 : Select Python Environments to ensure that Jupyter Notebook runs in the correct Python interpreter with the necessary dependencies installed.


Step 19 : Select venv (Python 3.x.x) from the list as this version is likely required for compatibility with Azure AI Foundry SDK and other dependencies.


Step 20 : Run the first cell to import necessary Python libraries for working with Azure AI services.

# Import packages
import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv

load_dotenv() # Load environment variables from .env file


Step 21 : Run the below cell to retrieve the project connection string and model name from environment variables. These values are needed to interact with the Large Language Model (LLM) securely, without hardcoding sensitive information.

# Get the project connection string and model from environment variables, which are needed to make a call to the LLM
project_connection_string = os.getenv("AIPROJECT_CONNECTION_STRING")
model = os.getenv("CHAT_MODEL")


Step 22 : Run the below cell to connect to your Azure AI Foundry project using the connection string. This establishes a secure connection with AIProjectClient, enabling interactions with your project resources.
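The cell itself is not reproduced here; based on the imports in Step 20 and the variables from Step 21, it likely resembles the following sketch using the azure-ai-projects from_connection_string factory:

```python
# Connect to the Azure AI Foundry project using the connection string
# and the default Azure credential (picked up from the earlier `az login`)
project = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=project_connection_string,
)
```

The resulting `project` client is the one used in the next cell to obtain a chat completions client.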



Step 23 : Run the below cell to interact with the gpt-4o model through your Azure AI Foundry project. This code initializes a chat client, sends a request for a joke about a teddy bear, and prints the response. Review the output returned by the chat model.


# Chat with the gpt-4o model
chat = project.inference.get_chat_completions_client()
response = chat.complete(
    model=model,
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant that tells jokes for toddlers.",
        },
        {"role": "user", "content": "Hey, can you tell a joke about teddy bear?"},
    ],
)

print(response.choices[0].message.content)




























































































































































