MCP (Model Context Protocol)



1. Why Do We Need MCPs? 

- LLMs, like ChatGPT, can only generate content: text, images, audio, or video. 
- They cannot perform actions on their own, such as booking a flight, updating a database, or calling an API. 
- To perform actions, we need AI agents. 
AI agents interact with: 
- Third-party platforms (APIs)
- Internal databases
- User preferences
- LLMs for decision-making 


2. What Is an AI Agent? 

- AI agents are similar to automation scripts, such as VMware Orchestrator workflows, Zapier zaps, or Python scripts. However, they can also reason using LLMs. 
They: 
- Process user input with an LLM
- Decide which tools or APIs to use
- Interact with platforms, like flights, hotels, and databases
- Make decisions in loops
- Stop only when the task is complete
- Use memory to store user preferences 
- Essentially, old automation scripts combined with LLM reasoning create an AI agent. 

3. How AI Agent Workflow Works (Example: Flight Booking) 

- User: “I want to fly to North London.” 
- Agent → LLM: “What does the user mean?” 
- LLM: Destination = London 
- Agent → LLM: “Which tools or APIs should I call?” 
- Agent calls airline APIs and retrieves flight data. 
- Agent fetches user preferences from the database. 
- Agent → LLM: “Pick the best flight.” 
- LLM chooses one. 
- Agent books the flight and sends details to the user. 
- Agent and LLM interact multiple times during the workflow. 
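The loop above can be sketched in a few lines of Python. Everything here is illustrative: the function names, the mock LLM, and the mock tools are stand-ins, not a real agent framework.

```python
# Minimal sketch of the flight-booking agent loop described above.
def interpret_request(llm, user_message):
    # Agent -> LLM: "What does the user mean?"
    return llm(f"Extract the destination from: {user_message}")

def run_agent(llm, tools, user_message):
    destination = interpret_request(llm, user_message)
    flights = tools["search_flights"](destination)        # call airline APIs
    prefs = tools["get_preferences"]()                    # fetch user preferences
    choice = llm(f"Pick the best of {flights} given {prefs}")  # LLM decides
    return tools["book_flight"](choice)                   # agent performs the action

# Mock LLM and tools so the sketch runs end-to-end.
mock_llm = lambda prompt: "LON-1" if "Pick" in prompt else "London"
mock_tools = {
    "search_flights": lambda dest: ["LON-1", "LON-2"],
    "get_preferences": lambda: {"seat": "aisle"},
    "book_flight": lambda flight_id: f"booked {flight_id}",
}
print(run_agent(mock_llm, mock_tools, "I want to fly to North London"))
# -> booked LON-1
```

Note how the agent calls the LLM twice (interpretation, then selection), matching the multi-turn interaction described above.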

4. How Do Agents Interact With Platforms? → Tools 

- Agents use tools to call third-party APIs. 
Example (from pseudocode): 
- Airline A: /api/flights 
- Airline B: /flights/list 
- Airline C: /listFlights 
Each API is different, so the agent must know: 
- Different endpoints 
- Different response formats 
- Different input fields 
- Doing this manually is difficult. AI should manage it automatically. 
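To make the pain concrete, here is a hedged sketch: three made-up airline clients returning the same data in different shapes, and the per-provider glue code an agent would otherwise need. All names and response formats below are invented for illustration.

```python
# Three hypothetical airline APIs with different shapes.
def airline_a(origin, dest):                  # GET /api/flights
    return {"flights": [{"from": origin, "to": dest, "price": 120}]}

def airline_b(origin, dest):                  # GET /flights/list
    return {"data": {"results": [[origin, dest, 95]]}}

def airline_c(origin, dest):                  # GET /listFlights
    return [{"route": f"{origin}-{dest}", "fare": 140}]

# Without a standard protocol, the agent needs one adapter per provider.
def normalize(provider, origin, dest):
    if provider == "a":
        f = airline_a(origin, dest)["flights"][0]
        return {"origin": f["from"], "dest": f["to"], "price": f["price"]}
    if provider == "b":
        o, d, p = airline_b(origin, dest)["data"]["results"][0]
        return {"origin": o, "dest": d, "price": p}
    first = airline_c(origin, dest)[0]
    o, d = first["route"].split("-")
    return {"origin": o, "dest": d, "price": first["fare"]}

print(normalize("b", "LAX", "JFK"))
# -> {'origin': 'LAX', 'dest': 'JFK', 'price': 95}
```

Multiply these adapters by 50 airlines and the maintenance burden is obvious; this is exactly the glue code MCP standardizes away.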

5. What Makes This Easy? – MCP (Model Context Protocol) 

MCP provides: 
- A standard method for AI agents to discover and use APIs 
- A common protocol for tools to expose capabilities 
- Consistent structure, naming, inputs, and outputs 
- In simple words, MCP tells AI agents how to use APIs correctly, with the right context. 
Example: Instead of manually coding 50 airline connectors, the AI agent can read the MCP description and automatically understand how to interact with each one. 

6. MCP Architecture (High-Level) 

MCP uses a client-server structure: 

MCP Client 
- Integrated inside AI agents (IDEs, apps) 
- Communicates with MCP servers 

MCP Server 
- Exposes tools, APIs, and databases 
- Built by companies or developers 
Agent → MCP Client → MCP Server → Third-party platform 


7. Where Are MCP Clients Used? 

MCP clients are embedded in AI-powered development tools, like: 
- Cursor IDE 
- Windsurf 
- Claude Code 
- Claude Desktop 

They help agents: 
- Access browser logs 
- Access HTML elements 
- Access local databases 
- Debug UI issues 
- Analyze backend and frontend code 
- Investigate data issues 

8. Real-World MCP Use Cases 

1) Web Development 

MCP servers provide AI agents access to: 
- Browser console logs 
- DOM elements 
This helps debug UI issues automatically. 

2) Data Engineering 

With MCP servers for: 
- Stripe 
- BigQuery 
- Data Studio 

AI agents can: 
- Analyze data 
- Find missing transactions 
- Trace root causes across datasets 

9. Who Builds MCP Servers? 

- Companies build their own MCP servers so AI agents can use their APIs. 
- Community members also create MCP servers, but use unofficial ones with caution. 
Anyone can build an MCP server as long as they understand: 
- The platform's APIs 
- MCP specifications 
The GitHub MCP repository lists official integrations.

Lab

Opening and Configuring Roo-Code in VSCode  

* Opening Roo-Code in VSCode  

- Open VSCode.  
- Look at the left sidebar.  
- Click the kangaroo icon.  
- This opens the Roo-Code AI assistant interface panel.  
Roo-Code helps with:  
- Coding tasks  
- Answering questions  
- Assisting with development directly inside VSCode  

* If Roo-Code is not automatically configured, import the required API settings  

- Click Import Settings at the bottom of the Roo-Code setup panel.  
- In the file picker, navigate to:  
- /home/lab-user/.roo-coder/profiles/default/settings.json  
- Select the file.  
- Roo-Code will load the saved profile and automatically apply the settings.



Roo-Code Interface & "kodekey-profile"  

* What you should see at the bottom of Roo-Code  

- The Roo-Code interface is open.  
- "kodekey-profile" is displayed; this is the active profile.  
- The AI model is set to openai/gpt-4.1-mini.  

* What is "kodekey-profile"?  

A pre-configured Roo-Code profile that includes:  
- Your OpenAI API credentials  
- AI model configuration  
- Connection settings for the AI service  

* Additional Details  

- The "kodekey-profile" automatically loads temporary API keys.  
- These keys are valid for 1 hour.  
- You can view the keys using:  
- cat ~/.bash_profile  
- No manual setup is required; everything is already configured and ready to use.


Open a new terminal.

Run `cat ~/.bash_profile` to view the credentials.


Let's check if it's working correctly.

In the Roo-Code chat interface, type a simple message like:
Hello! Can you help me understand what this lab environment includes?

output :

The current lab environment contains a workspace directory at /home/lab-user with no files present. There are no active files or projects loaded in this environment.


or ask :
What programming languages and tools are available in this environment?

Output : 
In architect mode, I am restricted from executing commands or making code changes directly.

I cannot execute system commands to check installed language runtimes or tools.

Since the workspace has no files, I cannot infer language/tool usage from existing project files.

Therefore, the best approach in architect mode is to ask you directly which programming languages or tools you expect or want available in this environment. Alternatively, you might consider switching to a mode that allows me to run commands to detect these if you want automated checks.

I will ask you a clarifying question about which languages and tools you want to check or expect here.


Now that Roo-Code is set up, try exploring some of its capabilities:

- Ask coding questions: "How do I create a Python function?"

- output : 


To create a Python function, follow these steps:

1. Use the `def` keyword followed by the function name and parentheses `()` for parameters.

2. End the function header with a colon `:`.

3. Write the function body indented under the header.

4. Optionally, use the `return` statement to return a value from the function.

Example 1: Simple function with no parameters and no return value:

```python
def greet():
    print("Hello, world!")
```

Example 2: Function with parameters and a return value:

```python
def add_numbers(a, b):
    return a + b
```

Key points:

- Function names should be descriptive and follow Python naming conventions (lowercase, underscores).

- Indentation is critical for defining the function body.

- Use parameters to pass information into functions.

- Use `return` to output results from the function.

This is the standard way to create and use functions in Python.

- Request explanations: "Explain what JSON is and how it's used"

- Get help with tasks: "Can you help me write a simple bash script?"

- Environment exploration: "What files and directories are in the current workspace?"

Note: You now have Roo-Code fully configured! In advanced labs, you'll explore MCP server integration and other advanced capabilities.


Request explanations: "Explain what JSON is and how it's used"



Output : 

JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write, and easy for machines to parse and generate. It is text-based and language-independent, making it widely used in data exchange between servers and web applications.

JSON represents data as key-value pairs inside curly braces `{}`, arrays inside square brackets `[]`, and supports data types like strings, numbers, booleans, null, objects, and arrays.

Example of JSON:
```json
{
  "name": "Alice",
  "age": 30,
  "is_student": false,
  "courses": ["Math", "Science"],
  "address": {
    "city": "New York",
    "zip": "10001"
  }
}
```

In programming, JSON is commonly used to:
- Exchange data between a web server and a client (e.g., REST APIs).
- Store configuration or settings.
- Serialize and deserialize structured data in a format easy to transmit or save.

In Python, the `json` module allows you to convert Python objects to JSON strings (`json.dumps`) and parse JSON strings into Python objects (`json.loads`).

JSON is essential for interoperable data exchange across different systems and languages.


- Get help with tasks: "Can you help me write a simple bash script?"


Output : 
Could you please specify what task or functionality the bash script should accomplish?


* Environment exploration: "What files and directories are in the current workspace?"


Output : 
There are no files or directories present in the current workspace directory (/home/lab-user). The workspace is currently empty.



Components Breakdown of MCP (Model Context Protocol)  

* What is MCP?  

- MCP stands for Model Context Protocol.  
- Model refers to the AI model (LLMs).  
- Context means the information or resources provided to the AI.  
- Protocol is a set of rules or standards.  
- MCP is a standardized method for AI applications to communicate and work together.  

* What does MCP define?  

The MCP specification (found at modelcontextprotocol.io) defines rules such as:  
- Communication must use JSON-RPC.  
- The connection must be stateful.  

- MCP servers may provide any of the following:  
  - Resources  
  - Prompts  
  - Tools  


MCP clients may provide:  
- Roots  
- Sampling  
- Elicitation  


* Main Idea  

- Anyone can create an MCP server as long as they follow the MCP standards.  
- Any MCP-compliant server can connect with any MCP-compliant client.


MCP Architecture & How MCP Works  

1. Imagine No MCP Server Exists  

To understand MCP, first picture a world without it. A client would need:  
- API documentation  
- Knowledge of what the server can do  
- Instructions on how to call APIs directly  

MCP solves this by standardizing all this information.  

2. MCP Server Components  

A. Tools  

These represent the server capabilities, such as APIs. The server must list tools in a specific format. Each tool includes:  
- Description  
- Input schema  
- Output schema  
- Tools show “What the server can do.”  

B. Resources  

These are extra data the AI may need, like refund policies, guides, and FAQs. Resources must include:  
- URI (https, file path, git location)  
- Name  
- Title  
- Description  
Resources provide “Information the AI may use for decisions.”  

C. Prompts  

The server can define recommended prompts to help the AI call tools correctly. Each prompt includes:  
- Name  
- Title  
- Description  
- Arguments  
Prompts serve as “Optimized instructions for the LLM.”  


3. Communication: JSON-RPC  

What is JSON-RPC?  
- JSON is a data format. RPC stands for Remote Procedure Call. Together, they create a protocol that defines how clients call remote methods.  

Client Request Structure

```json
{
  "jsonrpc": "2.0",
  "method": "add",
  "params": { … },
  "id": 1
}
```

Server Response Structure

```json
{
  "jsonrpc": "2.0",
  "result": …,
  "id": 1
}
```

- On failure, the response carries an "error" object instead of "result".
- JSON-RPC itself is a stateless protocol and does not define a transport; that choice is left to developers.  

MCP supports the following transports:  
- HTTP  
- Standard I/O (stdin/stdout)  
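As a transport-agnostic illustration, here is a tiny JSON-RPC 2.0 dispatcher in Python. The `add` method mirrors the hypothetical example request above; a real MCP implementation layers its own methods on top of this pattern.

```python
import json

def handle_request(raw, methods):
    """Dispatch one JSON-RPC 2.0 request string to a local method table."""
    req = json.loads(raw)
    result = methods[req["method"]](**req["params"])
    # Echo back the id so the caller can match the response to its request.
    return json.dumps({"jsonrpc": "2.0", "result": result, "id": req["id"]})

methods = {"add": lambda a, b: a + b}
request = json.dumps({"jsonrpc": "2.0", "method": "add",
                      "params": {"a": 2, "b": 3}, "id": 1})
print(handle_request(request, methods))
# -> {"jsonrpc": "2.0", "result": 5, "id": 1}
```

Because the protocol is just JSON strings in and out, the same handler works whether the bytes arrive over HTTP or stdin/stdout.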

4. Using an Existing MCP Server  

Old Way  
- You would write code to call APIs directly.  

New MCP Way  
- You can query the MCP server to see its capabilities and tools. Then, you make tool calls, and the MCP server handles the API interactions.  

5. Where MCP Servers Can Run  

- Local MCP server  
- You can run it on your computer and connect via std I/O or HTTP.  
- Remote MCP server  
- This is hosted by a vendor or organization, and you can connect via HTTP.  
You need to handle:  
- Authentication  
- Authorization  
- Privacy  
- Trust  

6. Configuring MCP in IDEs (Cursor / Windsurf / Claude Code)

The mcp.json configuration contains:  
- A list of MCP servers  
- Commands and arguments to start the server  
- Optional environment variables  

Workflow:  

- The IDE starts the server via Standard I/O. The IDE lists the tools, and you can chat with the server.  

7. Running Your Own MCP Server (HTTP Mode)  

- You can start the server manually and point the client to its URL.  
Example:  

```json
{
  "url": "http://localhost:3333"
}
```

8. Remote Server by Vendor  

- You just change the URL to the vendor server and ensure you have security measures in place.  

9. Connecting an AI Agent/App via Python SDK  

- You can use the official MCP Python SDK to build interactions directly into your app.
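A minimal sketch of such a client, assuming the official `mcp` Python SDK's documented stdio usage (`ClientSession`, `stdio_client`); the launch command and `server.py` are placeholders matching the lab's server, and the import is guarded so the sketch stays loadable without the SDK.

```python
import asyncio

try:  # official MCP Python SDK; skip gracefully if not installed
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client
    HAVE_MCP = True
except ImportError:
    HAVE_MCP = False

async def list_server_tools():
    # Launch the server as a subprocess and talk to it over stdin/stdout.
    params = StdioServerParameters(command="uv",
                                   args=["run", "python", "server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()            # MCP handshake
            tools = await session.list_tools()    # discover server capabilities
            return [tool.name for tool in tools.tools]

# Usage (requires the SDK installed and server.py present):
# print(asyncio.run(list_server_tools()))
```

The same session object also exposes `call_tool`, resource reads, and prompt retrieval, which the later client labs exercise one by one.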


Lab : Using MCP Server  

* Connecting MCP Server to Roo-Code (Using STDIO Transport)  
1. Configure the MCP Server in Roo-Code  
Steps  

Open Roo-Code in VSCode.  

Click MCP Servers.  


Click Edit Project MCP.



Replace the contents with:  

```json
{
  "mcpServers": {
    "flight-booking": {
      "command": "uv",
      "args": ["run", "python", "server.py"],
      "cwd": "/home/lab-user/flight-booking-server"
    }
  }
}
```


  2. Activate and Test the MCP Server  
Steps  

Save the file.  

Click Refresh MCP Servers.  


Make sure "flight-booking" appears in the server list.  


Click Done to return to chat. 




3. Test Commands for the MCP Server  

Try these commands inside Roo-Code:  

Search flights:  
“Search for flights from LAX to JFK using the flight-booking server”
Output :
The user wants to search for flights from LAX to JFK using the connected MCP server named "flight-booking". The flight-booking MCP server provides a tool named `search_flights` which requires two parameters: origin and destination.

I have sufficient information to proceed with the search using origin "LAX" and destination "JFK" on the flight-booking MCP server.

{
  "origin": "LAX",
  "destination": "JFK"
}



Get airport info:  
“Get airport information using the flight-booking MCP server”
Output :
The user denied the previous flight search operation and instead requested to get airport information using the flight-booking MCP server.

According to the MCP server details:
- The flight-booking MCP server exposes direct resources for airports: file://airports/ (get_airports).
I can use the access_mcp_resource tool to get this airport information.

I will request the list of available airports from the flight-booking server.



Book a flight:  
“Book flight FL123 for John Doe using flight-booking server”
Output :
The user wants to book a flight with ID "FL123" for passenger "John Doe" using the flight-booking MCP server.

The flight-booking MCP server provides a "create_booking" tool that requires "flight_id" and "passenger_name" as parameters.

I have all required parameters to proceed with the booking operation.

{
  "flight_id": "FL123",
  "passenger_name": "John Doe"
}
  




How to Build an MCP Server (Model Context Protocol Server)

1. MCP Server Components

An MCP server can provide three kinds of capabilities:

1) Resources

Information that the AI uses for decisions:
- Airport list and details
- Flight status
- Seat maps
- Weather at destinations
- Booking information
- Gate information
- Policies
- Loyalty programs

2) Tools

Actions the server can perform:
- Search flights
- Get flight details
- Create or edit bookings
- Check-in
- Seat selection
- Add baggage

3) Prompts

Pre-written prompts that guide the AI:
- Find best flight
- Optimize budget
- Handle disruptions


2. SDK for Building an MCP Server

Official SDKs: modelcontextprotocol.io, SDK section

Example: Python SDK (FastMCP)


3. Basic Structure for Building an MCP Server

Step 1: Initialize the server

```python
# Official MCP Python SDK (installed via: uv add "mcp[cli]")
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("flight-booking")
```

Step 2: Define Resources

Resources = Functions, decorated with @mcp.resource plus a URI

```python
@mcp.resource("file://airports")
def get_airport_info():
    return {...}
```

Step 3: Define Tools

Tools = Functions, decorated with @mcp.tool

```python
@mcp.tool()
def search_flights(origin, destination, date):
    return results
```

Step 4: Define Prompts

Prompts = Developer-defined templates, decorated with @mcp.prompt

```python
@mcp.prompt()
def best_flight_prompt():
    return "Your custom instruction prompt"
```

4. Running the MCP Server

You choose the transport method:
Transport Options
- Standard I/O (stdio): Best for local IDEs like Roo-Code or Cursor
- HTTP: Useful for remote servers

Examples

Standard IO:

```python
mcp.run(transport="stdio")
```

HTTP (streamable):

```python
mcp.run(transport="streamable-http")
```

In the official SDK, host and port are passed to the FastMCP constructor (for example, FastMCP("flight-booking", host="0.0.0.0", port=3000)) rather than to run().

5. Stateful vs Stateless Server

- Stateful (default): Keeps data in memory across requests
- Stateless: Use stateless_http=True when creating the server


Using the MCP Inspector

* Purpose

- The MCP Inspector is a tool that tests an MCP server without the need to create a full client. It offers a visual interface to explore prompts, tools, and resources provided by your MCP server.

* How to Start the MCP Inspector

Run the following command in your terminal:
- npx @modelcontextprotocol/inspector
This command:
- Starts a local web server.
- Outputs a URL.
- Open that URL in your browser to access the inspector interface.

* MCP Inspector Interface

- Once opened, the inspector provides:
- Server URL Input
- Enter the URL of your MCP server to connect.
- Right-Side Navigation Panel
View available:
- Prompts
- Tools
- Resources
- This helps you test everything your MCP server offers.

Lab : Building an MCP Server

* What is UV?  

Definition : 

- UV is a fast, modern Python package and project manager.  
- It is written in Rust.  
- It replaces tools like pip, virtualenv, and poetry with a single tool.  

* Why UV is Used for MCP Development  

- It is recommended by MCP documentation.  
- It is extremely fast, 10 to 100 times faster than pip for resolving dependencies.  
- It includes project management.  
- It manages dependencies, virtual environments, and Python versions.  
- It uses modern Python standards.  
- It relies on pyproject.toml (PEP 518).  
- Built in Rust, it offers high speed and reliability.  

* UV vs Traditional Python Tools  

| Traditional Tool | UV Equivalent |
| --- | --- |
| pip install package | uv add package |
| python -m venv env | uv init project |
| pip install -r requirements.txt | uv sync |

* Why UV Is Great for Learning  

- It simplifies project setup.  
- It removes the complexity of virtual environments and dependencies.  
- It allows you to focus on MCP concepts instead of tools. 

* Setting Up an MCP Project : 

Open a terminal in VSCode if one is not already open (Terminal → New Terminal)


* Create a new MCP project using uv

cd /home/lab-user
uv init flight-booking-server
cd flight-booking-server
uv add "mcp[cli]"

* Why these commands?

- uv init creates a proper Python project with pyproject.toml
- mcp[cli] includes both MCP SDK and development tools (MCP Inspector)

- This follows the official MCP development workflow







Open the server.py file in your project directory
Find the get_airports() function
Add the correct MCP resource decorator above it
Resources provide read-only data access to AI systems
Important: The resource must be defined with the URI scheme file://airports - MCP resources require proper URI schemes to be correctly identified by AI systems.



Find the search_flights() function in server.py
Find the create_booking() function in server.py
Add the correct MCP tool decorator above each function
Tools perform actions and can accept parameters




Find the find_best_flight() function in server.py
Find the handle_disruption() function in server.py
Add the correct MCP prompt decorator above each function
Prompts provide templates for AI interactions
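Putting the three decorator tasks together, here is a hedged sketch of what server.py might look like. The airport data, flight IDs, and function bodies are placeholders; the decorator usage assumes the official SDK's FastMCP API, with a no-op stand-in so the sketch runs even without the SDK installed.

```python
try:
    from mcp.server.fastmcp import FastMCP  # official SDK (uv add "mcp[cli]")
    mcp = FastMCP("flight-booking")
except ImportError:
    class _StubMCP:
        """No-op stand-in so this sketch runs without the SDK."""
        def resource(self, uri):
            return lambda fn: fn
        def tool(self):
            return lambda fn: fn
        def prompt(self):
            return lambda fn: fn
    mcp = _StubMCP()

@mcp.resource("file://airports")      # resources require a URI scheme
def get_airports() -> dict:
    # Made-up sample data standing in for the lab's airport list
    return {"LAX": "Los Angeles International", "JFK": "John F. Kennedy International"}

@mcp.tool()                           # tools perform actions and take parameters
def search_flights(origin: str, destination: str) -> list:
    return [{"id": "FL123", "origin": origin, "destination": destination}]

@mcp.prompt()                         # prompts are reusable templates for the LLM
def find_best_flight(origin: str, destination: str) -> str:
    return f"Find the best flight from {origin} to {destination}."
```

The decorated functions remain ordinary Python callables; decoration simply registers them so an MCP client can discover and invoke them.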






MCP Client Development  

1. MCP Client Overview  

- Many tools, such as Cursor, Claude Code, and Roo-Code, support MCP clients automatically through mcp.json.  
- If you are building your own AI agent, you can create an MCP client manually using the FastMCP client library.  

2. Basic MCP Client Structure  

A simple MCP client can:  
- Connect to the server (HTTP or STDIO)  
- List available tools  
- Call a tool (e.g., search_flights)  
- Read resources  
- Get prompts  

3. Client-Side Features  

Just as the server provides resources, tools, and prompts, the client has three main features:  

A. Roots  

- Roots are directories the client allows the server to access.  
- Purpose: Support tools like code linters, compilers, or filesystem tools.  
- Clients define allowed roots when creating a session.  
- The server reads them using context.session.list_roots().  

B. Sampling  

- Sometimes, the server needs help from LLMs (e.g., summarizing a resource).  
- The server cannot call LLMs directly.  
- Instead, it sends a sampling request to the client.  
Reasons the client controls:  
- Model selection  
- Token limits  
- Cost  
- AI infrastructure setup  
How it works:  
- On the client, implement a sampling handler.  
- On the server, call context.session.createMessage() to request sampling.  

C. Elicitation  

- The server can ask the user for more information if needed.  
For example:  
- The server suggests a flight and asks the user to confirm the selection.  
How it works:  
- The server calls context.elicit(<message>).  
- The client must have an elicitation callback handler.  
- This callback fetches user input and returns it to the server.  

4. Context (Very Important)  

- Context enables communication between the server and client during tool execution.  
The server can send:  
- context.info(...) for status updates  
- context.report_progress(...) for progress percentage  
- context.debug(...) for debugging messages  
This is useful for:  
- Long-running tasks  
- Multi-step operations  
- Booking workflows  
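The context flow can be mimicked without the SDK. In the sketch below, `MockContext` stands in for the real Context object that the server injects into a tool; the method names mirror the calls listed above.

```python
import asyncio

class MockContext:
    """Stand-in for the MCP request context injected into server tools."""
    async def info(self, msg):
        print(f"[info] {msg}")                 # status update to the client
    async def report_progress(self, done, total):
        print(f"[progress] {done}/{total}")    # progress percentage
    async def debug(self, msg):
        print(f"[debug] {msg}")                # debugging message

async def book_flight(flight_id, ctx):
    await ctx.info(f"Booking flight {flight_id}")
    for step, name in enumerate(["hold seat", "charge card", "issue ticket"], 1):
        await ctx.report_progress(step, 3)     # multi-step booking workflow
        await ctx.debug(f"completed: {name}")
    return f"booked {flight_id}"

# Logs the status lines above, then prints the final result.
print(asyncio.run(book_flight("FL123", MockContext())))
```

In a real server the client renders these info/progress messages in its UI while the long-running tool executes.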

5. Minimal MCP Client Flow 
 
1) Connect to the MCP server (HTTP or STDIO)  
2) List tools, resources, and prompts  
3) Call a tool and pass arguments  
4) The server may send:  
   - progress  
   - debug logs  
   - info messages  
   - sampling requests  
   - elicitation requests  
5) The client responds accordingly

Lab : Building an MCP Client

* Starting the MCP Flight Booking Server  
1. Start the Server  

- Open a terminal.  
- Navigate to the server directory:  
- cd /home/lab-user/flight-booking-server






2. Run the MCP Server  

- Use MCP CLI with streamable-http transport:  
- uv run mcp run server.py --transport streamable-http  

3. Server Details  

- Runs at: 127.0.0.1:8000  
Expected confirmation message:  
- Server running on 127.0.0.1:8000  
Clients will connect to:  
- http://localhost:8000/mcp/  

4. Important  

- Keep this terminal open. Closing it stops the server.  
- Required before testing any MCP client.

Running the Basic MCP Client  

1. Purpose  
  
- Use the basic client to discover the resources the flight booking MCP server provides.  
  
2. Steps  
  
- Keep the server terminal running.  
- Open a new terminal.  
- Navigate to the MCP client project:  
- cd /home/lab-user/mcp-client  
- Run the basic client:  
- uv run python basic_client.py





3. Expected Outcome  
  
- The client connects to the running server.  
- It lists all available resources.  
- You need to identify:  
- Airports resource (expected)  
- One additional resource that the server provides.


Examining tools_client.py (MCP Tool Call Analysis)  

1. Task Overview  

- You need to check how the MCP client calls server tools, especially the search_flights tool in Test 1.  

2. File to Examine  

- /home/lab-user/mcp-client/tools_client.py  

3. What to Look For  

In the code, find this section:  

```python
flight_result = await client.call_tool("search_flights", {
    "origin": "LAX",
    "destination": "???"
})
```


- Your goal is to identify the destination airport code used instead of "???".  

4. Optional Test Command  

- You can run the script to see tool calls in action:  
- cd /home/lab-user/mcp-client  
- uv run python tools_client.py  

5. Expected Discovery  

- The destination airport code is hard-coded in the file.  
- This is the value used when calling the search_flights tool.

Examining roots_client.py  

1. Task Overview  

- You need to check which file system directories the MCP client allows the server to access as roots.  

2. File to Examine  

- /home/lab-user/mcp-client/roots_client.py  

3. Roots Defined in Code  

```python
project_roots = [
    "file:///home/lab-user/",
    "file:///home/lab-user/flight-booking-server/",
    "file:///home/lab-user/mcp-client/"
]
```


  4. Analysis Requirement  

- Find out which directory is missing from the list above.  

5. Expected Finding  

- The roots include:  
/home/lab-user/  
/home/lab-user/flight-booking-server/  
/home/lab-user/mcp-client/  
- Missing Directory  
The missing directory is /home/lab-user/flight-booking-app/ (or, more generally, whichever directory your environment contains that is not in the list above).  





6. Optional Test Command  

- You can run the client with this command:  
cd /home/lab-user/mcp-client  
uv run python roots_client.py


MCP Sampling: How Clients Provide LLM Responses to Servers

* Sampling lets an MCP server ask the client to query its LLM. This keeps AI decisions with the client while keeping the server lightweight.

What to Do
Examine the file:
- /home/lab-user/mcp-client/sampling_client.py
Run the client:
- cd /home/lab-user/mcp-client
- uv run python sampling_client.py







* Key Concepts

1. What is Sampling?

- Sampling is when the server asks the client to generate text using its LLM.
Example:
- Server → “Summarize this text using your model.”
- Client → Runs LLM → Returns summary.
- Servers never call LLMs directly. Clients manage model choices, token limits, temperature, and more.

2. CreateMessageRequestParams

The object the sampling handler receives includes:
- messages → text that the server wants the LLM to process
- metadata → optional extra info (purpose, format, hints)
Example use cases in your lab:
- Travel explanation  
- Story creation  
- General recommendation  

3. How the Client Handles Sampling

Inside sampling_client.py, you will see:
- Sampling Callback Function
(Executed whenever the server calls context.session.createMessage())
It must:
- Accept a CreateMessageRequestParams object.
- Run an LLM call (mocked in lab).
- Return a CreateMessageResult.

4. CreateMessageResult

The client must return structured data:
- content: List of message chunks generated by the LLM
- metadata: (Optional) Info about the response
Example structure:

```python
return CreateMessageResult(
    content=[{"type": "text", "text": "Here is your generated travel explanation..."}],
    metadata={}
)
```
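A dependency-free mock of the handler shape makes the flow concrete. The dict keys below mirror the MCP sampling types but plain dicts are used so the sketch runs without the SDK; the "LLM" is a canned string, just as in the lab.

```python
def sampling_callback(params):
    """Mock sampling handler: receives request params, returns a result."""
    user_text = params["messages"][-1]["content"]["text"]
    generated = f"Mock LLM response to: {user_text}"   # a real client calls its LLM here
    return {
        "role": "assistant",
        "content": {"type": "text", "text": generated},
        "model": "mock-model",   # the client, not the server, chooses the model
    }

request = {"messages": [{"role": "user",
                         "content": {"type": "text", "text": "Summarize my trip"}}]}
print(sampling_callback(request)["content"]["text"])
# -> Mock LLM response to: Summarize my trip
```

The server never sees an API key or model name; it only receives the finished text, which is the whole point of sampling.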

5. What You’ll See When Running the Client

When you run:
- uv run python sampling_client.py
You will see:
- The sampling callback being triggered
- Input messages coming from the server
- Generated (mock) responses being returned
Sample tasks such as:
- Explaining travel plans  
- Creating a short story  
- Giving recommendations  

* Summary

Sampling allows the server to offload LLM operations to the client.  
Key steps:

- Server calls createMessage()
- Client receives CreateMessageRequestParams
- Client generates an LLM response
- Client returns CreateMessageResult


Examine the elicitation_client.py file

* Elicitation allows servers to request user input from clients. Experience true interactive MCP communication where the server can ask you for information directly.

- Examine the elicitation_client.py file
- Run it to see the interactive elicitation callback
- Understand how real user input is captured
- Experience live server-to-user communication

* Commands:

cd /home/lab-user/mcp-client
uv run python elicitation_client.py






Interactive features:
- Real-time user input prompts
- Intelligent response parsing (text or JSON)
- User can cancel with Ctrl+C
- Supports various input formats
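The callback at the heart of elicitation can be sketched without the SDK. The response shape below is illustrative; `input()` is replaced with an injectable function so the sketch runs non-interactively.

```python
def elicitation_callback(message, get_input=lambda prompt: "yes"):
    """Mock elicitation handler: server's question in, user's answer out."""
    answer = get_input(f"{message} > ")   # a real client would call input() here
    return {"action": "accept", "content": {"confirm": answer}}

result = elicitation_callback("Confirm booking of flight FL123?")
print(result)
# -> {'action': 'accept', 'content': {'confirm': 'yes'}}
```

Swapping `get_input` for the built-in `input` gives the live, interactive behavior the lab demonstrates.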


Now let's test a complete client that combines all MCP capabilities in one comprehensive implementation.

  1. Examine the complete_client.py file
  2. Run it to see all features working together
  3. Observe the phased testing approach
  4. See how all callbacks work in harmony

Command to run:

cd /home/lab-user/mcp-client
uv run python complete_client.py

🌟 Complete client features:

  • Server discovery and capability listing
  • Tool execution and resource access
  • Roots provision for file system access
  • Sampling for LLM request handling
  • Elicitation for user input handling








Lab : Kubernetes MCP Server

- Check what namespaces exist in the cluster
- List all pods across all namespaces
- Check what nodes are available
- Explore any running services

Commands:
kubectl get namespaces
kubectl get pods --all-namespaces
kubectl get nodes
kubectl get services --all-namespaces









* Understanding k8s-mcp-server

Let's look at what the k8s-mcp-server provides.

Key Features:

16 Kubernetes Tools - Complete cluster management capabilities
Pod Operations - List, describe, get logs, delete pods
Node Management - Get node information and metrics
Resource Operations - Create, update, and manage Kubernetes resources
Helm Integration - Install, upgrade, and manage Helm charts
Event Monitoring - Get cluster events and troubleshooting information

Benefits:
- Natural language cluster management
- AI-powered troubleshooting
- Automated resource management
- Seamless integration with Roo-Code


Now let's get the k8s-mcp-server Docker image ready for use.

Pull the latest k8s-mcp-server image from Docker Hub
Verify the image was downloaded successfully
Check the image details

Commands:
sudo docker pull ginnux/k8s-mcp-server:latest
sudo docker images | grep k8s-mcp-server
sudo docker inspect ginnux/k8s-mcp-server:latest







 
Now let's integrate the k8s-mcp-server with Roo-Code so you can manage your Kubernetes cluster using natural language.

Configuration Steps:

  1. Click on the 🦘 kangaroo icon to open Roo-Code
  2. Click on the 3 servers icon at the top of Roo-Code
  3. Go to Edit Project MCP
  4. Add the following configuration to your MCP settings

📋 Configuration to add:

```json
{
  "mcpServers": {
    "k8s-mcp-server": {
      "command": "sudo",
      "args": [
        "docker",
        "run",
        "-i",
        "--rm",
        "-v",
        "/home/lab-user/.kube/config:/home/appuser/.kube/config:ro",
        "ginnux/k8s-mcp-server:latest",
        "--mode",
        "stdio"
      ]
    }
  }
}
```

Configuration Explanation:

  • sudo docker run - Runs the container with elevated privileges
  • -i --rm - Interactive mode, remove container after use
  • -v /home/lab-user/.kube/config:/home/appuser/.kube/config:ro - Mounts your kubeconfig as read-only
  • --mode stdio - Uses standard I/O for MCP communication




Now that the k8s-mcp-server is configured, let's test the connection and start using it to manage your cluster.

🔗 Connection Test:

  1. Ask Roo-Code to connect to your Kubernetes cluster
  2. Verify the connection is working

💬 Try these natural language commands:

  • "Connect to my Kubernetes cluster"
The user wants to connect to their Kubernetes cluster. Since there are no files in the current workspace (no kubeconfig or other Kubernetes configuration files visible), we need to clarify how the user wants to connect:

- Do they have an existing kubeconfig file that we should use?
- Is the Kubernetes cluster local, remote, or cloud-based?
- Do they want to connect via kubectl CLI, or through some other tool or integration?

Once we have this information, we can create a detailed plan for connecting to the cluster that may include:
- Setting up or verifying the kubeconfig file
- Testing connection via kubectl
- Possibly installing or configuring related tools for the connection

Because the environment doesn't have any files yet, it is essential to get connection details from the user before proceeding.

The most relevant next step is to ask clarifying questions about their cluster and how they want to connect.

  • "Show me the status of my cluster"
The user's request is to show the status of their Kubernetes cluster. The environment includes connection to the k8s-mcp-server, which provides several tools for interacting with the Kubernetes cluster such as retrieving lists of resources, describing them, and checking events.

To provide the "status of the cluster," I should consider a comprehensive summary that could include:
- General health overview (e.g., nodes status)
- Status of pods and deployments
- Events indicating issues or warnings

A good approach is to:
1. List nodes and their statuses.
2. List pods with their statuses, potentially grouped by namespace.
3. Check if there are any notable events or warnings.
4. Optionally, get a general overview of deployments or workloads.

Since the cluster status can be large, I'll prioritize getting node status and pods status first.

The k8s-mcp-server has tools to:
- listResources (e.g., list nodes, pods)
- getEvents (to get warning or error events)

I have the namespace parameter mostly optional for nodes (cluster scoped) but likely needed for pods.

Since I need a general cluster status, I think fetching:
- All nodes and their conditions
- All pods in all namespaces and their statuses
- Recent warning or error events in all namespaces

I will proceed to gather this info by listing all nodes and pods first.

I will start by listing nodes to get node statuses.

```json
{
  "Kind": "Node"
}
```

  • "List all namespaces in my cluster"
The task is to list all namespaces in the Kubernetes cluster. Since this is a cluster-related Kubernetes operation, the most appropriate approach is to use the connected MCP server "k8s-mcp-server" which provides Kubernetes cluster operations. One of the available tools is `listResources`, which lists all resources of a specific Kubernetes kind.

Namespaces in Kubernetes are a resource of the kind "Namespace". So, I can use the `listResources` tool from the MCP server with kind="Namespace" to get the list of all namespaces in the cluster.

I have the necessary information to proceed with this request without needing further input from the user because:
- To list namespaces, I only need the kind, which is "Namespace"
- Namespace parameter for scope isn't mandatory here (can be cluster-wide)
The output should list all namespaces.

I will invoke the MCP server tool listResources with kind "Namespace".

```json
{
  "Kind": "Namespace"
}
```
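The `{"Kind": "Namespace"}` object above is the argument the agent passes to the server's tool. In MCP terms, that invocation travels inside a `tools/call` JSON-RPC request. A hedged sketch of that envelope; the tool name `listResources` and its argument come from the transcript above, while the envelope fields follow the MCP specification:

```python
import json

# Wrap a tool invocation in the MCP "tools/call" JSON-RPC envelope.
# "listResources" and the {"Kind": ...} argument are taken from the
# transcript above; the surrounding fields follow the MCP spec.
def build_tool_call(name: str, arguments: dict, request_id: int = 2) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

print(build_tool_call("listResources", {"Kind": "Namespace"}))
```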

  • "What nodes do I have?"
The user wants to know what nodes are present in their Kubernetes cluster. I have access to the k8s-mcp-server with tools to query cluster resources, including a tool to get node metrics and more generalized resource queries.

To fulfill the user's request, I can use the k8s-mcp-server to list the nodes currently in the cluster and provide details about them.

The k8s-mcp-server has a tool to "listResources" where the kind can be Nodes. This should list the nodes in the cluster.

Next, I will issue a request to the k8s-mcp-server to list all nodes.

```json
{
  "Kind": "Node"
}
```

🎯 Expected Behavior:

  • Roo-Code should recognize the k8s-mcp-server
  • It should be able to execute kubectl-like commands
  • You should see cluster information in natural language

Now let's test the k8s-mcp-server with real Kubernetes operations. We'll create resources and then simulate a troubleshooting scenario.

🚀 Task 1: Create an Nginx Pod

  1. Ask Roo-Code: "Create an nginx pod with nginx image"
  2. Verify the pod was created successfully
  3. Check the pod status using Roo-Code

🔍 Task 2: Verify Pod Creation

  • Ask Roo-Code: "Show me the status of the nginx pod"
  • Confirm the pod is running
  • Check pod details and logs if needed

⚠️ Task 3: Simulate a Broken Scenario

  1. Create a pod with an incorrect image: "Create a pod named test-pod with image BUSYYBOX"
  2. Ask Roo-Code to identify the root cause of the issue
  3. Use natural language to troubleshoot: "Why is my test-pod not starting?"

🎯 Learning Objectives:

  • Practice creating Kubernetes resources with AI
  • Learn to verify resource creation
  • Experience AI-powered troubleshooting
  • Understand how to ask for help with broken resources


The task is to create an nginx pod with the nginx image in a Kubernetes environment.

To accomplish this task, I need to:
1. Understand the best way to create a pod in Kubernetes with a specified container image (nginx in this case).
2. Decide whether to create a raw pod YAML manifest or use a command to create the pod.
3. Since no files exist in the workspace, this will likely be a new YAML manifest or direct creation via a kubectl command.
4. Plan how to structure the pod manifest if I choose to create a YAML file.
5. Plan how to either create a pod manifest YAML file or directly apply the pod to the cluster.

I can either:
- Create a Kubernetes pod YAML manifest file for nginx.
- Or directly create the pod with a kubectl command.

Since the user wants an nginx pod with the nginx image, the simplest and most reusable approach is to create a YAML manifest for the pod.

The steps will involve:
- Writing a pod manifest YAML specifying apiVersion, kind, metadata, and spec with container nginx image.
- Optionally, namespace can be provided (default if not specified).

I will create a detailed plan outlining this approach with a clear step to create the YAML manifest and then apply it to the cluster or create the pod directly.

I do not have any files or code to read or any additional parameters from the user yet.

Next, I will create the plan with a simple pod manifest and the creation steps.

Plan to create an nginx pod with nginx image:

```mermaid
graph TD
  A[Create nginx pod manifest YAML] --> B[Define apiVersion, kind, metadata]
  B --> C[Specify pod spec with nginx container and image]
  C --> D["Save manifest file (e.g., nginx-pod.yaml)"]
  D --> E[Apply pod manifest to Kubernetes cluster using kubectl]
```

Steps:
1. Create a YAML manifest file named `nginx-pod.yaml` containing the pod specification:
   - `apiVersion: v1`
   - `kind: Pod`
   - `metadata` with `name: nginx`
   - `spec` with one container named `nginx` using image `nginx`
2. Use `kubectl apply -f nginx-pod.yaml` or `kubectl create -f nginx-pod.yaml` to create the pod.
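The manifest those steps describe would look like this (file name `nginx-pod.yaml` as in step 1):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
```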

Alternatively, if the user wants just a command without a file, can also do:
```bash
kubectl run nginx --image=nginx --restart=Never
```

Do you prefer to create a YAML manifest file for the pod or create the pod directly with a kubectl command? Also, do you want the pod to be created in a specific namespace or the default namespace?

Please confirm or suggest changes to this plan.


To create an nginx pod in the default namespace using a kubectl command, run:

```bash
kubectl run nginx --image=nginx --restart=Never
```

This command creates a pod named "nginx" using the official nginx image. Since you opted to create the pod directly using kubectl, this is the simplest approach.

If you need any further assistance with this, please let me know.
