AI-900-5,6

Module 5: Get Started with Azure Generative AI


Introduction to Generative AI with Microsoft Azure

Generative AI, a branch of AI that creates new content, has quickly changed how we work and innovate. From writing to coding, it is impacting every industry.

Key Use Cases:

Marketing: Microsoft Copilot helps write product descriptions, blogs, and social media posts quickly and consistently.

Customer Support: AI chatbots handle customer queries around the clock in natural language.

Code Generation: GitHub Copilot suggests code, functions, and even full modules based on text prompts.

Image & Video Creation: Azure AI Foundry models transform text into visuals for campaigns or concept art.

Education: AI tools develop personalized quizzes, notes, and study guides for each student.

Microsoft’s Role: 

With tools like Copilot, Azure AI, and GitHub Copilot, Microsoft offers a complete ecosystem for developers to build and innovate with generative AI.



Understanding Generative AI Applications

Generative AI applications use language models that serve as the basis for user interactions and app functions.




- Assistants

Chat-based assistants, such as Microsoft Copilot, help users find information, create content, and complete tasks efficiently.

For users: Improve productivity with AI-generated content and automated tasks.

For developers: Expand Copilot with plug-ins or create similar assistants for apps and business processes.

- Agents

Agents are sophisticated AI systems that can act on their own. For example, an agent could look up where your meeting is and book a taxi to get you there. They consist of:

A language model for reasoning and understanding.

Instructions that define goals and behaviors.

Tools or functions to perform tasks.


Modern AI solutions often combine assistants and agents through orchestration, which coordinates multiple AI components to work together smoothly.


- Framework for Generative AI Applications

Generative AI apps can be divided into three types:

Ready-to-use: No coding required—just start chatting.

Extendable: Customize with your own data (e.g., Microsoft Copilot).

Built from scratch: Create your own assistants or agents using language models.

Microsoft tools like Copilot Studio and Azure AI Foundry make it easy to extend or build these generative AI applications.


Understanding Tools to Develop Generative AI

Microsoft provides a robust set of tools for creating generative AI solutions. These tools support developers and organizations throughout the entire AI lifecycle.

- Azure AI Foundry

At the center is Azure AI Foundry, a platform-as-a-service that enables developers to build, customize, and deploy generative AI models in the cloud. It gives users a single portal for creating and managing AI applications and agents.


- Key Components:

Model Catalog: Discover, compare, and deploy AI models.

Playgrounds: Test and experiment with models easily.

Azure AI Services: Build, test, and deploy AI-powered solutions.

Solutions: Create custom agents and fine-tune models.

Observability: Monitor usage and model performance.


- Copilot Studio

For low-code development, Microsoft Copilot Studio enables users to create conversational AI experiences easily. It is fully managed software as a service (SaaS) hosted in the Microsoft 365 environment and integrates seamlessly with chat tools like Microsoft Teams. There is no need to manage infrastructure or deployment.



Understanding Azure AI Foundry’s Model Catalog

Azure AI Foundry offers a rich model catalog: a marketplace of AI models from Microsoft, its partners, and the community.




- Key Highlights:

Foundation Models: Microsoft’s Azure OpenAI models are pretrained on large text datasets. They can be fine-tuned for specific tasks with smaller datasets.

Easy Deployment: You can deploy models directly to an endpoint without additional training.

Customization: Developers can adjust models for specific tasks or to improve performance.

Playground Testing: You can try and compare models interactively in the playground before deploying.

Model Leaderboards (Preview): You can see top-performing models ranked by quality, cost, and throughput, with visual comparisons across metrics.




Azure AI Foundry’s model catalog makes it easy to explore, test, and deploy the right generative AI models for your applications.



Understanding Azure AI Foundry Capabilities

The Azure AI Foundry portal provides a user-friendly interface organized around hubs and projects to manage AI development.


Hubs give you complete access to Azure AI and Azure Machine Learning resources.

Projects focus on specific tasks, such as developing models or agents.


From the overview page, you can manage all your projects and explore different Azure AI services, including:

- Azure AI Speech

- Azure AI Language

- Azure AI Vision

- Azure AI Content Safety


You can also test services and models in playgrounds, which include a chat playground for interactive model testing.


Customizing Models

Azure AI Foundry offers several ways to improve model quality, safety, and performance:

- Grounding Data: This aligns AI outputs with factual or reliable data sources.

- Retrieval-Augmented Generation (RAG): This connects models to internal databases for accurate and real-time responses.

- Fine-tuning: This adapts pretrained models for specific business or domain tasks.

- Security & Governance Controls: These ensure data safety, proper access, and content accuracy.


Azure AI Foundry gives developers complete control, covering everything from building and testing models to customizing and securing them for enterprise-ready AI solutions.



Understanding Observability in Generative AI

Observability in generative AI refers to monitoring and assessing how well an AI model works and the safety of its outputs.

Azure AI Foundry offers observability tools called evaluators. These tools measure the quality, safety, and reliability of AI-generated responses.


- Three Key Evaluation Dimensions

Performance and Quality Evaluators: Check accuracy, groundedness, and relevance of AI output.

Risk and Safety Evaluators: Identify harmful or inappropriate content. Ensure safe AI behavior.

Custom Evaluators: Measure performance in specific areas for specialized use cases.


- Common Evaluators in Azure AI Foundry

Groundedness: Ensures responses align with reliable data.

Relevance: Checks if responses match the user query.

Fluency: Evaluates language flow and readability.

Coherence: Assesses logical structure and clarity.

Content Safety: Detects and prevents unsafe or biased outputs.


These observability tools help developers create trustworthy, high-quality generative AI applications using Azure AI Foundry.


Module 6: Introduction to Natural Language Processing Concepts


- Introduction to Natural Language Processing (NLP)

Natural Language Processing (NLP) is a branch of AI that helps computers understand, interpret, and respond to human language, whether written or spoken. It allows machines to make sense of text just like people do.


- Common NLP Use Cases

Speech-to-Text and Text-to-Speech: Convert audio to written text or the other way around.

Machine Translation: Translate text between languages, like English and Japanese.

Text Classification: Label content, such as marking emails as spam or not.

Entity Extraction: Identify keywords, names, or important details in text.

Question Answering: Provide answers to user questions, such as “What is the capital of France?”

Text Summarization: Shorten long documents into brief summaries.



Although understanding language is difficult for computers, progress in AI and NLP has made it possible to perform these tasks accurately and effectively today.


Understanding How Language Is Processed

Early Natural Language Processing (NLP) techniques applied statistical analysis to a body of text, known as a corpus, to find meaning. They identified the most common words to determine what a document is about.


- Tokenization

The first step in text analysis is tokenization. This means breaking text into smaller pieces called tokens, which can be words, parts of words, or phrases.  

Example:

“We choose to go to the moon” → [we, choose, to, go, to, the, moon]


- Key Tokenization Concepts

Text Normalization: This simplifies text by removing punctuation and converting text to lowercase.

Stop Word Removal: This removes common words like “the” or “a” that contribute little meaning.

n-grams: These are groups of words (e.g., “I have” or “he walked”) that provide more context.

Stemming: This reduces words to their root form. For example, “power,” “powered,” and “powerful” become one token.


These preprocessing techniques help computers understand and analyze human language better, forming the foundation for modern NLP models.
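The preprocessing steps above can be sketched in a few lines of Python. This is a minimal illustration: the stop-word list is a tiny invented one, not a standard resource.

```python
import re

STOP_WORDS = {"we", "to", "the", "a", "of"}  # tiny illustrative stop-word list

def normalize(text):
    """Text normalization: lowercase and strip punctuation."""
    return re.sub(r"[^\w\s]", "", text.lower())

def tokenize(text):
    """Split normalized text into word tokens."""
    return normalize(text).split()

def remove_stop_words(tokens):
    """Drop common words that contribute little meaning."""
    return [t for t in tokens if t not in STOP_WORDS]

def ngrams(tokens, n=2):
    """Group adjacent tokens into n-grams for extra context."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = tokenize("We choose to go to the moon!")
print(tokens)                     # ['we', 'choose', 'to', 'go', 'to', 'the', 'moon']
print(remove_stop_words(tokens))  # ['choose', 'go', 'moon']
print(ngrams(tokens))             # ['we choose', 'choose to', 'to go', ...]
```

Stemming is deliberately omitted here; real pipelines typically use a library stemmer rather than hand-written rules.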



Statistical Techniques for NLP

Two foundational statistical techniques in Natural Language Processing (NLP) are Naïve Bayes and TF-IDF.

Naïve Bayes

Originally used for filtering spam emails, this technique identifies which tokens or words are strongly linked to a category. For example, the phrase "miracle cure" often appears in spam. It uses bag-of-words features, which consider the presence of words without regard to their order.
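A minimal bag-of-words Naïve Bayes classifier can be sketched as follows. The four training messages and the "spam"/"ham" labels are hypothetical, purely to make the mechanics concrete.

```python
import math
from collections import Counter, defaultdict

# Hypothetical training set: (message, label)
training = [
    ("miracle cure guaranteed free", "spam"),
    ("free miracle offer act now", "spam"),
    ("meeting agenda attached see notes", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

# Bag-of-words: count word occurrences per class, ignoring word order
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in training:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Pick the class with the highest log-probability (Laplace smoothing)."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("free miracle cure"))   # spam
print(predict("team meeting notes"))  # ham
```

Because only word presence counts, "free miracle cure" scores far higher under the spam class than the ham class, regardless of where those words appear in the message.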

Term Frequency-Inverse Document Frequency (TF-IDF)

This method compares the frequency of a word in one document to its frequency across a larger collection of texts, known as a corpus. It helps identify relevant or unique words in each document. It is commonly used in information retrieval and document classification.


Example

When you tokenize the phrase “We choose to go to the moon” and count the frequency of each token, key words like “moon” stand out. TF-IDF further highlights words that are important in one document but rare throughout the corpus, improving the understanding of topics.
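The TF-IDF computation itself is short enough to sketch directly. The three-document corpus below is hypothetical; note how “to”, which appears in every document, scores zero, while “moon”, which appears only in the first, scores higher.

```python
import math

# Hypothetical three-document corpus
corpus = [
    "we choose to go to the moon",
    "we choose to stay home",
    "it is easy to stay home",
]

def tf_idf(term, doc, corpus):
    """Term frequency in one document, scaled down by how many
    documents in the corpus contain the term."""
    tokens = doc.split()
    tf = tokens.count(term) / len(tokens)
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / df)
    return tf * idf

doc = corpus[0]
print(tf_idf("to", doc, corpus))    # 0.0: "to" occurs in every document
print(tf_idf("moon", doc, corpus))  # higher: "moon" is unique to this document
```

Production systems usually apply smoothing variants of IDF, but the principle is the same: frequent-everywhere words are discounted, document-specific words are promoted.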


These statistical techniques laid the groundwork for NLP before the arrival of deep learning and semantic models.



Understanding Semantic Language Models

Modern NLP relies on deep learning language models that capture the relationships between tokens.

- Embeddings

Tokens are represented as vectors, or arrays of numbers, in multi-dimensional space.

Example : 

- 4 ("dog"): [10, 3, 2]

- 8 ("cat"): [10, 3, 1]

- 9 ("puppy"): [5, 2, 1]

- 10 ("skateboard"): [-3, 3, 2]




Semantically similar words, such as “dog” and “puppy,” have vectors pointing in similar directions, while unrelated words like “skateboard” point differently.

These embeddings help models understand meaning beyond simple word frequency.
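The “similar direction” idea is usually measured with cosine similarity, which can be sketched directly against the illustrative low-dimensional vectors above (real embeddings have hundreds or thousands of dimensions):

```python
import math

# Illustrative embedding vectors from the example above
embeddings = {
    "dog": [10, 3, 2],
    "cat": [10, 3, 1],
    "puppy": [5, 2, 1],
    "skateboard": [-3, 3, 2],
}

def cosine_similarity(a, b):
    """1.0 means the vectors point the same way; lower means less related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))       # high
print(cosine_similarity(embeddings["dog"], embeddings["skateboard"]))  # negative
```

“dog” and “puppy” score close to 1.0, while “dog” and “skateboard” score below zero, matching the intuition that their vectors point in different directions.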


- Modern NLP Workflow

First, tokenize a large corpus of text.

Next, train language models on these tokens.

Finally, use the models for various NLP tasks like classification, summarization, and translation.




- Machine Learning for Text Classification

Classification algorithms, such as logistic regression, can predict categories from text.

For example, consider sentiment analysis on restaurant reviews:

Positive: “The food and service were great” is labeled 1.

Negative: “Slow service and substandard food” is labeled 0.

The model learns relationships between tokens and labels to predict outcomes accurately.
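A minimal sketch of this idea, using hypothetical labeled reviews and plain gradient descent in place of a production trainer, with one weight per bag-of-words token:

```python
import math
from collections import defaultdict

# Hypothetical labeled reviews: 1 = positive, 0 = negative
reviews = [
    ("the food and service were great", 1),
    ("great food friendly service", 1),
    ("slow service and substandard food", 0),
    ("substandard food very slow", 0),
]

weights = defaultdict(float)  # one weight per vocabulary word
bias = 0.0

def predict_proba(text):
    """Sigmoid of the summed word weights: probability the text is positive."""
    z = bias + sum(weights[w] for w in text.split())
    return 1 / (1 + math.exp(-z))

# Train with simple gradient descent on log loss
for _ in range(200):
    for text, label in reviews:
        error = predict_proba(text) - label
        bias -= 0.1 * error
        for w in text.split():
            weights[w] -= 0.1 * error

print(predict_proba("great food"))        # well above 0.5: positive
print(predict_proba("slow substandard"))  # well below 0.5: negative
```

After training, words that appear only in positive reviews (“great”) carry positive weights, words that appear only in negative ones (“slow”, “substandard”) carry negative weights, and words common to both (“food”, “service”) stay near zero.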


Semantic models form the basis of today’s powerful NLP applications, which allow machines to understand meaning, context, and sentiment in text.
