By Manpreet Singh

How to Deploy Llama 2 Model in Azure AI Studio?

Llama 2, developed by Meta Platforms in collaboration with Microsoft, is a large language model that can generate text, answer complex questions, and engage in natural, engaging conversations with users. Azure AI Studio, Microsoft's comprehensive AI platform, offers a seamless and user-friendly environment to deploy and manage such models.


This article aims to guide you through the process of deploying a Llama 2 model in Azure AI Studio. Whether you are a seasoned AI practitioner or a beginner stepping into the world of AI, this guide will provide you with a step-by-step approach to successfully deploy your Llama 2 model.


Table of Contents:

  • What is the Llama Model in Azure AI Studio?
  • Benefits of Llama Models in Azure AI Studio
  • Types of Llama Model
  • Steps to Deploy Llama 2 Model in Azure AI Studio
  • Settings
  • Conclusion


What is the Llama Model in Azure AI Studio?

Llama 2 is a powerful language model developed by Meta Platforms and added to Microsoft’s Azure AI Studio. It’s an open-source AI model that can generate text and chat responses for various domains and tasks.


The Llama 2 family of large language models (LLMs) is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Llama-2-chat.


Llama 2 is now available in the model catalog in Azure Machine Learning. The model catalog, currently in public preview in Azure Machine Learning, is your hub for foundation models and empowers users to easily discover, customize and operationalize large foundation models at scale.


Learn more about the Llama AI model: Introduction to Llama AI Model


Benefits of Llama Models in Azure AI Studio:

  • Innovation and Scalability: Embedding Llama 2 and other pre-trained large language models (LLMs) into applications with Azure enables customers to innovate faster, tapping into Azure’s end-to-end machine learning capabilities, unmatched scalability, and built-in security.

  • Safety Integration: Deployments of Llama 2 models in Azure come standard with Azure AI Content Safety integration, offering a built-in layered approach to safety, and following responsible AI best practices.

  • Cost-Effective: Offering a wide range of open-source Llama models expands the AI choices available to Azure customers and gives them a far lower-cost option.


This native support for Llama 2 within the Azure Machine Learning model catalog enables users to use these models without having to manage any of the infrastructure or environment dependencies.


Types of Llama Model:

Here are the types of Llama models available in Azure AI Studio:

  1. Llama-2-7b (Text Generation)

  2. Llama-2-7b-Chat (Chat Completion)

  3. Llama-2-13b (Text Generation)

  4. Llama-2-13b-Chat (Chat Completion)

  5. Llama-2-70b (Text Generation)

  6. Llama-2-70b-Chat (Chat Completion)


These models are part of the Llama 2 family of large language models (LLMs), a collection of pre-trained and fine-tuned generative text models. The fine-tuned versions, called Llama-2-chat, are optimized for dialogue use cases.


Steps to Deploy Llama 2 Model in Azure AI Studio


STEP 1: Open Azure AI Studio and go to the "Explore" section.


[Screenshot: Deploy Llama Model in Azure AI Studio]

STEP 2: Under the "Model catalog" section, look for the Llama 2 models and click "View Model". A list of available Llama models will be displayed. Choose the Llama model you want to deploy.


[Screenshot: Select a Llama Model in Azure AI Studio]


STEP 3: Click the model to open its details page. Then click "Deploy" and select the "Pay-as-you-go" option.


[Screenshot: Select the Pay-as-you-go option to deploy a Llama Model in Azure AI Studio]

There are two options available:

  1. Pay-as-you-go: This pricing model lets you pay for services as they are used. In Azure AI Studio, certain models in the model catalog can be deployed as a service with pay-as-you-go billing. This lets you consume them as an API without hosting them on your subscription, while maintaining the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription (see the sketch after this list for how such a deployment is called as an API).

  2. Real-Time Endpoint: Deployments are hosted within an endpoint and can receive data from clients and send responses back in real time. You can invoke the endpoint for real-time inference for chat, copilot, or another generative AI application. Prompt flow supports endpoint deployment from a flow or a bulk test run.
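
Once a pay-as-you-go deployment is created, it is consumed as a plain HTTPS API. Below is a minimal sketch, assuming the deployment exposes an OpenAI-compatible chat-completions route; the endpoint URL and API key are placeholders that you copy from your deployment's details page in Azure AI Studio.

# Minimal sketch of calling a pay-as-you-go Llama 2 chat deployment.
# ENDPOINT_URL and API_KEY are placeholders; copy the real values from
# your deployment's details page. The request/response schema shown here
# assumes an OpenAI-compatible chat-completions route.
import requests

ENDPOINT_URL = "https://<your-deployment>.<region>.inference.ai.azure.com/v1/chat/completions"
API_KEY = "<your-api-key>"

payload = {
    "messages": [
        {"role": "user", "content": "Summarize what Llama 2 is in one sentence."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])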


STEP 4: Select the AI project where you want to deploy the model. If you haven't created any AI project yet, click "Create a new AI project".


[Screenshot: Select an AI project to deploy a Llama Model in Azure AI Studio]

Otherwise, click "Continue to deploy".


Go to the "Marketplace offer details" section to review the pricing.


[Screenshot: See pricing before deploying a Llama Model in Azure AI Studio]

STEP 5: A dialog box will appear. Enter the desired deployment name and click "Deploy".


[Screenshot: Enter the deployment name to deploy a Llama Model in Azure AI Studio]

STEP 6: Once the deployment is complete, click “Open in playground” to test and see how the deployed model works.


[Screenshot: Llama Model playground in Azure AI Studio]


Settings

Below are the settings that allow you to fine-tune the behavior of the model to better suit your specific use case. The exact availability and behavior of these settings might vary depending on the specific model and platform you use.


[Screenshot: Llama Model playground settings in Azure AI Studio]

Max_tokens: This setting caps the length of the model's response by limiting the number of tokens it may generate. The API supports a maximum of 4,096 tokens shared between the prompt and the model's response; for example, if your prompt consumes 3,000 tokens, the response can be at most 1,096 tokens.


Temperature: This parameter controls the randomness of the model’s responses. Lowering the temperature results in more deterministic and repetitive responses, while increasing the temperature leads to more unexpected or creative responses.
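
To make the effect concrete, here is a small illustrative sketch (not the service's implementation) showing how temperature rescales a model's raw logits before sampling: low values sharpen the distribution toward the most likely token, while high values flatten it.

# Illustrative sketch: temperature rescales logits before sampling.
# A toy example, not Azure's or Llama 2's actual code.
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / max(temperature, 1e-6)  # guard against division by zero
    exp = np.exp(scaled - scaled.max())       # subtract max for numerical stability
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5])
print(softmax_with_temperature(logits, 0.2))  # sharp: near-deterministic choice
print(softmax_with_temperature(logits, 1.5))  # flat: more varied, "creative" choices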


Top_p: Also known as nucleus sampling, this setting is an alternative to temperature for controlling the randomness of the model's responses. At each step of the generation process, the model samples only from the smallest set of tokens whose cumulative probability reaches p.
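
For intuition, the sketch below (again a toy illustration, not the service's code) implements nucleus sampling over a small probability distribution: only the smallest set of tokens whose cumulative probability reaches top_p can ever be sampled.

# Illustrative sketch of nucleus (top-p) sampling.
import numpy as np

def nucleus_sample(probs: np.ndarray, top_p: float, rng: np.random.Generator) -> int:
    """Sample a token index from the smallest set of tokens whose
    cumulative probability mass is at least top_p."""
    order = np.argsort(probs)[::-1]                       # tokens by descending probability
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1  # size of the nucleus
    nucleus = order[:cutoff]
    weights = probs[nucleus] / probs[nucleus].sum()       # renormalize inside the nucleus
    return int(rng.choice(nucleus, p=weights))

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
# With top_p=0.9, only the three most probable tokens can ever be sampled.
print(nucleus_sample(probs, top_p=0.9, rng=rng))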


Stop: This setting allows you to specify a sequence at which the model should stop generating further tokens.


Logprobs: This setting returns the log probabilities of the tokens generated by the model.


Presence_penalty: This parameter penalizes tokens that have already appeared in the generated text, making the model less likely to repeat itself.


Use_beam_search: This determines whether to use beam search instead of sampling during model inference. Beam search is a search algorithm that explores the most promising branches of the tree of candidate token sequences. When use_beam_search is set to True, the best_of parameter must be greater than 1 and the temperature parameter must be 0.
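
As an illustration of that constraint, a request body enabling beam search might look like the following; the parameter names mirror the playground settings above, but treat the exact schema as an assumption and verify it against your deployment's API reference.

# Hypothetical request payload illustrating the beam-search constraints
# described above; verify field names against your deployment's schema.
payload = {
    "prompt": "Explain beam search in one sentence.",
    "use_beam_search": True,
    "best_of": 3,        # must be greater than 1 when use_beam_search is True
    "temperature": 0.0,  # must be 0 when use_beam_search is True
    "max_tokens": 128,
}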


Ignore_eos: This determines whether to ignore the End of Sequence (EOS) token and continue generating tokens after it is produced. When ignore_eos is set to True, the model continues to generate tokens even after it has generated an EOS token.


Conclusion

Deploying the Llama 2 model in Azure AI Studio is a straightforward task that opens up many new opportunities in artificial intelligence. This guide has walked you through the deployment step by step so you can enhance your applications with advanced language features.
