
OpenAI Chat in Python

This page was automatically generated by AI and has not yet been reviewed for accuracy.

The content and code samples on this page were generated using the ai CLI with customized prompts in this repository.

It's cool, but it's experimental. 😁

Please review the content and code before using them in your application.

This sample demonstrates how to use the OpenAI Chat Completions API, via an Azure OpenAI deployment, in a Python console application.

The sample consists of two Python source files, main.py and openai_chat_completions.py, both of which are walked through below.

How to generate this sample
Command
ai dev new openai-chat --python
Output
AI - Azure AI CLI, Version 1.0.0
Copyright (c) 2024 Microsoft Corporation. All Rights Reserved.

This PUBLIC PREVIEW version may change at any time.
See: https://aka.ms/azure-ai-cli-public-preview

Generating 'openai-chat' in 'openai-chat-py' (3 files)...

main.py
openai_chat_completions.py
requirements.txt

Generating 'openai-chat' in 'openai-chat-py' (3 files)... DONE!

main.py

STEP 1: Read the configuration settings from environment variables:

main.py
openai_api_key = os.getenv('AZURE_OPENAI_API_KEY', '<insert your OpenAI API key here>')
openai_api_version = os.getenv('AZURE_OPENAI_API_VERSION', '<insert your Azure OpenAI API version here>')
openai_endpoint = os.getenv('AZURE_OPENAI_ENDPOINT', '<insert your OpenAI endpoint here>')
openai_chat_deployment_name = os.getenv('AZURE_OPENAI_CHAT_DEPLOYMENT', '<insert your OpenAI chat deployment name here>')
openai_system_prompt = os.getenv('AZURE_OPENAI_SYSTEM_PROMPT', 'You are a helpful AI assistant.')
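The second argument to os.getenv is only a fallback, so if you forget to set a variable the '<insert ...>' placeholder is passed to the client as-is and the request will fail with an authentication or endpoint error. As a minimal sketch (not part of the generated sample), you could fail fast when a placeholder is still present:

import sys

settings = {
    'AZURE_OPENAI_API_KEY': openai_api_key,
    'AZURE_OPENAI_API_VERSION': openai_api_version,
    'AZURE_OPENAI_ENDPOINT': openai_endpoint,
    'AZURE_OPENAI_CHAT_DEPLOYMENT': openai_chat_deployment_name,
}

# Hypothetical fail-fast check: stop early if any setting still holds its placeholder.
missing = [name for name, value in settings.items() if value.startswith('<insert')]
if missing:
    print('Please set these environment variables: ' + ', '.join(missing))
    sys.exit(1)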

STEP 2: Initialize the AzureOpenAI client with the configuration settings and seed the chat message history with a system message:

main.py
client = AzureOpenAI(
    api_key=openai_api_key,
    api_version=openai_api_version,
    azure_endpoint=openai_endpoint,
)

messages = [
    {'role': 'system', 'content': openai_system_prompt},
]
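With the client in hand, every request goes through client.chat.completions.create, and the Azure deployment name is passed as the model argument. As a quick sanity check (hypothetical, not part of the generated sample), you could send the system message plus a single hard-coded user message:

# Hypothetical one-off request to verify the configuration.
response = client.chat.completions.create(
    model=openai_chat_deployment_name,
    messages=messages + [{'role': 'user', 'content': 'Say hello in one sentence.'}],
)
print(response.choices[0].message.content)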

STEP 3: Obtain user input, use the helper function to get the assistant's response, and display responses:

main.py
def main():
    while True:
        user_input = input('User: ')
        if user_input == 'exit' or user_input == '':
            break

        response_content = get_chat_completions(user_input)
        print(f"\nAssistant: {response_content}\n")

if __name__ == '__main__':
    try:
        main()
    except EOFError:
        pass
    except Exception as e:
        print(f"The sample encountered an error: {e}")
        sys.exit(1)
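The loop ends when the user types 'exit' or presses Enter on an empty line, and the entry point also swallows EOFError so Ctrl+D (or Ctrl+Z on Windows) exits cleanly. If you also want Ctrl+C to exit without a traceback, one optional variation (not part of the generated sample) is to catch KeyboardInterrupt in the same place:

# Optional variation: also exit quietly on Ctrl+C.
if __name__ == '__main__':
    try:
        main()
    except (EOFError, KeyboardInterrupt):
        pass
    except Exception as e:
        print(f"The sample encountered an error: {e}")
        sys.exit(1)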

openai_chat_completions.py

STEP 1: Create the client and initialize the chat message history with a system message:

openai_chat_completions.py
client = AzureOpenAI(
    api_key=openai_api_key,
    api_version=openai_api_version,
    azure_endpoint=openai_endpoint,
)

messages = [
    {'role': 'system', 'content': openai_system_prompt},
]
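The messages list is the running conversation state: each entry is a dict with a role ('system', 'user', or 'assistant') and a content string, and the helper below appends to it on every turn. For illustration only (the contents here are made up), after one exchange the history would look something like this:

# Illustrative shape of the history after one exchange (contents are hypothetical).
messages = [
    {'role': 'system', 'content': 'You are a helpful AI assistant.'},
    {'role': 'user', 'content': 'What is the capital of France?'},
    {'role': 'assistant', 'content': 'The capital of France is Paris.'},
]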

STEP 2: When the user provides input, add the user message to the chat message history, send the full history to the Chat Completions API, and record the assistant's reply:

openai_chat_completions.py
def get_chat_completions(user_input) -> str:
    # Add the user's message to the shared conversation history.
    messages.append({'role': 'user', 'content': user_input})

    # Send the whole history; the Azure deployment name is passed as 'model'.
    response = client.chat.completions.create(
        model=openai_chat_deployment_name,
        messages=messages,
    )

    # Keep the assistant's reply in the history so later turns have full context.
    response_content = response.choices[0].message.content
    messages.append({'role': 'assistant', 'content': response_content})

    return response_content
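Because the entire messages list is sent on every call, follow-up questions can rely on earlier turns. A short, hypothetical usage example (assuming the environment variables above are set):

# The second call can resolve "its" because the first exchange
# is already in the shared messages history.
print(get_chat_completions('What is the capital of France?'))
print(get_chat_completions('What is its population?'))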

STEP 3: Define the main function to handle user input and display the assistant's responses:

openai_chat_completions.py
def main():
    while True:
        user_input = input('User: ')
        if user_input == 'exit' or user_input == '':
            break

        response_content = get_chat_completions(user_input)
        print(f"\nAssistant: {response_content}\n")

if __name__ == '__main__':
    try:
        main()
    except EOFError:
        pass
    except Exception as e:
        print(f"The sample encountered an error: {e}")
        sys.exit(1)
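One thing to keep in mind: messages grows on every turn and the whole list is re-sent with each request, so token usage increases as the conversation gets longer. A minimal, hypothetical trimming helper (not part of the generated sample) could cap the history while always keeping the system message:

# Hypothetical helper: keep the system message plus only the most recent turns.
MAX_TURNS = 10  # number of user/assistant message pairs to keep; value is an assumption

def trim_history():
    recent = messages[1:][-2 * MAX_TURNS:]   # drop the oldest turns, keep the tail
    messages[:] = [messages[0]] + recent     # always keep the system message first

You could call trim_history() at the top of get_chat_completions; smarter strategies, such as summarizing older turns or counting tokens, are also possible.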