A Quick Look at the OpenAI Chat and Image APIs

  • Preface
  • OpenAI API Chat and Image Endpoints
    • Python
    • JavaScript
  • Azure OpenAI API Chat and Image Endpoints
    • Python
    • JavaScript
  • Summary

Preface

JavaScript and Python are two of the more popular languages for calling the OpenAI API. This post briefly documents how to use their client libraries for the OpenAI Chat and Image endpoints.

Examples are given for both the native OpenAI API and the Azure OpenAI API.

JavaScript is less convenient than Python for network configuration and backend development, but the AIGC_Playground project is a combined front-end and back-end project: we want a good in-browser experience down the road and to keep the front end and back end as separate as possible, so this project makes its requests with JavaScript first.

  • The examples below were run with Node v16.20.2 and Python 3.10.12.

OpenAI API Chat and Image Endpoints

Python

First, install the package: pip3 install openai

Then let's go straight to the code; it's fairly simple.

import httpx
import asyncio
from openai import OpenAI

proxy_url = "http://xxxx:xxxx"
api_key = "xxxx"

def use_proxy():
    http_client = None
    if not proxy_url:
        http_client = httpx.Client()
        return http_client
    http_client = httpx.Client(proxies={'http://': proxy_url, 'https://': proxy_url})
    return http_client

'''
# ===== Non-streaming chat test =====
'''
async def no_stream_chat():
    http_client = use_proxy()
    client = OpenAI(api_key=api_key, http_client=http_client)
    # Request
    results = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": [{"type": "text", "text": "Hello?"}]}])
    print(results.choices[0].message.content)

'''
# ===== Streaming chat test =====
'''
async def stream_chat():
    http_client = use_proxy()
    client = OpenAI(api_key=api_key, http_client=http_client)
    # Request
    results = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": [{"type": "text", "text": "Hello?"}]}],
        stream=True)
    for chunk in results:
        choice = chunk.choices
        if not choice:
            continue
        content = choice[0].delta.content
        print(content)

'''
# ===== Image generation =====
'''
async def gen_dell3_pic():
    http_client = use_proxy()
    client = OpenAI(api_key=api_key, http_client=http_client)
    # Request
    results = client.images.generate(model="dall-e-3", prompt="A cute cat")
    print(results.data[0].url)

if __name__ == "__main__":
    asyncio.run(no_stream_chat())
    asyncio.run(stream_chat())
    asyncio.run(gen_dell3_pic())
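
A side note: the functions above are declared async but call the synchronous OpenAI client, so nothing actually runs concurrently. The same package also ships an AsyncOpenAI client; the following is only a minimal sketch of the non-streaming call in truly async form, assuming the same openai 1.x package and an httpx.AsyncClient (AsyncOpenAI does not accept the synchronous httpx.Client):

import asyncio
import httpx
from openai import AsyncOpenAI

# Same placeholder values as in the sample above
proxy_url = "http://xxxx:xxxx"
api_key = "xxxx"

async def async_no_stream_chat():
    # AsyncOpenAI needs an httpx.AsyncClient, not httpx.Client
    http_client = httpx.AsyncClient(proxies={'http://': proxy_url, 'https://': proxy_url}) if proxy_url else httpx.AsyncClient()
    client = AsyncOpenAI(api_key=api_key, http_client=http_client)
    results = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": [{"type": "text", "text": "Hello?"}]}])
    print(results.choices[0].message.content)

if __name__ == "__main__":
    asyncio.run(async_no_stream_chat())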

JavaScript

First, install the packages: npm install openai https-proxy-agent --save

Then configure package.json to enable ES modules, as follows:

{"type": "module","dependencies": {"https-proxy-agent": "^7.0.6","openai": "^4.78.1"}
}

Again, straight to the code:

import { OpenAI } from "openai";

const proxyUrl = "http://xxxx:xxx";
const apiKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxx";

/** Configure network/proxy settings */
async function useProxy(client) {
  if (!proxyUrl) return;
  // Dynamically import the https-proxy-agent module
  const { HttpsProxyAgent } = await import("https-proxy-agent");
  // Use HttpsProxyAgent
  const agent = new HttpsProxyAgent(proxyUrl);
  const originalFetchWithTimeout = client.fetchWithTimeout;
  client.fetchWithTimeout = async (url, init, ms, controller) => {
    const { signal, ...options } = init || {};
    if (signal) signal.addEventListener("abort", () => controller.abort());
    const timeout = setTimeout(() => controller.abort(), ms);
    const fetchOptions = {
      signal: controller.signal,
      ...options,
      agent: agent,
    };
    if (fetchOptions.method) {
      // Custom methods like 'patch' need to be uppercased
      fetchOptions.method = fetchOptions.method.toUpperCase();
    }
    try {
      return await originalFetchWithTimeout.call(client, url, fetchOptions, ms, controller);
    } finally {
      clearTimeout(timeout);
    }
  };
}

/** ===== Non-streaming chat test ===== */
async function noStreamChat() {
  const client = new OpenAI({ apiKey, timeout: 5000 });
  await useProxy(client);
  // Request
  const results = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: [{ type: "text", text: "Hello?" }] }],
  });
  for (const choice of results.choices) {
    console.log(choice.message);
  }
}

/** ===== Streaming chat test ===== */
async function streamChat() {
  const client = new OpenAI({ apiKey, timeout: 5000 });
  await useProxy(client);
  // Request
  const results = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: [{ type: "text", text: "Hello?" }] }],
    stream: true,
  });
  for await (const chunk of results) {
    console.log(chunk.choices[0]?.delta?.content || "");
  }
}

/** ===== Image generation ===== */
async function genDell3Pic() {
  const client = new OpenAI({ apiKey, timeout: 60000 });
  await useProxy(client);
  // Request
  const results = await client.images.generate({
    model: "dall-e-3",
    prompt: "cute cat",
  });
  console.log(results.data[0].url);
}

/** ===== Main test function ===== */
async function main() {
  await noStreamChat();
  await streamChat();
  await genDell3Pic();
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
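
One small note on the streaming sample: console.log prints each delta fragment on its own line. If you prefer the reply to come out as one continuous string, a minimal variant of streamChat (same API calls, only the output handling changes) could look like this:

import { OpenAI } from "openai";

const apiKey = "xxxx"; // same placeholder as above

async function streamChatJoined() {
  const client = new OpenAI({ apiKey, timeout: 5000 });
  const results = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: [{ type: "text", text: "Hello?" }] }],
    stream: true,
  });
  // Write each delta without a trailing newline so the reply reads as one string
  for await (const chunk of results) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
  process.stdout.write("\n");
}

streamChatJoined().catch(console.error);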

Azure OpenAI API Chat and Image Endpoints

Python

Install the dependency (pip3 install openai) and go straight to the code:

import httpx
import asyncio
from openai import AzureOpenAI

proxy_url = ""
azure_endpoint = "xxxxxxxxxxxxxxxx"
api_key = "xxxxxxxxxxxxxxxx"
chat_deployment = "xxxxxx"
image_deployment = "xxxxxxx"

def use_proxy():
    http_client = None
    if not proxy_url:
        http_client = httpx.Client()
        return http_client
    http_client = httpx.Client(proxies={'http://': proxy_url, 'https://': proxy_url})
    return http_client

'''
# ===== Non-streaming chat test =====
'''
async def no_stream_chat():
    deployment = chat_deployment
    api_version = "2024-05-01-preview"
    http_client = use_proxy()
    client = AzureOpenAI(azure_endpoint=azure_endpoint, api_key=api_key,
                         api_version=api_version, http_client=http_client)
    # Request
    results = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": [{"type": "text", "text": "Hello?"}]}])
    print(results.choices[0].message.content)

'''
# ===== Streaming chat test =====
'''
async def stream_chat():
    deployment = chat_deployment
    api_version = "2024-05-01-preview"
    http_client = use_proxy()
    client = AzureOpenAI(azure_endpoint=azure_endpoint, api_key=api_key,
                         api_version=api_version, http_client=http_client)
    # Request
    results = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": [{"type": "text", "text": "Hello?"}]}],
        stream=True)
    for chunk in results:
        choice = chunk.choices
        if not choice:
            continue
        content = choice[0].delta.content
        print(content)

'''
# ===== Image generation =====
'''
async def gen_dell3_pic():
    deployment = image_deployment
    api_version = "2024-05-01-preview"
    http_client = use_proxy()
    client = AzureOpenAI(azure_endpoint=azure_endpoint, api_key=api_key,
                         api_version=api_version, http_client=http_client)
    # Request
    results = client.images.generate(model=deployment, prompt="cute cat")
    print(results.data[0].url)

if __name__ == "__main__":
    asyncio.run(no_stream_chat())
    asyncio.run(stream_chat())
    asyncio.run(gen_dell3_pic())
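
One compatibility note on the use_proxy helper used in both Python samples: it passes proxies= to httpx.Client, an argument that newer httpx releases deprecate and recent versions remove. If your httpx is 0.26 or later, a sketch of the equivalent configuration, assuming the same single-proxy setup as above, would be:

import httpx

proxy_url = "http://xxxx:xxxx"  # same placeholder as above

def use_proxy():
    if not proxy_url:
        return httpx.Client()
    # Route all traffic through one proxy
    return httpx.Client(proxy=proxy_url)
    # Per-scheme routing is also possible via mounts:
    # return httpx.Client(mounts={
    #     "http://": httpx.HTTPTransport(proxy=proxy_url),
    #     "https://": httpx.HTTPTransport(proxy=proxy_url),
    # })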

JavaScript

Install the packages (npm install openai https-proxy-agent --save), then configure package.json as follows:

{"type": "module","dependencies": {"https-proxy-agent": "^7.0.6","openai": "^4.78.1"}
}

Straight to the code:

import { AzureOpenAI } from "openai";

const proxyUrl = "";
const endpoint = "xxxxxxxxx";
const apiKey = "xxxxxxxxx";
const chatDeployment = "xxx";
const dellDeployment = "xxxxxx";

/** Configure network/proxy settings */
async function useProxy(client) {
  if (!proxyUrl) return;
  // Dynamically import the https-proxy-agent module
  const { HttpsProxyAgent } = await import("https-proxy-agent");
  // Use HttpsProxyAgent
  const agent = new HttpsProxyAgent(proxyUrl);
  const originalFetchWithTimeout = client.fetchWithTimeout;
  client.fetchWithTimeout = async (url, init, ms, controller) => {
    const { signal, ...options } = init || {};
    if (signal) signal.addEventListener("abort", () => controller.abort());
    const timeout = setTimeout(() => controller.abort(), ms);
    const fetchOptions = {
      signal: controller.signal,
      ...options,
      agent: agent,
    };
    if (fetchOptions.method) {
      // Custom methods like 'patch' need to be uppercased
      fetchOptions.method = fetchOptions.method.toUpperCase();
    }
    try {
      return await originalFetchWithTimeout.call(client, url, fetchOptions, ms, controller);
    } finally {
      clearTimeout(timeout);
    }
  };
}

/** ===== Non-streaming chat test ===== */
async function noStreamChat() {
  const deployment = chatDeployment;
  const apiVersion = "2024-05-01-preview";
  const client = new AzureOpenAI({ endpoint, apiKey, apiVersion, deployment });
  await useProxy(client);
  // Request
  const results = await client.chat.completions.create({
    messages: [{ role: "user", content: [{ type: "text", text: "Hello?" }] }],
  });
  for (const choice of results.choices) {
    console.log(choice.message);
  }
}

/** ===== Streaming chat test ===== */
async function streamChat() {
  const apiVersion = "2024-05-01-preview";
  const deployment = chatDeployment;
  const client = new AzureOpenAI({ endpoint, apiKey, apiVersion, deployment });
  await useProxy(client);
  // Request
  const results = await client.chat.completions.create({
    messages: [{ role: "user", content: [{ type: "text", text: "Hello?" }] }],
    stream: true,
  });
  for await (const chunk of results) {
    console.log(chunk.choices[0]?.delta?.content || "");
  }
}

/** ===== Image generation ===== */
async function genDell3Pic() {
  // The prompt to generate images from
  const deployment = dellDeployment;
  const apiVersion = "2024-04-01-preview";
  const client = new AzureOpenAI({ endpoint, apiKey, apiVersion, deployment });
  await useProxy(client);
  // Request
  const results = await client.images.generate({ prompt: "cute cat" });
  console.log("image.url :", results.data[0].url);
}

/** ===== Main test function ===== */
async function main() {
  await noStreamChat();
  await streamChat();
  await genDell3Pic();
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});

Summary

  1. For the JavaScript code, configuring a proxy requires overriding the arguments of the underlying fetch call. The openai npm package does allow passing in a custom fetch, but in my testing the returned response needed some extra handling, so for now I patch fetchWithTimeout instead (a sketch of an alternative using the httpAgent option follows after this list).

  2. The Python library lets you configure networking directly through httpx, which is quite convenient.

  3. Other endpoints will be covered in later posts.
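
Regarding point 1: as far as I can tell, the openai ^4 npm package also exposes an httpAgent constructor option, which may be a simpler way to send requests through a proxy than patching fetchWithTimeout. This is only a sketch I have not verified against every SDK version, reusing the placeholders and the https-proxy-agent dependency from the package.json above:

import { OpenAI } from "openai";
import { HttpsProxyAgent } from "https-proxy-agent";

const proxyUrl = "http://xxxx:xxxx"; // placeholder, as in the samples above
const apiKey = "xxxx";

// httpAgent routes every request made by this client through the proxy
const client = new OpenAI({
  apiKey,
  httpAgent: proxyUrl ? new HttpsProxyAgent(proxyUrl) : undefined,
});

const results = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello?" }],
});
console.log(results.choices[0].message.content);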
