
Local Large Models and How to Use Domestic AIs Compatible with OpenAI ChatGPT Interface

In video translation and dubbing software, AI large models can serve as efficient translation channels, significantly improving translation quality by taking context into account.

Currently, most domestic AI services expose APIs compatible with OpenAI's, so they can be used directly through the OpenAI ChatGPT channel, as can locally deployed large models. You can also deploy and run a model locally with Ollama.

Moonshot AI Usage

  1. Menu Bar -- Translation Settings -- OpenAI ChatGPT API Settings Interface
  2. Fill in https://api.moonshot.cn/v1 in the API interface address.
  3. Fill in the API Key obtained from the Moonshot Open Platform in the SK field, which can be obtained from this website: https://platform.moonshot.cn/console/api-keys
  4. Fill in moonshot-v1-8k,moonshot-v1-32k,moonshot-v1-128k in the model text box area.
  5. Then select the model you want to use in the model selection; if the test passes, keep the settings.
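All of the providers below speak the same OpenAI-style chat completions protocol; the software simply combines the API interface address, the SK, and the model name you entered into one HTTP request. A minimal stdlib sketch of how such a request is assembled (the endpoint path and payload shape follow the OpenAI chat completions convention; the prompt text and key are illustrative):

```python
import json
from urllib import request

def build_chat_request(api_base: str, sk: str, model: str, text: str) -> request.Request:
    """Build an OpenAI-compatible chat completions request.

    api_base is the "API interface address" from the settings,
    e.g. https://api.moonshot.cn/v1; sk is the API Key.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Translate the user's subtitle text into English."},
            {"role": "user", "content": text},
        ],
    }
    return request.Request(
        url=api_base.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {sk}",
        },
        method="POST",
    )

req = build_chat_request("https://api.moonshot.cn/v1", "sk-xxxx", "moonshot-v1-8k", "你好")
print(req.full_url)  # https://api.moonshot.cn/v1/chat/completions
# Actually sending it (urllib.request.urlopen(req)) requires a valid key.
```

Switching providers only changes `api_base`, `sk`, and `model`, which is why the same settings screen works for every service in this document.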

DeepSeek AI Usage

  1. Menu Bar -- Translation Settings -- OpenAI ChatGPT API Settings Interface
  2. Fill in https://api.deepseek.com/v1 in the API interface address.
  3. Fill in the API Key obtained from the DeepSeek Open Platform in the SK field, which can be obtained from this website: https://platform.deepseek.com/api_keys
  4. Fill in deepseek-chat in the model text box area.
  5. Then select deepseek-chat in the model selection; if the test passes, keep the settings.

Zhipu AI BigModel Usage

  1. Menu Bar -- Translation Settings -- OpenAI ChatGPT API Settings Interface
  2. Fill in https://open.bigmodel.cn/api/paas/v4/ in the API interface address.
  3. Fill in the API Key obtained from the Zhipu BigModel platform in the SK field, which can be obtained from this website: https://www.bigmodel.cn/usercenter/apikeys
  4. Fill in glm-4-plus,glm-4-0520,glm-4,glm-4-air,glm-4-airx,glm-4-long,glm-4-flashx,glm-4-flash in the model text box area.
  5. Then select the model you want to use in the model selection; glm-4-flash is free if you want a no-cost option. If the test passes, keep the settings.

Baichuan Intelligent AI Usage

  1. Menu Bar -- Translation Settings -- OpenAI ChatGPT API Settings Interface
  2. Fill in https://api.baichuan-ai.com/v1 in the API interface address.
  3. Fill in the API Key obtained from the Baichuan Intelligent platform in the SK field, which can be obtained from this website: https://platform.baichuan-ai.com/console/apikey
  4. Fill in Baichuan4,Baichuan3-Turbo,Baichuan3-Turbo-128k,Baichuan2-Turbo in the model text box area.
  5. Then select the model you want to use in the model selection; if the test passes, keep the settings.

01.AI

Official website: https://lingyiwanwu.com

API KEY acquisition address: https://platform.lingyiwanwu.com/apikeys

API URL: https://api.lingyiwanwu.com/v1

Available model: yi-lightning

Alibaba Bailian

Alibaba Bailian is an AI model marketplace that provides all Alibaba-series models as well as models from other manufacturers, including deepseek-r1.

Official website address: https://bailian.console.aliyun.com

API KEY (SK) acquisition address: https://bailian.console.aliyun.com/?apiKey=1#/api-key

API URL: https://dashscope.aliyuncs.com/compatible-mode/v1

Available models: Many, see https://bailian.console.aliyun.com/#/model-market for details

Silicon Flow

Another AI marketplace similar to Alibaba Bailian, providing mainstream domestic models, including deepseek-r1

Official website address: https://siliconflow.cn

API KEY (SK) acquisition address: https://cloud.siliconflow.cn/account/ak

API URL: https://api.siliconflow.cn/v1

Available models: Many, see https://cloud.siliconflow.cn/models?types=chat for details

Note: Silicon Flow provides the Qwen/Qwen2.5-7B-Instruct free model, which can be used directly without spending money

ByteDance Volcano Ark

An AI marketplace similar to Alibaba Bailian. In addition to the Doubao series of models, it also offers some third-party models, including deepseek-r1.

Official website: https://www.volcengine.com/product/ark

API KEY (SK) acquisition address: https://console.volcengine.com/ark/region:ark+cn-beijing/apiKey

API URL: https://ark.cn-beijing.volces.com/api/v3

MODELS: Many, see https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW for details

Note: ByteDance Volcano Ark's compatibility with the OpenAI SDK is unusual: you cannot fill in a model name directly. Instead, you must first create an inference endpoint in the Volcano Ark console and select the model for that endpoint, then fill in the inference endpoint ID wherever a model name is required, i.e. in the software. If you find this troublesome, you can skip this provider; aside from a slightly lower price, it has no particular advantages. See how to create an inference endpoint: https://www.volcengine.com/docs/82379/1099522
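In practice, the Volcano Ark quirk just means the "model" field of the OpenAI-style request carries the endpoint ID instead of a model name. A small sketch of the difference (the endpoint ID below is a made-up placeholder; copy the real one from your Ark console after creating the endpoint):

```python
messages = [{"role": "user", "content": "hello"}]

# Most OpenAI-compatible providers: put the model name in "model".
normal_payload = {"model": "deepseek-chat", "messages": messages}

# Volcano Ark: put the inference endpoint ID there instead.
# "ep-xxxxxxxx-yyyy" is a placeholder, not a real ID.
ark_payload = {"model": "ep-xxxxxxxx-yyyy", "messages": messages}

print(ark_payload["model"])
```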

Precautions:

  1. Most AI translation channels limit the number of requests per minute. If an error message indicates that the request frequency has been exceeded, click "Translation Channel ↓" on the software's main interface and set the pause seconds to 10 in the pop-up window. The software will then wait 10 seconds after each translation before sending the next request, at most 6 requests per minute, to stay under the limit.

  2. If the selected model is not capable enough (locally deployed models in particular are constrained by hardware and are usually small), it may fail to return translations in the required format, and the results may contain many blank lines. In that case, try a larger model, or open Menu -- Tools/Options -- Advanced Options and uncheck "Send complete subtitle content when using AI translation".
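The 10-second pause described above amounts to a simple pacing loop: sleep between requests so that no more than 6 go out per minute. A rough sketch, where translate_one stands in for whatever actually calls the AI channel (the sleep function is injectable so the pacing logic can be demonstrated without real waiting):

```python
import time

PAUSE_SECONDS = 10  # 60s / 10s pause = at most 6 requests per minute

def translate_all(lines, translate_one, sleep=time.sleep):
    """Translate lines one by one, pausing between requests.

    translate_one is a stand-in for the real API call;
    sleep defaults to time.sleep but can be replaced for testing.
    """
    results = []
    for i, line in enumerate(lines):
        results.append(translate_one(line))
        if i < len(lines) - 1:  # no need to wait after the last line
            sleep(PAUSE_SECONDS)
    return results

# Demo with a fake translator and a recorded (instant) sleep:
waits = []
out = translate_all(["a", "b", "c"], lambda s: s.upper(), sleep=waits.append)
print(out)    # ['A', 'B', 'C']
print(waits)  # [10, 10]
```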



Use ollama to locally deploy the Tongyi Qianwen large model

If you have some hands-on ability, you can also deploy a large model locally and then use it for translation. The following takes Tongyi Qianwen (Qwen) as an example to walk through deployment and usage.

1. Download the installer and run it

Open the website https://ollama.com/download

Click to download. After the download is complete, double-click to open the installation interface and click Install to complete.

After completion, a black or blue window will pop up automatically. Type ollama run qwen and press Enter; the Tongyi Qianwen model will be downloaded automatically.

Wait for the model to finish downloading; no proxy is needed and the speed is quite fast.

After the model is downloaded, it runs directly. When the progress reaches 100% and the word "success" is displayed, the model is running, which means the installation and deployment of the Tongyi Qianwen large model is complete and you can use it happily. Isn't it super simple?

The default interface address is http://localhost:11434

If the window is closed, how do you open it again? That's also very simple: click the computer's Start menu, find "Command Prompt" or "Windows PowerShell" (or press Win + Q and search for cmd), click to open it, and enter ollama run qwen.

2. Use it directly in the console command window

As shown in the figure, when this interface is displayed, you can actually enter text directly in the window to start using it.

3. Of course, this interface may not be very friendly, so let's get a friendly UI

Open the website https://chatboxai.app/zh and click Download

After downloading, double-click and wait for the interface window to open automatically

Click "Start Settings", and in the pop-up floating layer, click the model at the top, select "Ollama" in the AI model provider, fill in the API domain name address http://localhost:11434, and select Qwen:latest in the model drop-down menu, then save it.

After saving, the chat interface is displayed; use your imagination and use it freely.

4. Fill in the API in the video translation and dubbing software

  1. Open Menu -- Settings -- Compatible with OpenAI and local large models, add the model name qwen in the middle text box, and then select the model

  2. Fill in http://localhost:11434/v1 in the API URL, and fill in anything for the SK, such as 1234

  3. Test whether it is successful; if so, save it and start using it
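Ollama's /v1 endpoint follows the same OpenAI-style convention as the cloud providers above, and it does not validate the key, which is why an arbitrary SK such as 1234 works. A small sketch constructing (not sending) such a request against the local server:

```python
import json
from urllib import request

payload = {
    "model": "qwen",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = request.Request(
    url="http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Ollama ignores the key, so any dummy value is fine.
        "Authorization": "Bearer 1234",
    },
    method="POST",
)
print(req.full_url)  # http://localhost:11434/v1/chat/completions
# With Ollama running, urllib.request.urlopen(req) returns the completion JSON.
```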

5. What other models can be used

In addition to Tongyi Qianwen, many other models can be used, and the method is just as simple: a single command, ollama run <model name>.

Open this address: https://ollama.com/library to see all the model names. Copy the name of the one you want to use, then execute ollama run <model name>.

Remember how to open the command window? Click the start menu and find Command Prompt or Windows PowerShell

For example, I want to install the openchat model

Open Command Prompt, enter ollama run openchat, press Enter and wait until Success is displayed.
