AMD may not have its own chatbot, but it will let you run one

Last Updated on March 7, 2024
Following OpenAI’s highly publicized release of ChatGPT, other organizations have launched chatbots of their own. But despite the continued hubbub surrounding generative AI, one company has so far avoided jumping on the hype train: AMD. That doesn’t mean the Ryzen manufacturer is entirely against the idea, though — it recently published a guide to running a Large Language Model (LLM) on your AMD PC.

Based on AMD’s step-by-step guide, the setup functions similarly to Nvidia’s Chat with RTX: because the model runs entirely on your machine without needing an internet connection, it’s more private than your typical online chatbot. Of course, it won’t have the added convenience of being available on any machine à la ChatGPT. But if data protection is one of your main concerns, a chatbot that keeps everything local is hard to beat.

An AMD-native chatbot to rival Nvidia’s Chat with RTX

If you’re looking to try out this chatbot on your AMD machine, you’re in luck: according to the previously mentioned guide, you won’t have to do any heavy tinkering. The first step is to download the correct version of LM Studio. There are two separate versions: one for AMD Ryzen processors and another for Radeon RX 7000 Series GPUs. Once the download finishes, run the installer, open LM Studio, and paste one of the following search terms: TheBloke/OpenHermes-2.5-Mistral-7B-GGUF or TheBloke/Llama-2-7B-Chat-GGUF. The former fetches the Mistral 7B LLM, while the latter fetches Llama 2 7B.

You can also try out other LLMs if you like. Afterward, download the Q4 K M model file from the right-hand panel. Finally, head to the Chat tab, press the drop-down menu at the top of the page, and select the model. As soon as it’s loaded, you can start sending it prompts. If you’re running a Radeon GPU, however, AMD recommends checking “GPU offload” on the right-hand panel, moving the slider to Max, and ensuring AMD ROCm is shown as the detected GPU type.
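If you’re curious what LM Studio actually downloads at that “Q4 K M” step, the sketch below maps the two suggested search terms to the quantized .gguf filenames you’d expect to see. It assumes TheBloke’s usual GGUF naming convention (lowercased model name plus a quantization suffix); LM Studio resolves all of this for you in the GUI, so this is purely illustrative.

```python
def q4_k_m_filename(repo_id: str) -> str:
    """Map a Hugging Face repo ID to the expected Q4_K_M GGUF filename.

    Assumes TheBloke's naming convention: lowercased model name
    (minus the "-GGUF" suffix) plus ".Q4_K_M.gguf".
    """
    model = repo_id.split("/")[1]               # e.g. "Llama-2-7B-Chat-GGUF"
    base = model.removesuffix("-GGUF").lower()  # e.g. "llama-2-7b-chat"
    return f"{base}.Q4_K_M.gguf"


# The two search terms from AMD's guide:
for repo in ("TheBloke/OpenHermes-2.5-Mistral-7B-GGUF",
             "TheBloke/Llama-2-7B-Chat-GGUF"):
    print(repo, "->", q4_k_m_filename(repo))
# -> openhermes-2.5-mistral-7b.Q4_K_M.gguf and llama-2-7b-chat.Q4_K_M.gguf
```

Q4_K_M is a roughly 4-bit quantization, which is what lets a 7-billion-parameter model fit comfortably in consumer-grade memory in the first place.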

Nico is a Tech News Writer for PC Guide. He is also adept at finding a good deal every now and then, stemming from his days penny-pinching as a broke college kid.