Model Training on an AMD 16-Core CPU with 8 GB RAM in a Virtual Machine for Bitcoin Price Prediction – Part 2 – Updated

Continuing with Over 500,000 Data Points for Bitcoin (BTC) Price Prediction

Using a Python program, the first method I tried was SVR (Support Vector Regression) for prediction. However… how many time steps should I use for prediction? 🤔

Previously, I used a Raspberry Pi 4B (4GB RAM) for prediction, and… OH… 😩
I don’t even want to count the time again. Just imagine training a new model on a Raspberry Pi!

So, I switched to an AMD 16-core CPU with 8GB RAM running in a virtual machine to perform the prediction.

  • 60-step calculation: took 7 hours 😵
  • 120 steps: …man… still running after 20 hours! 😫 It finally finished after 33 hours!

Do I need an M4 machine for this? 💻⚡

ChatGPT provided another approach.
OK, let’s test it… I’ll let you know how it goes! 🚀
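To make the "steps" idea concrete, here is a minimal sketch of the sliding-window SVR setup described above. The helper name `make_windows`, the toy random-walk series, and the SVR parameters are my own illustration, not from the original experiment; the real run used the full 500,000-point dataset, which is why training took hours.

```python
# Sketch of sliding-window SVR: each row of X is `steps` consecutive
# prices, and the target y is the very next price.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

def make_windows(prices, steps):
    """Frame a 1-D price series as (X, y) with a 1-step-ahead target."""
    X = np.lib.stride_tricks.sliding_window_view(prices[:-1], steps)
    y = prices[steps:]
    return X, y

# Toy random-walk series instead of the 500,000-point BTC dataset.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 2_000)) + 100

X, y = make_windows(prices, steps=60)
X_scaled = StandardScaler().fit_transform(X)

# RBF-kernel SVR; its training cost grows super-linearly with the
# number of samples, which is why a 500k-point run takes hours.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X_scaled, y)
print(model.predict(X_scaled[-1:]))  # next-price prediction for the last window
```

Swapping `steps=60` for 120 roughly doubles the feature dimension, which is where the extra hours come from.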

🧪 Quick Example of the Effect of More Time Steps

Time Step (X Length) | Predicted Accuracy | Notes
30  | ⭐⭐⭐   | Quick but less accurate for long-term trends.
60  | ⭐⭐⭐⭐  | Balanced context and performance.
120 | ⭐⭐⭐⭐½ | Better for long-term trends but slower.
240 | ⭐⭐    | Risk of overfitting and slower training.
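To make the table's trade-off concrete, here is a tiny illustration of my own (the helper `window_shape` is hypothetical, not from the post): a longer window gives slightly fewer training samples but many more features per sample, and SVR slows down on both counts.

```python
# Training-set shape as a function of the time-step (window) length,
# for a dataset of roughly 500,000 prices with a 1-step-ahead target.
def window_shape(n_points, steps):
    """Return (rows, columns) of X after sliding-window framing."""
    return (n_points - steps, steps)

for steps in (30, 60, 120, 240):
    rows, cols = window_shape(500_000, steps)
    print(f"steps={steps:3d}  X shape=({rows}, {cols})")
```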

#SVR #Prediction #Computing #AI #Step #ChatGPT #Python #Bitcoin #crypto #Cryptocurrency #trading #price #virtualmachine #vm #raspberrypi #ram #CPU #CUDA #AMD #Nvidia

Install and Run OLLAMA on a Linux Machine

So many tech folks have already shared how to install OLLAMA, so I won't go into too much detail. Just a brief set of steps for you.

  1. Prepare a machine with a good GPU, CPU, and > 16 GB RAM. (A Raspberry Pi can run Deepseek 1.5B; for others………. please check my last post.)
  2. Update your Linux package repositories:
    sudo apt-get update -y
    sudo apt-get upgrade -y

  3. Install Ollama with the following command:
    curl -fsSL https://ollama.com/install.sh | sh
  4. Run an LLM model. If you don't have the model on your machine, it will be downloaded automatically.
    ollama run <model>
    e.g.: ollama run deepseek-r1:8b

  5. The model will be downloaded to /usr/share/ollama/.ollama/models/
  6. Which models can you run? Check here:
    https://ollama.com/search
  7. The OLLAMA command line is quite similar to Docker's.
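As a quick sketch of that Docker resemblance (assuming OLLAMA is already installed), the everyday commands line up roughly like this:

```shell
ollama list                  # like `docker images` – list models on your machine
ollama pull deepseek-r1:8b   # like `docker pull`   – fetch a model without running it
ollama run deepseek-r1:8b    # like `docker run`    – start an interactive session
ollama ps                    # like `docker ps`     – show currently loaded models
ollama rm deepseek-r1:8b     # like `docker rmi`    – delete a local model
```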

PS: You can also install OLLAMA on Windows; please check the OLLAMA website.

Let's try running your own AI locally!

#OLLAMA #Model #AI #CPU #GPU #CUDA #RAM #RaspberryPI #Docker

Deepseek 1.5B vs. 8B Versions

Well, we all expect that the 1.5B and 8B versions will differ in knowledge.

We ran a test:
1. 1.5B on a Raspberry Pi 4B with 4 GB RAM.
2. 8B on a virtual machine with an AMD Radeon GPU and 16 GB RAM on Ubuntu.

We asked only one question:

“what is the difference between you and chatGPT”

  • 1. 1.5B version

  • 2. 8B version

Of course, the 8B model's knowledge base will be better. However, what concerns us most is resource usage: can a CPU-only Raspberry Pi process this efficiently?
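For anyone who wants to repeat the comparison programmatically rather than in the interactive shell, here is a small sketch that sends the same question to a local OLLAMA server through its REST API (default port 11434). The helper names `build_payload` and `ask` are mine, not from the post, and it assumes `ollama serve` is running with the model already pulled.

```python
# Query a local Ollama server's /api/generate endpoint (stdlib only).
import json
from urllib import request

def build_payload(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, prompt, host="http://localhost:11434"):
    """Send the prompt and return the model's full text response."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(f"{host}/api/generate", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs `ollama serve` running and the model pulled):
#   print(ask("deepseek-r1:1.5b", "what is the difference between you and chatGPT"))
#   print(ask("deepseek-r1:8b",   "what is the difference between you and chatGPT"))
```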

#deepseek #AI #CPU #raspberrypi #GPU #nvidia #CUDA #AMD