Global Latency Map – using RIPE Atlas

We use RIPE Atlas to build a global network latency map. Atlas probes are deployed worldwide and let you run measurements using your credits. A self-developed Python automation program drives the tests through the RIPE Atlas API.
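The automation boils down to posting a measurement definition to the Atlas REST API. Here is a minimal sketch, not our full program – the API key, target, and probe count below are placeholders you would replace with your own:

```python
# Sketch: create a one-off ping measurement through the RIPE Atlas REST API.
# ATLAS_API_KEY and the target are placeholders -- you need your own key
# and enough credits for the measurement to be accepted.
import json
import urllib.request

ATLAS_API_KEY = "your-api-key-here"  # placeholder, not a real key

def build_ping_measurement(target: str, probe_count: int = 10) -> dict:
    """Build the JSON body for a one-off worldwide ping measurement."""
    return {
        "definitions": [{
            "target": target,
            "description": f"Latency map ping to {target}",
            "type": "ping",
            "af": 4,          # IPv4
            "packets": 3,
        }],
        "probes": [{
            "requested": probe_count,
            "type": "area",
            "value": "WW",    # worldwide probe selection
        }],
        "is_oneoff": True,
    }

def submit(body: dict) -> bytes:
    """POST the measurement; this is what actually spends credits."""
    req = urllib.request.Request(
        "https://atlas.ripe.net/api/v2/measurements/",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Key {ATLAS_API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    body = build_ping_measurement("ping.ripe.net")
    print(json.dumps(body, indent=2))  # inspect before spending credits
    # submit(body)  # uncomment once ATLAS_API_KEY is set
```

The API returns the new measurement IDs, and the results can then be fetched per-probe to plot latency on the map.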

Visit the following link to see our work:

https://www.bgptrace.com/atlas/ping_map.html

Can Starlink be used as a testable probe?

More Information: https://atlas.ripe.net/

#ripe #Internet #measurement #python #automation #latency #starlink #atlas

Deepseek 1.5b vs 8b version

Well, we all expect the 1.5b and 8b models to differ in knowledge.

We ran a test:
1. 1.5b on a Raspberry Pi 4B with 4 GB RAM.
2. 8b on a virtual machine with an AMD Radeon GPU and 16 GB RAM, running Ubuntu.

We asked only one question:

“what is the difference between you and chatGPT”

1. 1.5b version

2. 8b version

Of course, the 8b model's knowledge base will be better. However, our main concern is resource usage. Can the Raspberry Pi's CPU handle this efficiently?
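One way to put a number on that concern is generation speed. Assuming both models are served through Ollama, its `/api/generate` response reports `eval_count` (tokens generated) and `eval_duration` (in nanoseconds), so a tokens-per-second figure is a one-liner – a minimal sketch with made-up numbers:

```python
# Sketch: derive tokens-per-second from an Ollama response.
# Assumes the models are served through Ollama, whose /api/generate
# response includes eval_count (tokens) and eval_duration (nanoseconds).

def tokens_per_second(response: dict) -> float:
    """Generated tokens per second for one Ollama completion."""
    tokens = response["eval_count"]
    seconds = response["eval_duration"] / 1e9  # ns -> s
    return tokens / seconds

# Illustrative numbers only: 120 tokens generated in 60 s -> 2.0 tok/s,
# the kind of rate a CPU-only board might manage.
sample = {"eval_count": 120, "eval_duration": 60_000_000_000}
print(tokens_per_second(sample))  # -> 2.0
```

Running the same prompt on both machines and comparing this number makes the CPU-vs-GPU gap concrete.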

#deepseek #AI #CPU #raspberrypi #GPU #nvidia #CUDA #AMD

Deepseek on a Raspberry Pi?????

Tech folks are interested in how AI and LLM models perform on low-power IoT devices such as the Raspberry Pi.

But??!!!!

NO GPU!!!!!!!!!!!!

How do you run an AI model????

OK, we won't go into how to install and run Ollama here.

We tried the 1.5b version of Deepseek on our Pi 4 with 4 GB RAM.

Amazingly, it works! However, don't expect the response time and token rate to be fast.
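Measuring that response time takes only a few lines against Ollama's local HTTP API – a minimal sketch, assuming Ollama is listening on its default `localhost:11434` and that the model tag `deepseek-r1:1.5b` matches what you pulled (both are assumptions):

```python
# Sketch: ask a local Ollama instance one question and time the answer.
# Assumes Ollama is running on localhost:11434 and the model tag
# "deepseek-r1:1.5b" matches what was pulled -- both are assumptions.
import json
import time
import urllib.request

def build_request(model: str, prompt: str) -> dict:
    """JSON body for a non-streaming /api/generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str,
        host: str = "http://localhost:11434") -> dict:
    """Send the prompt, return the response dict plus wall-clock seconds."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    result["wall_seconds"] = time.monotonic() - start
    return result

# Usage (requires a running Ollama daemon):
# answer = ask("deepseek-r1:1.5b",
#              "what is the difference between you and chatGPT")
# print(f"{answer['wall_seconds']:.1f}s  {answer['response'][:200]}")
```

On a CPU-only Pi, the wall-clock time for even a short answer makes the limitation obvious at a glance.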

With this kind of success, we can imagine other models running on CPU-based IoT devices. So, will home assistants adopt this widely?

Let's see…