The easiest way I know of to check any machine is to put another router or machine in front of it with a whitelist firewall or some way of logging DNS traffic. You just need to spot the unexpected address in the list.
DNS filtering usually only filters incoming packets, but for bot-style traffic that should still catch the issue.
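If you want a rough idea of what the logging side could look like, here is a minimal sketch using scapy on a Linux box sitting upstream of the device. The interface name, the BPF filter, and running it as root are all assumptions on my part, not part of any particular setup:

```python
# Minimal sketch: print every DNS query seen on the monitoring interface.
# Assumes scapy is installed and the script runs as root on the upstream box.
from scapy.all import sniff, DNS, DNSQR, IP

def log_query(pkt):
    # qr == 0 marks a query rather than a response
    if pkt.haslayer(DNSQR) and pkt.haslayer(IP) and pkt[DNS].qr == 0:
        name = pkt[DNSQR].qname.decode(errors="replace")
        print(f"{pkt[IP].src} asked for {name}")

# "eth0" is a placeholder; use whatever interface the device's traffic crosses
sniff(iface="eth0", filter="udp port 53", prn=log_query, store=False)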
In general, most routers run everything from a serial flash chip on the board, usually 8, 16, or 32 megabytes. They have a simple bootloader like U-Boot, which is what loads the operating system. These devices have a UART serial port on the PCB, and you can use a USB-to-serial UART adaptor to see what is happening inside the device. Even with a proprietary OS, you are still likely to see the pre-init boot sequence that the bootloader prints to the terminal. Most operating systems also print information to this interface, at least on the couple dozen junk devices I have been given and messed around with. I make a little mount for a USB-to-serial adaptor and add it to all of my routers when new, so I only need to plug in USB to get to the internal bootloader and the tty terminal interface of OpenWRT. You will need to know the default baud rate of the device; it is probably listed somewhere online, or it can be guessed as one of the common rates at or above 9600.
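A simple way to watch that console from a script, rather than a terminal program, is pyserial. The device path and the 115200 baud rate below are assumptions; if the output is garbage, try the other common rates:

```python
# Sketch of watching a router's UART console through a USB-to-serial adaptor.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
    while True:
        line = port.readline()
        if line:
            # Bootloader and kernel output is plain text; replace bad bytes
            print(line.decode(errors="replace"), end="")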
Getting into this further gets complicated. It is probably better to look for any CVE that is relevant to the device or software and work backwards. Check whether the software updates actually fixed each CVE or just obscured the risk. If an issue was not fixed, that is where to look to see if someone has exploited the device. Ultimately, any malware needs clock cycles from the CPU scheduler, so it must exist as a process or some other way of executing code from memory that is not accounted for.
This is getting to the edge of what I have messed around with and understand. There may be a way to get a memory map that includes unused pages and compare that with a hex dump of the flash memory. That is outside your scope with a proprietary OS, but it hopefully frames the abstract scope of what is possible on this class of device when you have an open source stack. The main advantage of this kind of device and issue is that you can physically remove the flash chip and then see and manipulate every page and memory location. The device likely doesn't have microcode loaded into the CPU(s) that would make it challenging to determine what is going on.
There is probably an easier way, but a hex dump of the current system can be hashed and compared against the factory image at the same update version to see if any differences are present. It is likely that any exploit will include a string with the address it connects to somewhere in flash memory. It could be obfuscated through encryption or a cipher, but pulling the strings out of the hex dump and grepping for "http" is a simple way to look for issues.
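Here is a rough sketch of both checks in one script. The file names are placeholders for whatever dumps you actually have, and in practice the writable config/overlay partitions will legitimately differ, so you would probably want to compare the read-only partitions rather than the whole image:

```python
# Sketch: hash a dumped flash image against a known-good factory image, then
# scan the dump for printable strings containing "http".
import hashlib
import re

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print("device :", sha256_of("device_dump.bin"))      # placeholder file name
print("factory:", sha256_of("factory_image.bin"))    # placeholder file name

# Roughly what `strings dump.bin | grep http` does: find runs of printable
# ASCII and keep the ones that mention http, with their offsets.
data = open("device_dump.bin", "rb").read()
for match in re.finditer(rb"[ -~]{6,}", data):
    s = match.group().decode("ascii")
    if "http" in s.lower():
        print(hex(match.start()), s)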
The OpenWRT forum is a good general source. The people behind the bootloaders for these devices are also Linux kernel developers and are active on the OpenWRT forum.





llama.cpp is at the core of almost all offline, open weights models. The server it creates is OpenAI API compatible. Oobabooga's Textgen WebUI is more GUI oriented but is based on llama.cpp. Oobabooga has the setup for loading models with a split workload between the CPU and GPU, which makes larger GGUF quantized models possible to run. Llama.cpp provides this feature; Oobabooga just exposes it. The model loading settings and softmax sampling settings take some trial and error to dial in well. It helps if you have a way of monitoring GPU memory usage in real time; for example, I use a script that appends GPU memory usage to my terminal window title bar until inference time.
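To give a concrete idea of what "OpenAI API compatible" buys you, here is a minimal sketch of talking to a locally running llama.cpp server from the standard library. The port is llama.cpp's default of 8080 and the model name is a free-form placeholder; adjust both for your own setup:

```python
# Sketch: query a local llama.cpp server over its OpenAI-compatible endpoint.
import json
import urllib.request

payload = {
    "model": "local",  # placeholder; a single loaded model ignores this
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])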
Ollama is another common project people use for offline open weights models, and it also runs on top of llama.cpp. It is a lot easier to get started with in some instances, and several projects use Ollama as the baseline for "Hello World!" type stuff. It has pretty good model loading and softmax defaults without any fuss, but it does this at the expense of only running on GPU or CPU, never both in a split workload. This may seem great at first, but if you never experience running the much larger quantized models in the 30B-140B range, you are unlikely to have success or a positive experience overall. The much smaller models in the 4B-14B range are all that are likely to run fast enough on your hardware AND completely load into your GPU memory if you only have 8GB-24GB. Most of the newer models are actually Mixture of Experts architectures, which means it is like loading ~7 models initially but only inferencing two of them at any one time. All you need is enough system memory, or the DeepSpeed package (which uses the disk drive for the excess space required), to load these larger models. Larger quantized models are much, much smarter and more capable. You also need llama.cpp if you want to use function calling for agentic behaviors. Look into the agentic API and the pull request history in this area of llama.cpp before selecting which models to test in depth.
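For what the CPU/GPU split looks like in practice, here is a sketch using the llama-cpp-python bindings. The model path, layer count, and context length are all assumptions to tune by trial and error while watching GPU memory, as mentioned above:

```python
# Sketch: partial GPU offload so a larger GGUF quant can run on modest VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-30b-model.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=24,   # layers offloaded to the GPU; the rest stay on the CPU
    n_ctx=8192,        # context length; more context costs more memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why do MoE models load big but run light?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])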
Huggingface is the go-to website for sharing and sourcing models. It is heavily integrated with GitHub, so it is probably just as toxic long term, but I do not know of a real FOSS alternative for that one. Hosting models is massive I/O for a server.
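If you want to pull a single GGUF file from there without the web UI, the huggingface_hub client is the usual route. The repo and file names below are placeholders for whatever quantized model you are actually after:

```python
# Sketch: download one quantized model file from Hugging Face.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="someuser/some-model-GGUF",   # placeholder repo
    filename="some-model.Q4_K_M.gguf",    # placeholder file
)
print("downloaded to", path)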