
🔖 No cloud. No internet. No coding.
🔖 Just you, your laptop, and 100+ powerful AI models running locally.
Interesting for beginners, but I think calling this "building a chatbot" sets the bar too low. You're not building; you're wiring pre-built models together through GUI steps. There's value in accessibility, but let's not confuse orchestration with engineering. This isn't "AI development", it's tool usage. A real chatbot comes from training models, building architectures, and understanding the core.
Yes, it is. You can also train a model on your own dataset. Try exploring Model HQ once; it's geared more toward enterprise use.
But you can't use it on a PC from 2008, like an AMD Phenom™ Triple-Core Processor (2.40 GHz) with 2 GB of DDR2 RAM (total) and no GPU. With low-level mathematics and everything written from scratch, you can train even a Transformer model (as described in the paper) with a deep stack on that PC in minutes.
The specifications are already mentioned; that's why. Everything needs an upgrade eventually, and running AI models locally and privately, without internet, was still just an idea before Model HQ launched.
If it provides no comfort to the user, then there's no sense in launching it. 😉
Model HQ makes things easier, sure, but true capability isn't tied to hardware upgrades. It's about what you can build when all you have are the fundamentals. When you design a model yourself using low-level math and statistics, without bloated frameworks, the entire model can be under 150 KB, versus Model HQ's multi-GB setups that demand 16 GB of RAM just to run.
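To make that size argument concrete, here's a minimal sketch in plain NumPy. It's a character-level bigram language model, not a full Transformer, and the corpus and hyperparameters are purely illustrative; the point is that the entire parameter matrix is a couple of kilobytes and trains in seconds, even on old hardware:

```python
# Minimal from-scratch language model with hand-derived gradients.
# Illustrative sketch only: a bigram model, not a Transformer.
import numpy as np

text = "hello world, hello local models"          # toy corpus (illustrative)
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
V = len(chars)                                     # vocabulary size

# Training pairs: each character predicts the next one.
xs = np.array([stoi[c] for c in text[:-1]])
ys = np.array([stoi[c] for c in text[1:]])

W = np.zeros((V, V))                               # the entire model: V x V logits
lr = 0.5

for step in range(200):
    logits = W[xs]                                 # (N, V); fancy indexing copies
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)

    loss = -np.log(probs[np.arange(len(xs)), ys]).mean()

    # Hand-derived softmax + cross-entropy gradient: dL/dlogits = probs - onehot(y)
    grad = probs
    grad[np.arange(len(xs)), ys] -= 1
    grad /= len(xs)

    dW = np.zeros_like(W)
    np.add.at(dW, xs, grad)                        # accumulate per-row gradients
    W -= lr * dW

print(f"final loss: {loss:.3f}; model size: {W.nbytes} bytes")  # a few KB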
I totally get you, but it's just not limited to a simple chatbot.
You can do RAG, create agents, build multi-document RAG, and use plenty of other features. The RAM requirement scales with the model size and runtime behaviour (rough sketch below).
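On the RAM point, here's a back-of-envelope sketch. The 1.3x overhead factor is my own rule of thumb for activations, KV cache, and runtime buffers, not a Model HQ figure:

```python
# Rough RAM estimate for running a local LLM.
# Assumption (mine, not Model HQ's): ~30% overhead on top of the weight file.
def estimate_ram_gb(model_file_gb: float, overhead_factor: float = 1.3) -> float:
    return model_file_gb * overhead_factor

for size_gb in (3, 8, 32):
    print(f"{size_gb} GB model -> ~{estimate_ram_gb(size_gb):.1f} GB RAM at peak")
```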
Please read this once: dev.to/llmware/how-to-run-ai-model...
Really appreciate how you broke down the steps; it's wild seeing what local AI can do now with zero coding needed.
Curious, have you noticed any surprising use cases pop up since you’ve started using Model HQ offline?
Thanks for reading.
Yup, there are so many; that's why we built Model HQ.
Will try it out for sure! Nice explanation!🙌
Great!
This is extremely impressive, honestly. I've wanted to keep things offline for ages, and you made it look so doable.
Yuss, try it once!!
What are your thoughts on this?
I'm planning to try out Model HQ. I recently downloaded an LLM and was running it through the terminal. Model HQ seems like a replacement for that, but it's quite resource-intensive. I was using a 3 GB model, and even with that, RAM usage spiked to 100%. It makes me wonder how much RAM would be needed for models over 32 GB 🤯. It gets really challenging when your project itself is also consuming a lot of memory.
Hi @thevaibhavmaurya, running an LLM is quite memory-intensive. However, we are able to run models up to 32 GB on device by optimizing them for the specific hardware you are using. But to your point, running models on device does consume a lot of memory, period. Our product itself does not consume much memory: it is barely 80 MB on Qualcomm devices and 140 MB on Intel (about the size of a PowerPoint or PDF presentation).
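If it helps while you experiment, here's a small hedged sketch (it assumes psutil is installed, and the 1.3x overhead factor is a rule of thumb, not a Model HQ number) that checks available memory before loading a model, so a spike to 100% doesn't freeze your machine:

```python
# Check free RAM before loading a local model.
# Assumptions: psutil is installed (pip install psutil); the 1.3x
# overhead factor is a rough rule of thumb, not a Model HQ figure.
import psutil

def can_fit(model_file_gb: float, overhead_factor: float = 1.3) -> bool:
    available_gb = psutil.virtual_memory().available / 1024**3
    needed_gb = model_file_gb * overhead_factor
    print(f"need ~{needed_gb:.1f} GB, {available_gb:.1f} GB available")
    return available_gb >= needed_gb

if not can_fit(3.0):
    print("Consider a smaller or more aggressively quantized model.")
```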
Thanks for the insight! Looking forward to exploring it.
great