RSS Parrot

BETA

🦜 Xenova / @xenovacom

@xcancel.com.xenovacom@rss-parrot.net

I'm an automated parrot! I relay a website's RSS feed to the Fediverse. Every time a new post appears in the feed, I toot about it. Follow me to get all new posts in your Mastodon timeline! Brought to you by the RSS Parrot.

---

Twitter feed for: @xenovacom. Generated by xcancel.com

Is this your feed and you don't want it here? Just e-mail the birb.

Site URL: xcancel.com/xenovacom

Feed URL: rss.xcancel.com/xenovacom/rss

Posts: 10

Followers: 1

R to @xenovacom: I can't wait to see what the community builds with it! 🤗 Links: 📄 Blog post: https://huggingface.co/blog/ibm-granite/granite-4-nano 🚀 Demo (+ source code): https://huggingface.co/spaces/ibm-granite/Granite-4.0-Nano-WebGPU

Published: October 28, 2025 17:06

IBM just released Granite-4.0 Nano, their smallest LLMs ever (300M & 1B)! 😍 The models demonstrate remarkable instruction following and tool calling capabilities, and can even run locally in-browser! This means they can interact with websites and call browser APIs for you! 🤯

Published: October 28, 2025 17:06

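As a rough illustration of what "running locally in-browser" means here, below is a minimal Transformers.js sketch for a small instruction-tuned model on WebGPU. The model ID is a placeholder assumption, not the actual Granite-4.0 Nano repository name; the linked blog post and demo have the real weights and wiring.

```js
import { pipeline } from "@huggingface/transformers";

// Load a small instruction-tuned model and run it on WebGPU.
// The model ID below is a placeholder (assumption) — check the linked
// demo/collection for the ONNX weights actually published for Granite-4.0 Nano.
const generator = await pipeline(
  "text-generation",
  "onnx-community/granite-4.0-nano-example", // placeholder ID (assumption)
  { device: "webgpu", dtype: "q4" }          // quantized weights keep the download small
);

const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Summarize what WebGPU is in one sentence." },
];

const output = await generator(messages, { max_new_tokens: 128 });
// The pipeline returns the full chat history; the last message is the model's reply.
console.log(output[0].generated_text.at(-1).content);
```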

R to @xenovacom: Just remember: nanochat isn't meant to be the most powerful LLM out there. Rather, it's designed as an educational resource to show you how to build a full-stack LLM from start to finish (< $1000). Enjoy! 🤗 Try it out yourself (+ source code): https://huggingface.co/spaces/webml-community/nanochat-webgpu

Published: October 20, 2025 22:23

BOOM! 💥 Today I added WebGPU support for @karpathy's nanochat models, meaning they can run 100% locally in your browser (no server)! The d32 version runs at over 50 tps on my M4 Max 🚀 Pretty wild that you can now deploy AI applications using just a single index.html file 😅

Published: October 20, 2025 22:23

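The "single index.html file" claim amounts to loading Transformers.js from a CDN inside one HTML page and fetching weights from the Hub at runtime. Here is a hedged sketch of what that could look like; the model ID is a placeholder assumption, and the real nanochat WebGPU setup is in the linked Space.

```html
<!DOCTYPE html>
<html>
  <body>
    <pre id="out">Loading model…</pre>
    <script type="module">
      // Everything ships in this one file: the library comes from a CDN
      // and the model weights are downloaded from the Hugging Face Hub.
      import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";

      // Placeholder model ID (assumption) — see the linked Space for the real one.
      const generator = await pipeline("text-generation", "onnx-community/nanochat-example", {
        device: "webgpu",
      });

      const messages = [{ role: "user", content: "Hello, nanochat!" }];
      const output = await generator(messages, { max_new_tokens: 64 });
      document.getElementById("out").textContent =
        output[0].generated_text.at(-1).content;
    </script>
  </body>
</html>
```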

Introducing Granite Docling WebGPU 🐣 State-of-the-art document parsing 100% locally in your browser! 🤯 🔐 No data sent to a server (private & secure) 💰 Completely free... forever! 🔂 Docling ecosystem enables conversion to HTML, Markdown, JSON, and more! Try out the demo! 👇

Published: October 7, 2025 19:44

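For a sense of how such a document-parsing model might be invoked from the browser, here is a heavily hedged sketch using the generic Transformers.js image-to-text pipeline. Both the model ID and the choice of pipeline task are assumptions; the linked demo Space contains the actual implementation.

```js
import { pipeline } from "@huggingface/transformers";

// Rough sketch of in-browser document parsing. Whether granite-docling is
// exposed through the generic "image-to-text" task, and the exact model ID,
// are assumptions — the linked demo Space has the real wiring.
const parser = await pipeline(
  "image-to-text",
  "onnx-community/granite-docling-example", // placeholder ID (assumption)
  { device: "webgpu" }
);

// Pass a page image (URL, File, or canvas). The model's structured output is
// what the Docling ecosystem then converts to HTML, Markdown, or JSON.
const result = await parser("scanned-page.png"); // placeholder image path
console.log(result[0].generated_text);
```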

IBM just released Granite 4.0, their latest series of small language models! These models excel at agentic workflows (tool calling), document analysis, RAG, and more. 🚀 The "Micro" (3.4B) model can even run 100% locally in your browser on WebGPU, powered by 🤗 Transformers.js!

Published: October 2, 2025 16:16

R to @xenovacom: 🗂️ Model collection: https://huggingface.co/collections/ibm-granite/granite-40-language-models-6811a18b820ef362d9e5a82c 🔗 WebGPU demo + source code: https://huggingface.co/spaces/ibm-granite/Granite-4.0-WebGPU

Published: October 2, 2025 16:16

πŸ—‚οΈ Model collection: huggingface.co/collections/i… πŸ”— WebGPU demo + source code: huggingface.co/spaces/ibm-gr…

RT by @xenovacom: Do you remember #Jarvis, the AI Assistant from Iron Man? I just built one entirely in #JavaScript as an #MCP-Client powered by #TransformersJS. It's a local-first assistant that runs completely in the browser on your machine. No API calls, no cloud dependency!

Published: September 30, 2025 09:35

RT by @xenovacom: Transformers.js lets you run AI models offline in the browser with ONNX + WebGPU. Build a chatbot with Llama 3.2, optimize performance, and try local AI tasks like object detection & background removal.

Published: September 29, 2025 14:34

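As an example of the "local AI tasks" mentioned above, object detection takes only a few lines with Transformers.js; the sketch below uses a DETR checkpoint that appears in the library's own examples, with a placeholder image path.

```js
import { pipeline } from "@huggingface/transformers";

// Object detection entirely in the browser with a DETR checkpoint.
const detector = await pipeline("object-detection", "Xenova/detr-resnet-50");

// "photo.jpg" is a placeholder for any image URL, File, or canvas element.
const detections = await detector("photo.jpg", { threshold: 0.9 });

// Each detection has the shape { label, score, box: { xmin, ymin, xmax, ymax } }.
console.log(detections);
```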