ollama
RT @osanseviero: https://luma.com/ollama-gemma4
ollama
RT @osanseviero: See you this Wednesday at the Ollama Gemma Meetup! 💎 https://twitter.com/osanseviero/status/2043836929477882301/photo/1
ollama
We are working through the night to continue to scale capacity for Ollama's cloud. Apologies if you see any performance degradation. https://twitter.com/ollama/status/2043619568959341027/photo/1
ollama
Ollama 0.20.6 is here with improved Gemma 4 tool calling! more improvements to come for Gemma 4! https://twitter.com/ollama/status/2043609267236913164/photo/1
ollama
MiniMax M2.7 is available on Ollama's cloud, and is licensed for commercial usage. Use it with OpenClaw: ollama launch openclaw --model minimax-m2.7:cloud Coding agents, such as Claude: ollama launch claude --model minimax-m2.7:cloud Chat with the model: ollama run…
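The command list in this announcement is cut off by the feed. A sketch of the full set, assuming the same pattern the account uses for its other cloud-model announcements (the final `ollama run` line is an assumed expansion of the truncated text):

```shell
# MiniMax M2.7 on Ollama's cloud, used with OpenClaw
ollama launch openclaw --model minimax-m2.7:cloud

# With a coding agent such as Claude Code
ollama launch claude --model minimax-m2.7:cloud

# Chat with the model directly (assumed expansion of the truncated "ollama run…")
ollama run minimax-m2.7:cloud
```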
ollama
RT @ronaldmannak: MLX creator @awnihannun sharing the story of MLX. Apple management called him right after the launch: “why didn’t you tel…
ollama
RT @awnihannun: There were some exceptionally cool demos from @ollama and omlx using MLX to run Qwen 3.5 and Gemma 4 on Apple silicon. The…
ollama
RT @minchoi: Google's Gemma 4 is pretty wild. You can now run it locally with OpenClaw in 3 steps. 1. Install Ollama 2. Pull Gemma 4 mode…
ollama
RT @twid: Watching @awnihannun at @ollama https://twitter.com/twid/status/2042425382859841926/photo/1
ollama
RT @parthsareen: mlx @ollama! https://twitter.com/parthsareen/status/2042428552126349374/photo/1
ollama
RT @sundarpichai: Lots of love for Gemma 4! Team just told me it’s already had 10M+ downloads since last week’s launch. Gemma models have n…
ollama
RT @ollama: GLM-5.1 is here! Try it on OpenClaw🦞🦞🦞 ollama launch openclaw --model glm-5.1:cloud Claude Code ollama launch claude --mo…
ollama
.@steipete has been an amazing supporter of open models and local models. This is even before the rise of OpenClaw. 🦞 Ollama is here to love. ❤️ Let's make the world a better place together.
ollama
RT @openclaw: OpenClaw 2026.4.7 🦞 🔮 openclaw infer 🎬 music + video editing 💾 session branch/restore 🔗 webhook-driven TaskFlows 🤖 Arcee, Ge…
ollama
RT @awnihannun: See you there!
ollama
We are hosting Ollama's MLX meetup this Thursday night (April 9th) at Ollama's office in Palo Alto at 6pm. Come meet amazing people! RSVP is required as space is very limited. Food & drinks will be available. More details: https://luma.com/mlx https://twitter.com/ollama/status/2041610283152797818/photo/1
ollama
GLM-5.1 is here! Try it on OpenClaw🦞🦞🦞 ollama launch openclaw --model glm-5.1:cloud Claude Code ollama launch claude --model glm-5.1:cloud Chat with the model ollama run glm-5.1:cloud
ollama
RT @MichaelGannotti: My @openclaw has been running Kimi K2.5 via @ollama all last night and this morning doing some serious content creatio…
ollama
RT @googlegemma: https://x.com/i/article/2041504032410222593
ollama
🦞Ollama's cloud is one of the best places to run OpenClaw. The $20 plan is enough for most day-to-day OpenClaw usage with open models! To make the switch, all you need to do is open the terminal and type: ollama launch openclaw Choose a model: kimi-k2.5:cloud…
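The switch described above, sketched as a terminal session. Only `kimi-k2.5:cloud` is visible before the post is truncated, so the rest of the model list is not shown here:

```shell
# Launch OpenClaw backed by Ollama's cloud
ollama launch openclaw

# When prompted, choose a cloud model, e.g.:
#   kimi-k2.5:cloud
# (further model options are truncated in the original post)
```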
ollama
Ollama's cloud is now the best place to run Gemma 4 in the cloud! Available through a subscription for developers and third-party integrations. 🦞OpenClaw ollama launch openclaw --model gemma4:31b-cloud Claude Code ollama launch claude --model gemma4:31b-cloud Run the model… https://twitter.com/ollama/status/2041238722914685336/photo/1
ollama
RT @GlennCameronjr: @osanseviero @ollama Lots of collaboration across Google orgs as well
ollama
RT @osanseviero: People underestimate the level of collaboration that needs to happen for a model such as Gemma 4 to land Before the launc…
ollama
RT @dennydotio: @ollama + GLM-5 > Opus 4.5
ollama
RT @lalopenguin: I made my brother this app cause he sells a lot of stuff on ebay ... we've been adding things to it that a small business…
ollama
RT @jackccrawford: I am indeed a loyal @AnthropicAI customer. 46 million tokens so far. And thanks to @ollama Cloud for my fav model. Most…
ollama
RT @jackccrawford: April 2026 ... No fooling ... @ollama https://twitter.com/jackccrawford/status/2040867243341570174/photo/1
ollama
RT @steipete: @seni0re @OpenAI We have folks from @ollama and @huggingface helping make local even better!
ollama
RT @wilsonnguy70384: @ollama I just sub and I can tell you this is a game changer. Gemma4 is working out of the box. Token refresh faster t…
ollama
RT @codymclain: @ollama been using their cloud for a few weeks now. the $20 tier actually handles way more than you'd expect, especially wi…
ollama
RT @davinder0110v: @1kartikkabadi1 @ollama Bro legit wtf. I even asked my agent to confirm the model which I was using because it was crazy…
ollama
RT @zRdianjiao: @KeridwenCodet @ollama It’s going to happen — you just need a bit more time.
ollama
RT @MichaelGannotti: Refresh with upped rates by @ollama ! Simply the best! https://twitter.com/MichaelGannotti/status/2040518895291785394/photo/1
ollama
RT @aaronglazer: @benjaminsehl @openclaw It works for me. I also have @ollama cloud connected and it’s 🔥.
ollama
@Prince_Canuma @spark_arena @WesEklund @Prince_Canuma Thank you for all the work you do! Here to just give you ❤️❤️❤️❤️
ollama
RT @steipete: @__roycohen @garrytan @sama OpenClaw is owned by me and soon transferred to the OpenClaw Foundation - which is not something…
ollama
All usage refresh has rolled out on Ollama's cloud! Let's go open models! It's time to give your 🦞 an open home.
ollama
RT @1kartikkabadi1: @ollama's cloud inference just got serious, hitting 91.9 tokens per second on GLM-5 https://twitter.com/1kartikkabadi1/status/2040477538137583689/photo/1
ollama
RT @PhantomByteAI: @ollama I’ve been using Ollama Pro for a few months and love it. I’ve done so much with it already this week and still h…
ollama
Starting tomorrow at 11am PT, Ollama subscription usage will refresh to cover increased usage of third-party tools like OpenClaw. Our goal is to help you transition smoothly. All tools will work with Ollama's cloud just like before.
ollama
RT @MichaelGannotti: @ollama The $20 monthly plan is awesome and $200 annual plan even better! Kimi K2.5 on Ollama powering OpenClaw is a g…
ollama
RT @thecsguy: @ollama we’ve reached the point where ‘ollama launch’ is basically the new ‘npm start’ for ai. $20 a month to stop larping as…
ollama
RT @steipete: @orelohayo @davemorin We support all models, local and cloud-ones. Not everyone has a beefy machine at home. We even have fol…
ollama
RT @BradGroux: @MichaelGannotti @Scobleizer @MiniMax_AI Kimi K2.5 with @ollama cloud, or in Microsoft Foundry is great too.
ollama
RT @davinder0110v: @ollama Up, can vouch for it. Love every bit of it. Best value for money in the expensive token market 😉 @ollama forever…
ollama
🦞Ollama's cloud is one of the best places to run OpenClaw. The $20 plan is enough for most day-to-day OpenClaw usage with open models! To make the switch, all you need to do is open the terminal and type: ollama launch openclaw Choose a model: kimi-k2.5:cloud glm-5:cloud…
ollama
RT @NVIDIA_AI_PC: Try @GoogleGemma 4 using @ollama, optimized for RTX. 🙌 Check it out: https://ollama.com/library/gemma4
ollama
RT @MichaelGannotti: Yowza! @ollama is on it with new Gemma 4 models https://twitter.com/MichaelGannotti/status/2039903041642508541/photo/1
ollama
RT @varun1_yadav: Finally @ollama released v0.20. https://twitter.com/varun1_yadav/status/2039879410925322334/photo/1
ollama
.@GoogleDeepMind Gemma 4 is here with state-of-the-art models targeting edge devices and workstations. Requires Ollama 0.20+, which is rolling out now. 4 models: 4B Effective (E4B) ollama run gemma4:e4b 2B Effective (E2B) ollama run gemma4:e2b 26B (4B active MoE) ollama run gemma4:26b… https://twitter.com/ollama/status/2039738348647108680/photo/1
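The run commands from the announcement above, gathered in one place. The post says "4 models" but is truncated after the third command, so only the three visible tags are listed; the fourth tag is not guessed here:

```shell
ollama run gemma4:e4b   # 4B Effective (E4B)
ollama run gemma4:e2b   # 2B Effective (E2B)
ollama run gemma4:26b   # 26B (4B active MoE)
# a fourth model is mentioned, but its tag is truncated in the original post
```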
ollama
RT @aref_vc: wow ! this is awesome freaking fast ! well done @ollama so fast you can barely see what is happening, this is not even accele…
ollama
💎💎💎💎
ollama
RT @arstechnica: Running local models on Macs gets faster with Ollama's MLX support https://arstechnica.com/apple/2026/03/running-local-models-on-macs-gets-faster-with-ollamas-mlx-support/?utm_campaign=dhtwitter&utm_content=%3Cmedia_url%3E&utm_medium=social&utm_source=twitter
ollama
Regarding missing chocolate products. https://twitter.com/ollama/status/2039394659840237788/photo/1
ollama
RT @Shawkat_m1: I upgraded my Ollama to use MLX and my QWEN3.5:36b speed 2.2Xd instantly. https://twitter.com/Shawkat_m1/status/2039014724071719405/photo/1
ollama
RT @joreilly: Just tried out new qwen3.5:4b-nvfp4 @ollama model on M1 Max here (in project where it's used with Koog AI agent).....38% fast…
ollama
RT @zcbenz: We have been expecting this since ollama's first pull request to MLX. It is just the beginning, CUDA & CPU backends are still i…
ollama
RT @Prince_Canuma: You can now run Ollama using MLX as a backend 🚀
ollama
RT @awnihannun: You can now run LMs with Ollama + MLX! I've been waiting for this moment since MLX was first open sourced, so glad that it…
ollama
Ollama is now updated to run the fastest on Apple silicon, powered by MLX, Apple's machine learning framework. This change unlocks much faster performance to accelerate demanding work on macOS: - Personal assistants like OpenClaw - Coding agents like Claude Code, OpenCode,… https://twitter.com/ollama/status/2038835449012351197/video/1
ollama
ollama launch pi --model kimi-k2.5:cloud Ollama can now launch Pi, the coding agent that powers OpenClaw. Designed to be a minimal coding harness that can be adapted to your workflows to create your own coding agent. Comes bundled with powerful primitives to build on, and… https://twitter.com/ollama/status/2038506792070914079/photo/1
ollama
Visual Studio Code now integrates with Ollama via GitHub Copilot. If you have Ollama installed, any local or cloud model from Ollama can be selected for use within Visual Studio Code. https://twitter.com/ollama/status/2037289861745623132/photo/1
ollama
RT @atomicbot_ai: Run OpenClaw locally with @ollama 🦙 – Open source models: MiniMax, Qwen, Kimi and more – One click install on macOS and…
ollama
Ollama's cloud is now handling the current demand. We continue to monitor traffic and will scale up further when necessary. Again, sorry if you encountered request errors. ❤️ https://twitter.com/ollama/status/2036633051535683684/photo/1
ollama
We are scaling Ollama's cloud. So sorry for anyone hitting errors. https://twitter.com/ollama/status/2036601333382361415/photo/1
ollama
Ollama's Pro plan now has an annual option. For $200/year, you get 2 months free. Power OpenClaw, Claude Code, and more using the best open models with web search. https://twitter.com/ollama/status/2036531791801254183/photo/1
ollama
RT @ZixuanLi_: Don't panic. GLM-5.1 will be open source.
ollama
Nemotron-Cascade-2 is now available to run with Ollama. ollama run nemotron-cascade-2 To run it locally with OpenClaw: ollama launch openclaw --model nemotron-cascade-2 This model from NVIDIA delivers strong reasoning and agentic capabilities on par with models with up to 20x…
ollama
RT @Kimi_Moonshot: Zhilin's full GTC 2026 keynote is here. If you're curious about the "how" behind scaling Kimi’s latest models, this is…
ollama
MiniMax-M2.7 is now available on Ollama's cloud. made for coding and agentic tasks 🖥️ Try it inside Claude Code: ollama launch claude --model minimax-m2.7:cloud 🦞 Use it with OpenClaw: ollama launch openclaw --model minimax-m2.7:cloud If you already have OpenClaw…
ollama
Nemotron 3 Nano 4B is now available to run via Ollama: ollama run nemotron-3-nano:4b Try it with Pi, the minimal agent runtime that powers OpenClaw: ollama launch pi --model nemotron-3-nano:4b This new addition to @nvidia's Nemotron family is a great fit for building and…
ollama
Ollama 0.18.1 is here! 🌐 Web search and fetch in OpenClaw Ollama now ships with web search and web fetch plugin for OpenClaw. This allows Ollama's models (local or cloud) to search the web for the latest content and news. This also allows OpenClaw with Ollama to be able to… https://twitter.com/ollama/status/2033993519459889505/photo/1
ollama
Ollama is now a provider inside CodexBar! Thank you @steipete for the awesome work!
ollama
Ollama is now an official provider for OpenClaw. openclaw onboard --auth-choice ollama All models from Ollama will work seamlessly with OpenClaw. 🦞 Use it for the tasks you want, all from your chat app. Thank you @steipete for helping and reviewing. 🦞 https://twitter.com/ollama/status/2033339501872116169/photo/1
ollama
Ollama's cloud is updated to use NVIDIA's latest data center hardware: B300 for Kimi K2.5 and GLM-5 models. This significantly improves the model performance with faster throughput and lower latency while maintaining reliable tool calls for integrations. All this works with… https://twitter.com/ollama/status/2032744927873077332/photo/1
ollama
RT @parthsareen: I'll be speaking next week on March 19th at GTC! Will be covering a bunch about Ollama, building agent harnesses properly…
ollama
Are you attending @NVIDIAGTC? Ollama + NVIDIA are doing a developer session on local AI on RTX AI PCs Come learn how to build agent harnesses, and optimize them for your local use cases. 📄🌐🦞📱 Demos. 🕑 Thursday, March 19th, 2pm 🗺️ SJCC 230B (L2) We will be giving away… https://twitter.com/ollama/status/2032582008933855627/photo/1
ollama
RT @nutlope: Announcing AI Commits v2! ◆ CLI to write commit messages with AI in seconds ◆ Fully open source & powered by open models ◆ Mu…
ollama
RT @simonguozirui: Running OpenJarvis on my  Mini with @ollama to profile Intelligence per Watt! https://twitter.com/simonguozirui/status/2032158365523329422/photo/1
ollama
RT @JonSaadFalcon: Personal AI should run on your personal devices. So, we built OpenJarvis: a personal AI that lives, learns, and works on…
ollama
NVIDIA Nemotron 3 Super is now available on Ollama. ollama run nemotron-3-super:cloud 🦞Try it with OpenClaw: ollama launch openclaw --model nemotron-3-super:cloud Run it locally on your device: ollama run nemotron-3-super > 120B mixture of experts model with 12B active >…
ollama
RT @NVIDIARobotics: Need a quick guide to running OpenClaw locally on an NVIDIA Jetson? 🦞 Check out this walkthrough using @Ollama and a c…
ollama
Ollama can now run prompts on a schedule in Claude Code. Stay on top of work by setting automated tasks or reminders. ollama launch claude /loop Give me the latest AI news every morning Examples in thread https://twitter.com/ollama/status/2031482512019759545/photo/1
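The scheduling flow described above, sketched as a session. The `/loop` line is entered inside the Claude Code session rather than at the shell prompt, per the announcement:

```shell
# Launch Claude Code through Ollama
ollama launch claude

# Then, inside the session, schedule a recurring prompt:
#   /loop Give me the latest AI news every morning
```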