LATEST NEWS
Google introduces faster, cheaper AI chips for enterprises
Google has unveiled its new Axion CPUs and Ironwood TPUs, designed to slash both training times and inference costs for large-scale business AI workloads. Rather than following the NVIDIA-dominated route, Google is pushing a “hypercomputer” model that lets organizations train and deploy larger AI systems with less downtime.
These chips are tuned for multi-model workflows, so teams can run several AI systems—like RAG pipelines, agents, or orchestration layers—simultaneously without hitting performance limits. If Google’s claims around cost and latency hold up, this could be one of the most impactful infrastructure upgrades enterprises see this year.
Google broadens Gemini and Opal AI tools
The company is also expanding Opal, its no-code AI app builder, to over 160 countries, giving global teams a way to create custom AI-powered tools without waiting on developer resources. Meanwhile, Gemini has been upgraded with new context and reasoning capabilities, improving how it handles lengthy documents and complex requests.
With the updated Opal stack, teams can quickly spin up dashboards, automate lead handling, or streamline workflows without touching code. Gemini's improved reasoning, for its part, delivers cleaner summaries, sharper insights, and more dependable output, which is particularly useful for enterprise sales and AI operations.
Tencent and Tsinghua introduce CALM for faster AI responses
In China, Tencent and Tsinghua University have unveiled CALM, a new modeling approach that ditches the traditional token-by-token text generation in favor of predicting larger chunks of text at once. The result: faster, cheaper AI responses for chatbots, copilots, and other real-time systems.
If CALM proves scalable, it could reshape how enterprise AI tools deliver live interactions. For product teams, that means quicker customer support, faster sales replies, and smoother internal assistant performance—showing a clear path toward the next wave of AI efficiency gains.
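A back-of-the-envelope sketch shows why chunk prediction cuts latency. The snippet below simply counts sequential decoding steps for token-by-token generation versus chunked generation; the chunk size of four is a hypothetical value chosen for illustration, and this is not Tencent's implementation, whose actual chunking and architecture are not detailed here.

```python
# Illustrative sketch only, not CALM's actual implementation.
# A token-by-token model needs one sequential forward pass per token;
# a chunk-predicting model emits K tokens per pass, so latency scales
# roughly with the number of passes.

def decode_steps(num_tokens: int, chunk_size: int = 1) -> int:
    """Sequential forward passes needed to emit num_tokens."""
    return -(-num_tokens // chunk_size)  # ceiling division

# A 512-token reply, token by token vs. hypothetical 4-token chunks:
per_token = decode_steps(512, chunk_size=1)  # 512 sequential steps
chunked = decode_steps(512, chunk_size=4)    # 128 sequential steps

print(per_token, chunked, per_token / chunked)  # 512 128 4.0
```

Even this crude model makes the appeal obvious: if per-pass cost stays roughly constant, emitting four tokens per step means a fourfold reduction in sequential work, which is exactly the kind of gain that matters for live chatbots and copilots.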