Thanks to OpenAI, there has been a ton of AI news this week, including a contentious blog post from CEO Sam Altman, the wide release of Advanced Voice Mode, reports of a 5GW data center, significant personnel changes, and ambitious restructuring plans.
But the rest of the AI world moves at its own pace, pursuing its own interests and turning out new AI models and research by the minute. Here's a roundup of some other notable AI news from the past week.
Google Gemini updates
On Tuesday, Google rolled out improvements to its Gemini model lineup, including two new production-ready models, Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002, that build on earlier releases. The company claims overall quality improvements, with notable gains in math, long-context handling, and vision tasks; Google reports roughly a 7 percent increase in performance on the MMLU-Pro benchmark and a 20 percent improvement on math-related workloads. As regular Ars Technica readers know, though, AI benchmarks aren't always as useful as we would like them to be.
Alongside the model updates, Google announced substantial price cuts for Gemini 1.5 Pro, lowering input token costs by 64 percent and output token costs by 52 percent for prompts under 128,000 tokens. “For comparison, GPT-4o is currently $5/[million tokens] input and $15/m output, and Claude 3.5 Sonnet is $3/m input and $15/m output,” AI researcher Simon Willison noted on his blog. Gemini 1.5 Pro was already the cheapest of the frontier models, and it's now even cheaper.
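To make those percentage cuts more concrete, here is a minimal sketch of the arithmetic; the baseline prices and token counts below are placeholders for illustration, not Google's published rates:

```python
def cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in dollars, given per-million-token prices."""
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# Placeholder pre-cut prices (dollars per million tokens), not official figures.
old_in, old_out = 3.50, 10.50
# Apply the announced 64% input / 52% output reductions for prompts under 128K tokens.
new_in, new_out = old_in * (1 - 0.64), old_out * (1 - 0.52)

# Hypothetical monthly workload: 2M input tokens, 500K output tokens.
print(f"before the cut: ${cost(2_000_000, 500_000, old_in, old_out):.2f}")
print(f"after the cut:  ${cost(2_000_000, 500_000, new_in, new_out):.2f}")
```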
Google also raised rate limits, allowing 2,000 requests per minute for Gemini 1.5 Flash and 1,000 requests per minute for Gemini 1.5 Pro. According to Google, the latest models deliver twice the output speed and three times lower latency than the previous versions. These changes may make it easier and cheaper for developers to build applications with Gemini.
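For developers who want to try the new models, here is a minimal sketch of calling one of them through Google's `google-generativeai` Python SDK; the environment-variable name and prompt are placeholders, and the package and model names should be checked against Google's current documentation:

```python
import os
import google.generativeai as genai

# Assumes a Gemini API key is available in the environment.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Target one of the newly announced production models.
model = genai.GenerativeModel("gemini-1.5-flash-002")

response = model.generate_content(
    "Summarize this week's AI news in three bullet points."
)
print(response.text)
```

Swapping in `gemini-1.5-pro-002` targets the larger model, at the reduced per-token prices described above.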
Meta launches Llama 3.2
On Wednesday, Meta released Llama 3.2, a significant update to its collection of open-weight AI models that we have covered in depth previously. The new release includes vision-capable large language models (LLMs) at 11 billion and 90 billion parameters, along with lightweight text-only 1B and 3B parameter models designed for edge and mobile devices. Meta claims the vision models are competitive with leading closed-source models on image recognition and visual understanding tasks, while the smaller models reportedly outperform similar-sized rivals on various text-based tasks.
Willison ran experiments with some of the smaller 3.2 models and reported impressive results given the models' size. AI researcher Ethan Mollick showed Llama 3.2 running on his iPhone using an app called PocketPal.
Additionally, Meta introduced the first official "Llama Stack" distributions, created to simplify development and deployment across different environments. As with previous releases, Meta is making the models available for free download, with license restrictions. The new models support long context windows of up to 128,000 tokens.
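As a rough illustration of what the open-weight release means in practice, here is a minimal sketch of running the small 3B instruct model locally with the Hugging Face `transformers` library; the model ID and generation settings are assumptions to verify against Meta's model cards, and downloading the weights requires first accepting Meta's license on Hugging Face:

```python
# A minimal sketch: run the lightweight Llama 3.2 3B instruct model locally.
# Assumes `transformers` and `torch` are installed and that access to the
# meta-llama repository has been granted on Hugging Face.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "In two sentences, why are small on-device LLMs useful?"}
]
result = pipe(messages, max_new_tokens=128)

# The pipeline returns the full chat transcript; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```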
Faster chip design thanks to Google’s AlphaChip AI
On Thursday, Google DeepMind announced what appears to be a significant advance in AI-driven electronic chip design: AlphaChip. It began as a research project in 2020 and is now a reinforcement learning method for designing chip layouts. Google has reportedly used AlphaChip to create "superhuman chip layouts" in the last three generations of its Tensor Processing Units (TPUs), the GPU-like chips the company builds to accelerate AI workloads. According to Google, AlphaChip can produce high-quality chip layouts in a matter of hours, compared with the weeks or months of human effort otherwise required. (Nvidia has also reportedly used AI to help design its chips.)
Notably, Google has also released a pre-trained checkpoint of AlphaChip on GitHub, making the model weights publicly available. The company says AlphaChip's influence already extends beyond Google, with chip design firms such as MediaTek adopting and building on the technology for their chips. According to Google, AlphaChip has sparked a new line of AI research in chip design, with the potential to optimize every stage of the chip design cycle, from computer architecture to manufacturing.
Those aren't the only noteworthy developments of the week, but they are some of the highlights. The AI field keeps growing and shows no sign of slowing down, so we'll see what next week brings.