AI update compact: ChatGPT, Power Supply, Meta Apollo, Nvidia



OpenAI has unveiled a new phone feature as part of its Shipmas event series. It is initially available exclusively in the US, where users can call the chatbot for up to 15 minutes per month via the toll-free number 1-800-ChatGPT. In the future, German users will instead have the option of reaching ChatGPT via WhatsApp.




The WhatsApp service is based on GPT-4o mini and is purely text-based – image generation and image processing are not possible. Communication runs via WhatsApp Business, and users must accept terms allowing conversations to be read for security reasons. This puts the offering in an interesting context alongside Meta AI, which provides similar functions but is not available in Germany due to legal uncertainties.
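For illustration, a text-only request to GPT-4o mini through OpenAI's public chat completions API could look like the following minimal sketch. This is not the WhatsApp Business integration itself, and the API key and prompt are placeholders.

```python
# Minimal sketch: a text-only chat request to GPT-4o mini via the OpenAI API.
# It illustrates the model behind the WhatsApp service, not the WhatsApp
# Business integration itself. The API key is a placeholder.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder credential

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Summarize today's AI news in two sentences."}
    ],
)

print(response.choices[0].message.content)
```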

OpenAI is currently negotiating with its non-profit parent organization over the parent's extensive control rights. OpenAI CEO Sam Altman and his colleagues have to find a fair price for giving up that control – possibly in the billions, reports the New York Times. Altman's dual role as foundation board member and company boss makes the situation particularly delicate. Investors expect a restructuring within two years; the money from the latest financing round is tied to this commitment.

One option would be to convert to a “Public Benefit Corporation,” in which the non-profit organization retains a portion of the company. The negotiations also affect an important clause with Microsoft: The nonprofit organization has previously had the right to determine when OpenAI has achieved artificial general intelligence – a decision that could end the lucrative partnership with Microsoft. OpenAI is already said to be trying to remove this clause from the contracts. The development from a non-profit to a for-profit company recently attracted criticism from competitors such as Meta boss Mark Zuckerberg and tech billionaire Elon Musk.

In North America, the construction of new energy-intensive data centers for artificial intelligence and crypto mining is threatening energy bottlenecks. The North American Electric Reliability Corporation (NERC), a non-profit organization that is responsible for coordinating the electrical power grids under government supervision, warns of this. In its current ten-year outlook, it calls for an urgent expansion of energy generation and transmission capacities. There is a risk of power shortages in the Midwest as early as 2025. At the same time, electricity demand is growing faster than it has in over 20 years. In addition to AI data centers and crypto mining, the increasing number of electric vehicles and the increased use of heat pumps in households are also driving electricity demand.

If nothing is done, the safety margins of the power supply will fall below required limits in almost all regions; in the worst case, there could be power outages. Big tech companies like Microsoft, Amazon and Meta are already planning to generate the energy for their AI data centers themselves or through partners. Nuclear power plays a particularly important role here: there are plans to repair and reactivate old nuclear power plants or to build new ones.

Researchers at Meta and Stanford University have systematically studied how best to design AI models for video understanding. While AI models for language and image processing have advanced rapidly, models for video tasks are still lagging behind. Videos provide rich dynamic information, but developing video AI models is difficult because of higher computing requirements and many open design questions. This is where the research project comes in. The researchers make an important discovery: design decisions that work for smaller models also work for larger ones, which makes it possible to experiment efficiently without running costly studies on huge models.

Integrating timestamps between video clips and training step by step have both proven advantageous, and a balanced composition of the training data is just as important. Based on these findings, the researchers developed the Apollo family of video AI models. Apollo-3B outperforms most models of similar size, while Apollo-7B even beats many much larger models. Meta is releasing the code and weights of the models as open source. The researchers also note that many improvements in video models stem primarily from advances in language processing.
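Since the weights are being released openly, loading them would presumably follow the usual Hugging Face pattern. The sketch below is hypothetical: the repository ID is a placeholder, not the confirmed name of the Apollo release, and the actual models may require custom loading code.

```python
# Hypothetical sketch of loading openly released video-model weights from the
# Hugging Face Hub. The repository ID is a placeholder, not the confirmed name
# of the Apollo release; model-specific code from the repo may be required.
from transformers import AutoModelForCausalLM, AutoProcessor

repo_id = "meta/apollo-7b"  # placeholder, not a confirmed repository name

processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,  # allow custom model code shipped with the weights
    device_map="auto",       # spread the model across available devices
)
```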

Google is rolling out its new Gemini 2.0 model in its own chatbot. Users with access to Gemini Advanced will now receive priority access to the latest experimental model “1206” on the desktop and mobile web. This could be the next larger Gemini “Pro” model, as the smaller version Gemini 2.0 Flash is already officially available.

The model is intended to help with more complex tasks such as sophisticated coding, solving mathematical problems, reasoning and following instructions. However, Google notes that the model is still in an early preview stage and may not work as expected. It also has no access to real-time information and is not compatible with all Gemini features.
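Developers who want to try such an experimental model outside the chatbot typically go through the Gemini API. The following minimal sketch assumes the model is exposed under the identifier "gemini-exp-1206", which matches the "1206" label but may change or be withdrawn; the API key is a placeholder.

```python
# Sketch: calling an experimental Gemini model via the google-generativeai
# library. The model identifier is an assumption derived from the "1206"
# label and may differ or be unavailable; the API key is a placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel("gemini-exp-1206")  # assumed identifier
response = model.generate_content(
    "Explain step by step why the sum of two odd numbers is always even."
)

print(response.text)
```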




How intelligent is artificial intelligence? What consequences does generative AI have for our work, our leisure time and society? In Heise's "AI update" we, together with The Decoder, bring you updates on the most important AI developments every weekday. On Fridays we examine the different aspects of the AI revolution with experts.

The AI start-up Perplexity has acquired Carbon, a company that develops connectors for linking external data sources to large language models. This will soon allow users to connect apps like Notion and Google Docs directly to Perplexity.

The Carbon team is moving to Perplexity to accelerate development. The takeover points to an interesting trend: the large AI platforms such as ChatGPT and Perplexity and the countless software-as-a-service solutions for companies are, in a sense, converging on the same kind of product, namely a chat interface with internet access and access to your own data. A cut-throat competition appears to be emerging.

Microsoft is Nvidia's largest AI accelerator customer, with 485,000 Hopper GPUs (H100 and H200) purchased in 2024. With an estimated investment volume of 15 billion US dollars, the company is well ahead of the Chinese companies Bytedance and Tencent, which have each acquired around 230,000 units.

At the same time, the big tech companies are increasingly relying on their own AI accelerators: Google and Meta each operate 1.5 million self-developed chips, Amazon 1.3 million, and Microsoft 200,000 of its own "Maia" AI accelerators.

The European Data Protection Board (EDPB) is not placing any major obstacles in the way of the development and use of AI models. This emerges from the board's statement on how AI is to be treated under the General Data Protection Regulation (GDPR). According to the data protection authorities, Meta, Google, OpenAI & Co. can generally rely on a "legitimate interest" as the legal basis for the processing of personal data by AI models.

However, the EDPB attaches a number of conditions to this. The national data protection authorities are to use a three-step test to assess whether a legitimate interest exists: first, they check whether the interest pursued by the data processing is legitimate; a "necessity test" then determines whether the processing is actually necessary; finally, the fundamental rights of the data subjects must be balanced against the interests of the AI providers. If personal data was used unlawfully in the development of an AI model, its use could be banned altogether; exceptions apply only if everything is properly anonymized. With the joint statement, the data protection officers want to ensure uniform legal enforcement in the EU.

That was the AI update from heise online from December 19, 2024. A new episode is available every working day from 3 p.m.



(igr)
