Today’s Cache | Intel plans thousands of layoffs; Meta’s new AI model adds special effects to objects in video; OpenAI starts rolling out advanced voice mode
by The Hindu Bureau · The Hindu

(This article is part of Today’s Cache, The Hindu’s newsletter on emerging themes at the intersection of technology, innovation and policy. To get it in your inbox, subscribe here.)
Intel plans thousands of layoffs
Intel is reportedly planning thousands of job cuts as a cost-cutting measure as its market share keeps dwindling, sources have said. The company is set to announce its quarterly results on Thursday. While Intel remains a big player in the PC and server markets, it has been lagging behind as demand for AI chips has grown exponentially. CEO Pat Gelsinger has been trying to regain lost ground by shifting investments towards advanced AI chip technology and newer segments.
In October 2022, the company said it wanted to cut annual costs by $3 billion in 2023 by reducing its headcount to 124,800 by the end of 2023 from 131,900 a year earlier, as seen in its regulatory filings. In February last year, Intel said that its annual cost savings would be between $8 billion and $10 billion by 2025. Analysts expect the company’s second-quarter revenue for the year to be around the same as last year, with revenue from data centres and AI set to decrease by 23%. While the company has been known for designing and manufacturing its own chips, it has also been focused on expanding into the foundry business and manufacturing chips for other companies.
Meta’s new AI model adds special effects to objects in video
Meta has launched a new AI model called Segment Anything Model 2, or SAM 2, which can tell which pixels belong to which object in a video. The Mark Zuckerberg-led company had released SAM last year, which aided in developing Instagram features like ‘Backdrop’ and ‘Cutouts.’ SAM 2 is meant for video content and will be able to segment any object in an image or a video and follow it consistently across all frames in real time. The company said the previous AI model was also being used in oceanic research, cancer screening and disaster relief.
The successor could also be used to track objects and help annotate data faster for computer vision systems, including those used in autonomous vehicles.
OpenAI starts rolling out advanced voice mode
Microsoft-backed OpenAI has started rolling out an advanced voice mode to some ChatGPT Plus users, the AI firm announced on X. The company had pushed back the launch from June to July, saying it needed more time to fine-tune the feature. The advanced mode will let users get responses from ChatGPT in real time and interrupt the chatbot while it is speaking, meaning the chatbot will sound and speak in a more natural way. OpenAI had previously said it was improving the AI model’s ability to detect and refuse to respond to inappropriate content, while also making the model more user-friendly and scalable.
After its launch event in May, the company was criticised by actress Scarlett Johansson, who said one of the chatbot’s voices resembled her voice work as an AI assistant in the film ‘Her.’ OpenAI refuted the allegations at the time, saying the voice had not been modelled after the character.