Current and former employees of OpenAI, Google DeepMind and Anthropic added their voices to the ongoing debate about whether generative AI will aid humanity or lead to its extinction, by signing an open letter on June 4 warning of the dangers ahead.
Saying they “believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the group of 13 cautioned that AI companies developing gen AI systems haven’t been forthcoming in sharing information as they chase a share of what’s expected to be a $1.3 trillion market for chatbots and other AI tech by 2032. So they’re calling for AI companies to become more open with the public about “the risk levels of different kinds of harms,” since those companies all have “strong financial incentives to avoid effective oversight.”
Among the possible harms: entrenchment of existing inequalities, an increase in misinformation, and the “loss of control of autonomous AI systems potentially resulting in human extinction.”
Well, I say that calls for a deep breath, considering that the letter was also endorsed by two of the godfathers of AI — Yoshua Bengio and Geoffrey Hinton — and noted computer scientist Stuart Russell.
Given the lack of “effective” government oversight of AI companies and their technology — the US has lagged behind the European Union, which signed the world’s first AI legislation into law in May — the group also called on AI companies to encourage public debate. That includes adopting new policies that would allow current and former employees to speak publicly about risk-related concerns without fear of retaliation. They noted that nondisclosure and nondisparagement agreements make it difficult for employees, “among the few people who can hold them accountable to the public,” to speak out.
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” they wrote. “They currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
Liz Bourgeois, an OpenAI spokeswoman, told The Washington Post that the San Francisco-based startup agrees that “rigorous debate is crucial given the significance of this technology.”
But the open letter comes after several notable executives left OpenAI, citing safety concerns and saying the company had cut resources to teams studying the long-term risks of AI. Former OpenAI board member Helen Toner also spoke out recently, saying the board’s loss of confidence in CEO Sam Altman last year was because he wasn’t candid in his communications with the board.
What’s next? Well, that’s up to the AI companies and regulators. If tech history is any guide, I’m not expecting much. I will say, though, that the open letter prompted many discussions and reminded me of an AI term that generated a lot of buzz back in 2023: p(doom). That’s shorthand some AI researchers use for their estimate of the probability that advanced AI will lead to catastrophe.
Here are the other doings in AI worth your attention.
Apple, set to unveil AI plans at dev fest, is expected to add ChatGPT to its mix
Apple has been saying for months that it’s got big news coming around AI, with CEO Tim Cook expected to deliver that news during the keynote event for the company’s annual developers’ conference on June 10. (Here are the details about when and how to watch the WWDC keynote, and what CNET experts believe will be announced.)
Among the things expected are AI functionality built into iOS 18, the next version of the iPhone operating system. It’s typically released with a new model of the smartphone in September. That functionality could include voice memo transcriptions, faster search and a more conversational Siri. It could also introduce an AI feature that summarizes news stories, documents, and notifications, essentially giving a “smart recap” of what you’ve missed. (If you want to know what AI is already built into the iPhone, CNET’s Sareena Dayaram has the details.)
Also in the mix is a reported partnership deal that will give Apple users access to OpenAI’s popular ChatGPT chatbot from within iOS. (Apple says it has more than 2.2 billion active devices worldwide.) That’s based on reports from Bloomberg that the two companies have been working toward an agreement for the past several months and inked a deal in May. According to Bloomberg, Apple is also in talks with Google about licensing its Gemini chatbot and may ultimately offer a range of third-party chatbots.
All of this is a big deal, because Apple is seen as playing catch-up on AI with rivals including Google, Microsoft and Samsung, since it’s been quiet about what it’s working on for the iPhone, iPad and Mac. So stay tuned for news out of the Worldwide Developers Conference (or WWDC, as it’s known in tech circles).
AI-chip maker Nvidia draws investors — and antitrust scrutiny
For a day last week, shares of Nvidia, the chipmaker seen as the frontrunner in the market for AI processors, soared enough to push its market value past Apple’s, to $3 trillion (yes, trillion; that’s not a typo), making it the second-most-valuable company in the world behind Microsoft.
Why? CNBC explained: “The company has an estimated 80% market share in AI chips for data centers, which are attracting billions of dollars in spending from big cloud vendors.”
(If you’re interested in how AI companies are faring in the stock market, VC firm Flybridge launched its AI Index, which, it says, “tracks the performance of 28 publicly traded companies, including Nvidia, Google, Microsoft, and IBM, emerging AI-driven companies like C3.ai, Palantir, and UiPath, and others at the forefront of AI innovation.”)
The stock market can be fickle, which is why Nvidia shares fell back the following day. Still, the company is in a powerful position to capitalize on demand for AI tech, which is why the stock will be on a roller-coaster ride throughout the rest of the year.
Its dominant market position may also be spurring an antitrust investigation by the US Department of Justice, according to The New York Times, which cited unnamed sources. The paper reports that both the DOJ and the Federal Trade Commission will be looking into antitrust concerns with Nvidia, Microsoft and OpenAI as part of the government’s effort to get ahead of the AI industry and make sure the big players are, well, playing fair.
FTC Chair Lina Khan said in a February interview with Harvard Law Today that the agency was working to identify “potential problems at the inception rather than years and years and years later, when problems are deeply baked in and much more difficult to rectify.”
Sounds a lot like what the AI employees, with their open letter, are saying as well.
Things are not looking up for Humane AI — charging case deemed fire hazard
Humane AI, whose wearable AI Pin was supposed to represent the launch of a new category of devices, continues to be in a world of hurt. And this time, early adopters eager to be on the cutting edge are feeling some pain as well.
First, the startup, which worked on its sleek wearable for five years and markets it on its website under the tagline, “Things are looking up,” suffered a wave of terrible reviews for its $699 device in April. The review by CNET’s Scott Stein was among the most charitable, describing the AI Pin as “too frustrating for everyday use.” The company reportedly sold 10,000 devices — below the 100,000 it was expecting, according to The New York Times.
Then came news that the company, founded by ex-Apple designers, had put itself up for sale for over $1 billion. The New York Times reported last week that HP was among the companies interested and described Humane’s “setbacks” as “part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated and Samsung has added A.I. features to a smartphone that were called ‘excellent at times and baffling at others.'”
Now comes the news that Humane has emailed Pin owners, telling them to “immediately stop” using the Charge Case Accessory “out of an abundance of caution” after saying the battery in the case is a potential fire hazard.
“While we know this may cause an inconvenience to you, customer safety is our priority at Humane,” the company wrote in the email, which was sent June 5 and then published on the company’s website.
In his review, Stein noted that his AI Pin “suddenly needed cooldowns” after he used its laser-projected display and that the device got uncomfortably warm during use.
You don’t need to be a tech expert to know that having a charging case that might burst into flames is not a good thing.
Humane said it will give users two months of subscription to its wireless service, worth $48, as compensation, CNET reported. It didn’t say whether it would offer replacements.
Microsoft’s Recall AI tool called out as a security risk
As part of the introduction of new PCs with its Copilot AI assistant built in, Microsoft touted a new feature called Recall that captures data from your applications (unless you exclude specific ones) by taking snapshots of everything you’re doing on your Windows PC. It then stores those snapshots in a database so you can find anything you’ve looked at or worked on simply by asking Recall for it in a natural language query.
Recall runs locally and can function without an internet connection, and even when you’re not logged in to your Microsoft account, CNET reported.
What could possibly go wrong with a feature that creates a history of everything you’ve done on your PC? Do you really not know the answer?
Security experts are calling Recall a security disaster waiting to happen and saying the software should be recalled. At least one white-hat hacker has already created a tool that’s able to extract sensitive data from Recall. It’s called — wait for it — TotalRecall.
“Before you panic, Recall is only coming to new Copilot Plus PCs,” CNET noted. Those PCs go on sale June 18. Recall isn’t coming as an update to a PC you’ve already got.
Microsoft initially referred people to its Recall overview doc, but on June 7, the company put out an “Update on the Recall preview feature for Copilot+ PCs” in response to the security concerns. In the post, Pavan Davuluri, who oversees Windows and devices, announced several changes to how Recall will work, including backtracking on plans to have the feature automatically turned on when you buy a new Copilot Plus PC.
“We are updating the set-up experience of Copilot+ PCs to give people a clearer choice to opt-in to saving snapshots using Recall. If you don’t proactively choose to turn it on, it will be off by default,” Davuluri wrote. Also, “Windows Hello enrollment is required to enable Recall. In addition, proof of presence is also required to view your timeline and search in Recall.”
Davuluri also said the company is adding additional layers of security protection, noting that “Recall snapshots will only be decrypted and accessible when the user authenticates.”
All nice to hear, though I’m sure security researchers and white-hat hackers will keep putting Recall through its paces. If Microsoft had done more testing itself, though, Recall wouldn’t be the newest addition to the growing list of “unpolished products” being put out by tech companies.
What does this mean for you? If you plan on buying one of the new Copilot Plus PCs and don’t want Recall activated, you no longer have to worry. Microsoft also reminds users that “you can pause, filter and delete what’s saved at any time. You’re always in control of what’s saved as a snapshot. You can disable saving snapshots, pause them temporarily, filter applications and websites from being in snapshots, and delete your snapshots at any time.”
How? By going to Windows settings and selecting Privacy & Security. Then go to Recall & Snapshots and use the settings to toggle off the feature and delete any data that’s already been collected.
You’re welcome.
Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.