AI and You: Zoom Slurping, Fitting Running Shoes, Finding Training Data


Zoom was in the news this week, and not just because the videoconferencing company that helped popularize remote work decided that many of its employees need to return to the office two days a week (a new policy that inspired many memes).

The news that lands Zoom in the top spot in this AI roundup is the backlash after Hacker News users spotted that “an update to Zoom’s terms and conditions in March appeared to essentially give the company free rein to slurp up voice, video and other data, and shovel it into machine learning systems,” as Wired noted.

Terms of service agreements are notorious for getting you to sign away some of your rights or personal information by burying details like this in the fine print. But even the non-AI-savvy were ticked off by Zoom’s take-it-all approach to the info shared in conversations by the millions of people who use its software.

So earlier this week, Zoom Chief Product Officer Smita Hasham said the company revised its terms of service, promising users that it “does not use any of your audio, video, chat, screen-sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models.”

But it may in the future — if you give your consent, I expect. Consent is the operative word these days, as authors like Sarah Silverman and Margaret Atwood call out AI chatbot makers, including OpenAI and Google, for slurping up their copyrighted content without permission or compensation to train AI systems, and as the Federal Trade Commission investigates whether OpenAI is mishandling users’ personal information.

After announcing a deal last month to license content from the Associated Press for undisclosed terms — a move that implies OpenAI understands it needs to license the content ChatGPT is built on — OpenAI this month said it’s allowing website operators to block its web crawler, GPTBot, from slurping up information on their sites. That’s important because OpenAI hasn’t said how it got all the content that feeds ChatGPT, one of the most popular chatbots along with Google Bard and Microsoft Bing.
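If you run a website and are curious what that blocking looks like in practice, here’s a minimal Python sketch, using the standard library’s urllib.robotparser, that checks whether a site’s robots.txt disallows the GPTBot user agent. The example.com address is a placeholder, not a real target:

```python
# Minimal sketch: check whether a site's robots.txt blocks OpenAI's GPTBot crawler.
# "example.com" is a placeholder; point this at your own domain.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# A robots.txt that blocks the crawler would include:
#   User-agent: GPTBot
#   Disallow: /
if parser.can_fetch("GPTBot", "https://example.com/"):
    print("GPTBot is allowed to crawl this site")
else:
    print("GPTBot is blocked by robots.txt")
```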

Google isn’t as coy about what’s powering Bard, saying in a filing this week with the Australian government that “copyright law should be altered to allow for generative AI systems to scrape the internet.” I mean, that’s how Google Search came into being, after all. But Google also said there should be a “workable opt-out for entities that prefer their data not be trained in using AI systems,” according to reporting by The Guardian, which added that “the company has not said how such a system should work.”

TL;DR: Expect many more lawsuits, licensing agreements and discussions with regulatory agencies in the US and around the world about how AI companies should and shouldn’t obtain the data they need to train the large language models that power these chatbots. 

As Wired noted of the US, where there’s no federal privacy law protecting consumers from businesses that rely on collecting and reselling data: “Many tech companies already profit from our information, and many of them like Zoom are now on the hunt for ways to source more data for generative AI projects. And yet it is up to us, the users, to try to police what they are doing.”

Here are the other doings in AI worth your attention.

AI as an expert shopping assistant

Preparing for her first marathon in November, CNET reporter Bree Fowler tried out AI-powered shoe-fitting software from Fleet Feet, a national chain of specialty running stores, to help her find the right sneakers.

Despite her skepticism about its capabilities, Fowler found that the Fit Engine software analyzed “the shapes of both of a runner’s feet (collected through a 3D scan process called Fit ID) taking precise measurements in four different areas. It looks at not just how long a person’s feet are, but also how high their arches are, how wide their feet are across the toes and how much room they need at their heel.”

The AI program measures your feet across several different dimensions to help you find the ideal fit. (Fleet Feet)

In the end, Fowler learned her feet were a larger size than she thought. And after trying on “many, many” shoes, she was able, after an hour, to narrow it down to two pairs (one of which was on sale). But if you think the AI software is the be-all and end-all of the specialty shoe selection process, think again. Even the retail experience manager for the Fleet Feet New York store she visited said the tool is there just to assist human employees and give them a starting point for finding shoes with the correct fit.

“It turns the data into something much more understandable for the consumer,” Fleet Feet’s Michael McShane told Fowler. “I’m still here to give you an expert assessment, teach you what the data says and explain why it’s better to come here than going to a kind of generic store.”
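For the curious, the four measurements Fowler describes map naturally onto a small data structure. Here’s a hypothetical Python sketch; the field names and the size rule of thumb are my own illustration, not Fleet Feet’s actual Fit ID format:

```python
# Hypothetical sketch of a foot-scan result holding the four measurements the
# article describes: length, arch height, width across the toes and heel room.
# Field names are invented for illustration; this is not Fleet Feet's real schema.
from dataclasses import dataclass

@dataclass
class FootScan:
    length_mm: float       # how long the foot is
    arch_height_mm: float  # how high the arch is
    toe_width_mm: float    # how wide the foot is across the toes
    heel_room_mm: float    # how much room is needed at the heel

def rough_us_mens_size(scan: FootScan) -> float:
    """Rule-of-thumb conversion: US men's size is about 3 * length in inches - 22."""
    return round(scan.length_mm / 25.4 * 3 - 22, 1)

left = FootScan(length_mm=270.0, arch_height_mm=28.0, toe_width_mm=101.0, heel_room_mm=12.0)
print(rough_us_mens_size(left))  # about 9.9; feet often measure bigger than the size people wear
```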

Disney sees an AI world, after all 

As actors and other creative professionals continue to strike against Hollywood studios over how AI might affect or displace their jobs in the future, Reuters, citing unnamed sources, says that Walt Disney has “created a task force to study artificial intelligence and how it can be applied across the entertainment conglomerate.” The report adds that the company is “looking to develop AI applications in-house as well as form partnerships with startups.” The gist: Disney is looking to AI to see how it can cut costs when it comes to producing movies and TV shows, one source told Reuters.

Disney declined to comment to Reuters, but like many other companies, it has job postings on its site that suggest where its interests in AI lie. 

Some interesting AI stats

In a 24-page report published Aug. 1, called “The state of AI in 2023: Generative AI’s breakout year,” McKinsey & Co. said it found that less than a year after generative AI tools like ChatGPT were released, a third of survey respondents are already using gen AI tools for at least one business function.

“Amid recent advances, AI has risen from a topic relegated to tech employees to a focus of company leaders: nearly one-quarter of surveyed C-suite executives say they are personally using gen AI tools for work, and more than one-quarter of respondents from companies using AI say gen AI is already on their boards’ agendas,” the firm found.

“What’s more, 40 percent of respondents say their organizations will increase their investment in AI overall because of advances in gen AI. The findings show that these are still early days for managing gen AI–related risks, with less than half of respondents saying their organizations are mitigating even the risk they consider most relevant: inaccuracy.”

Meanwhile, in a report called Automation Now and Next: State of Intelligent Automation Report 2023, the 1,000 automation executives surveyed said that AI will help boost productivity. “As we automate the more tedious part of their work, employee satisfaction surveys result is better. Employees are more engaged. They’re happier. That we can measure via surveys. The bots essentially do what people used to do, which is repetitive, low-value tasks,” a CTO of a large health care organization said as part of the survey.

That study was commissioned by Automation Anywhere, which describes itself as “a leader in AI-powered intelligent automation solutions,” so take the results with a grain of salt. But I will say those productivity findings are similar to what McKinsey, Goldman Sachs and others have been saying too. 

And in case you had any doubt that gen AI adoption is a global phenomenon, I offer up this look at AI tech adoption by country from Electronics Hub, which says it analyzed Google search volumes for popular AI tools. It found that the Philippines showed the “highest monthly search volume for AI tools overall.”

When AI systems go wrong

Besides hallucinating — making up stuff that isn’t true but sounds like it’s true — AIs also have the potential to mislead, misinform or just wreak havoc by misidentifying, say, a respected researcher and Dutch politician as a terrorist, as happened recently.

To catalog the ways that AI can go wrong, there’s now an AI Incident Database, which says it’s “dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.” 

You’re invited to submit any AI errors, blunders, mishaps or problems you see to the database, which has already earned the nickname “Artificial Intelligence Hall of Shame.”

Speaking of ways AI can go wrong, the Center for Countering Digital Hate released a 22-page report detailing “How generative AI is enabling users to generate harmful eating disorder content.” After prompting six AI chatbots and image generators, the center found that “popular AI tools generated harmful eating disorder content in response to 41% of a total 180 prompts, including advice on achieving a ‘heroin chic’ aesthetic and images for ‘thinspiration.’”

“Tech companies should design new products with safety in mind, and rigorously test them before they get anywhere near the public,” the center’s CEO, Imran Ahmed, wrote in the preface. “That is a principle most people agree with — and yet the overwhelming competitive commercial pressure for these companies to roll out new products quickly isn’t being held in check by any regulation or oversight by democratic institutions.”

Misinformation about health and many, many other topics has been out there on the internet since the beginning, but AIs may pose a unique challenge if more people start to rely on them as their main source of news and information. Pew Research has written extensively about how reliant Americans are on social media as a source of news, for instance.

Consider that in June, the National Eating Disorders Association, which closed its live helpline and instead directed people to other resources including an AI chatbot, had to take down the bot, named Tessa. Why? Because it recommended “behaviors like calorie restriction and dieting, even after it was told the user had an eating disorder,” the BBC reported. NEDA now directs people to fact sheets, YouTube videos and lists of organizations that can provide information on treatment options.

Password protection starts with the mute button

All the care you take in protecting your passwords might be undone if you type in your secret code during a Zoom or other videoconference call while your microphone is on.

That’s because “tapping in a computer password while chatting over Zoom could open the door to a cyberattack, research suggests, after a study revealed artificial intelligence can work out which keys are being pressed by eavesdropping on the sound of the typing,” The Guardian reported. 

In fact, the researchers built a tool that can “work out which keys are being pressed on a laptop keyboard with more than 90% accuracy, just based on sound recordings,” The Guardian said.
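The researchers’ actual pipeline is more sophisticated, but the core idea can be sketched simply: convert each keystroke’s sound into a spectrogram and train a classifier to guess the key. Here’s a simplified Python illustration that assumes you already have short, pre-segmented, equal-length audio clips labeled by key; it is not the study’s code:

```python
# Simplified sketch of an acoustic keystroke classifier: turn each recorded
# keystroke into a mel spectrogram and train a classifier to guess which key
# it was. Assumes pre-segmented, equal-length clips labeled by key; the
# published attack used a deep network and reported over 90% accuracy.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def keystroke_features(clip: np.ndarray, sample_rate: int) -> np.ndarray:
    """Flatten one clip's mel spectrogram into a feature vector."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sample_rate, n_mels=64)
    return librosa.power_to_db(mel).flatten()

def train_and_score(clips, labels, sample_rate=44100):
    X = np.stack([keystroke_features(c, sample_rate) for c in clips])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, random_state=0)
    classifier = SVC(kernel="rbf").fit(X_train, y_train)
    return classifier.score(X_test, y_test)  # fraction of keystrokes identified
```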

AI term of the week: Training data 

Since this recap starts with the debate over where training data comes from, here’s a simple definition of what training data is — and why it matters. This definition comes via NBC News.

“Training data: A collection of information — text, image, sound — curated to help AI models accomplish tasks. In language models, training datasets focus on text-based materials like books, comments from social media, and even code. Because AI models learn from training data, ethical questions have been raised around its sourcing and curation. Low-quality training data can introduce bias, leading to unfair models that make racist or sexist decisions.”

For instance, NBC noted, in 2019, “A widely used health care algorithm that helps determine which patients need additional attention was found to have a significant racial bias, favoring white patients over Black ones who were sicker and had more chronic health conditions, according to research published … in the journal Science.”
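As a toy illustration of the curation step that definition mentions, here’s a short Python sketch that filters obviously low-quality lines out of a text dataset before training. Real pipelines use far more elaborate filters (deduplication, toxicity classifiers, license checks); these heuristics and thresholds are invented purely for illustration:

```python
# Toy training-data filter: drop text that is too short, too repetitive or
# mostly non-alphabetic debris. The thresholds are invented for illustration.
def keep_example(text: str) -> bool:
    words = text.split()
    if len(words) < 5:                      # too short to teach the model much
        return False
    if len(set(words)) / len(words) < 0.3:  # mostly repeated words
        return False
    if sum(c.isalpha() for c in text) / max(len(text), 1) < 0.6:
        return False                        # mostly symbols or markup debris
    return True

raw = ["Buy now!!! $$$ >>> click", "The model learns patterns from curated text."]
print([t for t in raw if keep_example(t)])  # only the well-formed sentence survives
```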

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
