AI and You: Big Tech Goes to DC, Google Takes On ‘Synthetic’ Political Ads


Sen. Chuck Schumer invited Big Tech leaders to an AI Insight Forum in Washington, DC, as the US works to figure out how to regulate artificial intelligence. Closed-door meetings set for Sept. 13 will focus on the risks and opportunities ahead as the public continues to embrace tools like OpenAI’s ChatGPT and Google’s Bard.

Executives expected to attend make up a who’s who of tech’s (male) leaders. The CEOs include OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Satya Nadella, Alphabet/Google’s Sundar Pichai, Tesla’s Elon Musk and Nvidia’s Jensen Huang, according to Reuters. Schumer said the forum will be the first in a series of bipartisan discussions to be hosted this fall and that the talks will “be high-powered, diverse, but above all, balanced.”

“Legislating on AI is certainly not going to be easy,” Schumer said in Sept. 6 remarks posted on the Senate Democrats’ website. “In fact, it will be one of the most difficult things we’ve ever undertaken, but we cannot behave like ostriches sticking our heads in the sand when it comes to AI.”

“Our AI Insight Forums,” Schumer said, “will convene some of America’s leading voices in AI, from different walks of life and many different viewpoints. Executives and civil rights leaders. Researchers, advocates, voices from labor and defense and business and the arts.”

While the United Kingdom and European Union move forward with efforts to regulate AI technology, the White House last year offered up a Blueprint for an AI Bill of Rights, which is worth a read if you haven’t already seen it. It was created by the White House Office of Science and Technology Policy and has five main tenets. Americans, it says: 

  • Should be protected from unsafe or ineffective systems.

  • Should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

  • Should be shielded from abusive data practices via built-in protections, and should have agency over how data about them is used.

  • Should know that an automated system is being used and understand how and why it contributes to outcomes that impact them.

  • Should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.  

Here are some other doings in AI worth your attention.

Google wants ‘synthetic content’ labeled in political ads

With easy-to-use generative AI tools leading to an uptick in misleading political ads, as CNET’s Oscar Gonzalez reported, Google this week updated its political content policy to require that election advertisers “prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events.” 

Google already bans “deepfakes,” or AI-manipulated imagery that replaces one person’s likeness with that of another in an effort to trick or mislead the viewer. The updated policy goes further, covering AI used to manipulate or create images, video and audio more broadly, though it exempts minor edits that are inconsequential to an ad’s claims, such as “image resizing, cropping, color or brightening corrections, defect correction (for example, ‘red eye’ removal), or background edits that do not create realistic depictions of actual events.” The new policy is spelled out here.

What does all that actually mean? Given how easy it is to use tools like OpenAI’s ChatGPT and DALL-E 2 to create realistic content, the hope here is that by forcing content creators to say outright that their ad contains fake imagery, text or audio, they might be more careful in how far they take their manipulations. Especially if they want to share them on popular Google sites, including YouTube, which reaches more than 2.5 billion people a month.
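To give a sense of scale for that “easy to use” claim, here’s a minimal sketch of what generating a realistic image takes programmatically. It assumes the pre-1.0 openai Python package and an API key in the OPENAI_API_KEY environment variable; the prompt is purely illustrative, not from any actual ad.

    # A minimal sketch: generating a realistic image is a single API call.
    # Assumes the pre-1.0 openai Python package (pip install "openai<1.0")
    # and an API key in the OPENAI_API_KEY environment variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # DALL-E image generation: one call returns a URL to a rendered image.
    response = openai.Image.create(
        prompt="a photorealistic photo of a crowded campaign rally at sunset",
        n=1,
        size="1024x1024",
    )
    print(response["data"][0]["url"])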

Having a prominent label on an AI-manipulated ad — the label needs to be clear and conspicuous and in a place where it’s “likely to be noticed by users,” said Google — might help you and me suss out the truthfulness of the messages we’re seeing. (Though the fact that some people still think the 2020 election was stolen even though that’s untrue suggests humans want to believe what they want to believe, facts aside.)  

“The policy update comes as campaign season for the 2024 US presidential election ramps up and as a number of countries around the world prepare for their own major elections the same year,” CNN reported about the Google policy update. “Digital information integrity experts have raised alarms that these new AI tools could lead to a wave of election misinformation that social media platforms and regulators may be ill-prepared to handle.”

Google says it’s going after two things: First, it’s trying to stop political ads that make it seem “as if a person is saying or doing something they didn’t say or do,” and second, it’s aiming to prevent any ad “that alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place.” I think any reasonable person would agree that those aren’t good attributes of a political ad.

Critics may say this is just a small step in combating misinformation, but at least it’s a step forward. 

How AI will change the future of jobs

There’ve been many reports highlighting how genAI will lead to the end of certain jobs, rewrite others and create whole new categories of work, as I’ve noted in recapping numerous studies on the topic.

Well, here’s a July assessment from McKinsey and Co. that looks at “Generative AI and the future of work in America” through 2030, including which jobs will be in demand and which will go away. The 76-page report identifies “resilient and growing occupations,” as well as occupations that workers have already shifted away from (like customer service, sales and food services).

Two takeaways: 30% of the hours that are worked today could be automated by 2030, which means job descriptions will need to be modified to highlight how workers might use their time instead. And the new skill sets required in an AI-oriented world, at a time when the overall workforce in the US is shrinking, mean “employers will need to hire for skills and competencies rather than credentials, recruit from overlooked populations (such as rural workers and people with disabilities), and deliver training that keeps pace with their evolving needs,” McKinsey said.

Pitting performers against AI

No doubt you’ve read stories about how an AI might replace a job or do it differently than a human, especially when it comes to writing or to creating images. In August, The New York Times examined how AI is faring against comedians. TL;DR: Comedians don’t have to worry yet, since most AIs are mimics rather than creators and comedy “depends on novelty and surprise.”

That’s what CNET’s Katie Collins found as well after attending the world’s largest arts festival. Around 20% of the 40 shows she watched at Scotland’s Edinburgh Fringe festival were either “about AI, included references to it or were in some way co-created by it.”

“I went into the Edinburgh Fringe festival this year wondering if shows about AI could make use of the technology in novel ways that would impress me or give me a different perspective on the technology,” Collins wrote. 

“The short answer: AI has a long way to go before it can keep pace with human performers in delighting and amusing audiences … How we, as mere mortals, skewer AI, examine it, admire it and laugh at it will continue to evolve, too — and that’s important. For how can AI ever be expected to learn what we find truly funny unless we do?”

For the fun of it (pun intended), I asked ChatGPT what the funniest joke it’s ever heard was. Here’s what I got:

As an AI language model, I don’t have personal experiences or opinions, so I don’t have a “funniest” joke that I’ve heard. However, I can generate a random joke for you:

Why don’t scientists trust atoms?

Because they make up everything!

Hey ChatGPT, atoms aren’t the only ones who make things up.   
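If you want to run the same experiment outside the chat window, here’s a minimal sketch that asks the question through OpenAI’s API instead. It likewise assumes the pre-1.0 openai Python package and an OPENAI_API_KEY environment variable; expect a similarly groan-worthy answer.

    # Ask the chat model the same question posed in the ChatGPT web interface.
    # Assumes the pre-1.0 openai Python package and OPENAI_API_KEY is set.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "What's the funniest joke you've ever heard?"}
        ],
    )
    print(response["choices"][0]["message"]["content"])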

OpenAI is pulling in a billion, Apple is spending billions on AI

The popularity of OpenAI’s ChatGPT is putting the company on pace to hit $1 billion in annual sales — even as visitors to the chatbot declined for the third month in a row in August. 

The startup, which is backed by Microsoft, Khosla Ventures, A16z, Sequoia Capital, investor Reid Hoffman and others, is taking in about $80 million of revenue each month after earning $28 million for all of 2022 and losing $540 million developing GPT-4 and ChatGPT, according to The Information. The news site said OpenAI declined to comment. 

Where’s that money coming from? OpenAI makes money by licensing its AI technology to businesses and by offering ChatGPT subscriptions to individuals, who pay $20 a month for a “Plus” version the company says is faster and more secure than the free offering. The Information reported that as of March, OpenAI has between 1 million and 2 million individual subscribers.   
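A quick back-of-the-envelope check on those figures, using only the numbers reported above (these are rough estimates, not OpenAI disclosures):

    # Back-of-the-envelope math on The Information's reported figures.
    monthly_revenue = 80_000_000                 # ~$80M/month, per The Information
    annual_run_rate = monthly_revenue * 12
    print(f"Annual run rate: ${annual_run_rate:,}")   # $960,000,000, i.e. ~$1B pace

    # How much of that could ChatGPT Plus subscriptions explain?
    plus_price = 20                              # $20/month per Plus subscriber
    for subscribers in (1_000_000, 2_000_000):   # reported range as of March
        print(f"{subscribers:,} subscribers -> ${subscribers * plus_price:,}/month")
    # That's $20M-$40M/month, well short of ~$80M, so on these numbers the
    # bulk of revenue would come from licensing the technology to businesses.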

But the popularity of ChatGPT doesn’t necessarily mean big profits for OpenAI, Fortune noted. “Even if it does begin to turn a profit, OpenAI won’t be able to fully capitalize on its success for some time,” Fortune said. “The terms of its deal earlier this year with Microsoft give the company behind Windows the right to 75% of OpenAI’s profits until it earns back the $13 billion it has invested to date.”

Meanwhile, Apple is “expanding its computing budget for building artificial intelligence to millions of dollars a day,” The Information reported, adding that Apple has been working on developing a genAI large-language model for the past four years.

“One of its goals is to develop features such as one that allows iPhone customers to use simple voice commands to automate tasks involving multiple steps, according to people familiar with the effort,” The Information said. “The technology, for instance, could allow someone to tell the Siri voice assistant on their phone to create a GIF using the last five photos they’ve taken and text it to a friend. Today, an iPhone user has to manually program the individual actions.”

Right now I’d just be happy for Siri to understand what I’m saying the first time around.

Heart on My Sleeve’s Ghostwriter wants a record deal

Back in April, the music industry — and songwriters — were wringing their hands over a track called Heart on My Sleeve, put together by an unknown creator called Ghostwriter using faked AI versions of Drake’s and The Weeknd’s voices. Called a brilliant marketing move, the song racked up millions of plays before it was pulled down from streaming services. At issue wasn’t the musical quality of the song (meh), but the copyright and legal implications of who would get royalties for this AI-generated kind of copycat piece, which analysts at the time said was one of “the latest and loudest examples of an exploding gray-area genre: using generative AI to capitalize on sounds that can be passed off as authentic.”  

Now comes word that Ghostwriter and team have been meeting with “record labels, tech leaders, music platforms and artists about how to best harness the powers of A.I., including at a virtual round-table discussion this summer organized by the Recording Academy, the organization behind the Grammy Awards,” The New York Times reported this week.  

Ghostwriter posted a new track, called Whiplash, which uses AI vocal filters to mimic the voices of rappers Travis Scott and 21 Savage. You can listen to it on Twitter (now known as X) and watch as a person draped in a white sheet sits in a chair behind the message, “I used AI to make a Travis Scott song feat. 21 Savage… the future of music is here. Who wants next?”

“I knew right away as soon as I heard that record that it was going to be something that we had to grapple with from an Academy standpoint, but also from a music community and industry standpoint,” Harvey Mason Jr., who leads the Recording Academy, told the Times. “When you start seeing AI involved in something so creative and so cool, relevant and of-the-moment, it immediately starts you thinking, ‘OK, where is this going? How is this going to affect creativity? What’s the business implication for monetization?'”

A Ghostwriter spokesperson told the Times that Whiplash, like Heart on My Sleeve, “was an original composition written and recorded by humans. Ghostwriter attempted to match the content, delivery, tone and phrasing of the established stars before using AI components.”

TL;DR: That gray-area genre may turn green if record companies, and the hijacked artists, take the Ghostwriter team up on their ask to release these songs officially and work out a licensing deal.  

A who’s who of people driving the AI movement

Time magazine this week released its first-ever list of the 100 most influential people in AI. It’s a mix of business people, technologists, influencers and academics. But it’s Time’s reminder about humans in the loop that I think is the biggest takeaway. 

Said Time, “Behind every advance in machine learning and large language models are, in fact, people — both the often obscured human labor that makes large language models safer to use, and the individuals who make critical decisions on when and how to best use this technology.”   

AI phrase of the week: AI ethics 

With questions swirling about who owns AI-generated content, how AI should be used responsibly, and what guardrails are needed to prevent the technology from harming humans, it’s important to understand the debate around AI ethics. This week’s explanation comes courtesy of IBM, which also has a handy resource center on the topic: 

“AI ethics: Ethics is a set of moral principles which help us discern between right and wrong. AI ethics is a multidisciplinary field that studies how to optimize AI’s beneficial impact while reducing risks and adverse outcomes. Examples of AI ethics issues include data responsibility and privacy, fairness, explainability, robustness, transparency, environmental sustainability, inclusion, moral agency, value alignment, accountability, trust, and technology misuse.”

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
