Wall Street says buy the AI dip, Facebook paying creators for bizarre AI pics: AI Eye

by Andrew Fenton



This week:
Research on what AI can do well
Wall Street buys the AI dip
Otter.AI transcription research scandal
Recruiters and candidates both using AI
Facebook’s bizarre ‘AI slop’ industry in developing countries

It’s fair to say that the initial hype bubble around AI has burst, and people are starting to ask: What has AI done for me lately?

Too often, the answer is not much. A study in the Journal of Hospitality Marketing & Management found that products described as using AI are consistently less popular. The effect is even more pronounced with high-risk purchases like expensive electronics or medical devices, suggesting consumers have serious reservations about the reliability of current AI tech.

“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions,” said lead author and Washington State University clinical assistant professor of marketing Mesut Cicek.

Workplaces are also finding that AI technology’s enormous potential is not yet being delivered. A study from the Upwork Research Institute found that 77% of workers who use AI say the tools have decreased their productivity and added to their workload in at least one way.

And that’s among the businesses who actually use it: according to the US Census Bureau, only about 5% of businesses have used AI in the past fortnight.

Rune Christensen has had to row back on his big AI plans.

Rune Christensen’s grand Endgame plans (as outlined in Magazine earlier this year) to make MakerDAO genuinely autonomous by handing much of the coordination over to AI have also been shelved. Just one of the four new subDAOs scheduled to launch this year, SparkDAO, will actually do so, because AI governance isn’t up to the workload.

“AI is really great most of the time, but it also has a lot of hidden errors and small issues that make it unreliable,” Christensen told DL News.

This is the issue in a nutshell. AI may be correct 97% of the time, but that isn’t reliable enough for most critical operations. You wouldn’t get on a plane that only lands successfully 97% of the time, and you wouldn’t risk a mission-critical business process on it either.

So AI may have entered the “trough of disillusionment” phase of the Gartner Hype Cycle,* where interest wanes as experiments and implementations fail to deliver on the hype.

(*Research suggests most new technologies don’t play out as the hype cycle suggests. Think of it as a nice metaphor or something.)

Gartner Hype Cycle has been overhyped, but is still interesting. (X)

Human translators and AI’s reliability issues in critical fields

A good example of how minor errors undermine the technology can be seen in the translation industry, where human translators were thought to be going the way of the dodo. 

Despite computer translation being available for a decade and AI improving it substantially, the US Census Bureau found the number of people employed as interpreters and translators grew by 11% between 2020 and 2023. The US Bureau of Labor Statistics projects that number to grow another 4% over the next decade.

Massachusetts Institute of Technology economist Daron Acemoglu tells NPR that translation is “one of the best test cases” for AI’s ability to replace human employees, but the technology is just “not that reliable.”

And 100% reliability is crucial when translating legal or medical texts and in many other fields. 

“I don’t think you wanna fully rely on a computer if you’re a translator for the army and you’re talking to an enemy combatant or something like that,” says Duolingo CEO Luis von Ahn. “It’s still the case that computers make mistakes.” 



AI’s current strengths: Cost-effective solutions for simple tasks

While 97% correct isn’t good enough for life-or-death related tasks, it’s plenty good enough for a whole host of applications. Social media companies couldn’t survive without AI content moderation or ad recommendations — and near enough is good enough, considering the task would be 100% impossible otherwise.

New research from Model Evaluation and Threat Research (METR) found that AI has many limitations, but it can replace humans doing busywork on less complicated tasks that generally take a person 30 minutes or less to complete.

AI gets noticeably worse the more complex the task: the research suggests it can complete around 60% of the tested tasks that humans took less than 15 minutes to do, but only 10% of tasks that took humans four hours.

But when it works, it costs just 1/30th as much as hiring a human. 

Research suggests AI can do some things well (METR)

So part of the issue is that we just haven’t figured out how best to use this new technology just yet. It’s reminiscent of the early days of the internet when there were lots of hobby webpages and email, but the enormous revenue from online shopping was still a long way off.

Anthropic founder Jack Clark recently argued that even if we stopped AI development completely tomorrow, there would still be years’ or even decades’ worth of further improvements via capability overhang, applications and integration, efficiencies and learnings.

Ben Goertzel, founder of SingularityNET, made a similar point this week when he argued that, behind the scenes, businesses are developing all sorts of innovative new use cases.

Revenue won’t be driven by chatbot subscriptions, he said, but more “centrally about back-end integration of GenAI into all sorts of other useful software and hardware applications… [that’s] happening now, all over the place.”

So is the AI bubble bursting?

Since ChatGPT was first released, people have been waiting for the AI bubble to burst, and those calls have picked up considerably in the wake of this week’s global stock market plunge.

The AI-heavy Magnificent Seven stocks (Microsoft, Amazon, Meta, Apple, Alphabet, Nvidia and Tesla) saw $650 billion wiped from their value on Monday, with $1.3 trillion erased across three trading sessions. This led The Guardian, CNN, The Atlantic, the Financial Times and Bloomberg to all run pieces about the possible end of the bubble.


However, the main catalyst for the plunge appears to be the collapse of the yen carry trade, which caused a wave of forced selling, with recession fears in the US a secondary factor.

Admittedly, some of the air has come out of the AI bubble, too, mainly due to concerns that revenue does not justify expenditure. A Goldman Sachs report at the end of June called “Too Much Spend, Too Little Benefit” set the tone with its argument that current AI tech can’t solve the sort of complex problems that could justify the planned $1 trillion in capital expenditure over the coming years.

As Morgan Stanley analyst Keith Weiss said on Microsoft’s earnings call: “Right now, there’s an industry debate raging around the (capital expenditure) requirements around generative AI and whether the monetization is actually going to match with that.”

The idea was given credence by a report in The Information suggesting that despite OpenAI making up to $4.5 billion in revenue this year, it will still run up $5 billion in losses.

But even if OpenAI can’t raise enough money to survive — and AI critic Ed Zitron suggests it “needs to raise more money than anybody has ever raised” — its tech is not going to disappear, it will likely just get gobbled up by Microsoft. Character.ai is reportedly considering a similar deal with Google.

Any revenue shortfall affecting AI startups is likely to benefit the big companies with the deepest pockets.

Google, Microsoft and Meta spend big on AI

Meanwhile, Google, Microsoft and Meta remain all in on AI. Meta is spending up to $40 billion on capex this year, Microsoft is spending $56 billion — and expects to increase that in 2025 — while Google is spending at least $48 billion. If we see them start to rein those figures in, the bubble might be over.

The big guys see it as a long-term play. Microsoft Chief Financial Officer Amy Hood expects the data center investments in AI to pay off “over the next 15 years and beyond,” and Meta Chief Financial Officer Susan Li tips “returns from generative AI to come in over a longer period of time.”

The earnings reports suggest the big companies are doing just fine in the meantime. 

Alphabet reported 14% revenue growth and 29% growth in its AI-driven Google Cloud business. Microsoft saw 15% revenue growth, with 29% growth in its Azure cloud business. (To be fair to AI critics, cloud revenues are at risk if the bubble bursts.) Meta saw 22% revenue growth, noting that its AI recommendation systems are getting better at determining which ads to show to whom.

“This is enabling us to drive revenue growth and conversions without increasing the number of ads or, in some cases, even reducing ad load,” Li said on Meta’s earnings call.

So while the AI stock market bubble may deflate, the eventual winners are already laying the groundwork for their success. Google CEO Sundar Pichai said:  

“The risk of underinvesting is dramatically greater than the risk of overinvesting.”

Wall Street: Stock market correction is AI buying opportunity

Plenty of Wall Street firms see the current correction as a buying opportunity. On Monday, the BlackRock Investment Institute put out a research note saying it wasn’t bothered by the sell-off or recession fears:

“We keep our overweight to US equities, driven by the AI mega force, and see the sell-off presenting buying opportunities.”

Evercore ISI’s Julian Emanuel likened the current correction to the pullbacks seen during the 1994-1999 dotcom bull market. “The rationale for AI, in a world where the global workforce is aging rapidly, and efficiency will be critical to drive productivity enhancements, is greater than ever,” Emanuel wrote on Monday.

“We view the current ‘AI Air Pocket’ as an opportunity to gain exposure to a long term secular theme.”

Goldman Sachs US equity strategist David Kostin said that while we’ve seen sharp drops in the valuations of Big Tech companies (around 13% since July 10), earnings estimates are moving higher. “Valuations continue to reflect AI optimism despite investor concerns about the likely timing,” he told clients this week.


Otter.AI is a snitch

Two Yale researchers caused a storm of controversy for mocking and disparaging a Harlem community group leader whose views on supervised drug use centers differed from their own.

After Harlem’s Shawn Hill left the Zoom call interview for their research project, one of the researchers said: “Let’s try to get some more interviews of people who suck,” and suggested it would be better for the outcome of their research if they could “find someone who we can give enough rope to hang themselves with.”

Unfortunately for the researchers, the Otter.ai transcription service was still recording and sent all participants the transcript. Hill then publicized the remarks, which caused considerable controversy about how objective the research produced by the pair really was. The pair apologized, and Yale School of Medicine is reviewing the incident.

LinkedIn’s AI recruiters, AI candidates, AI reply guys

Recruiters have started using AI for all stages of the hiring process, from candidate sourcing to resume screening. Meanwhile, the Institute of Student Employers says that more than 90% of graduates are now using AI for their job applications.

This builds on research earlier this year from Canva and Sago, which found that 45% of job seekers are using the technology. Most hiring managers (90%) are fine with applicants using AI to help with the process, although about half (45%) believe it should be used “minimally.” 

In related news, Nilan Saha, who AI Eye interviewed regarding his AI reply guy service on LinkedIn (it makes anodyne but positive comments, so you don’t have to), has been kicked off the platform.  

“LinkedIn sent a cease and desist forcing us to stop. I emailed all my customers letting them know of the recent events. Obviously, there were a lot of folks who used the feature, and they are all churning.” 

You can still use his Magic Reply service on X, which apparently has no problems with bots.

AI-generated song charts in Germany

An AI-composed song, “Verknallt in einen Talahon,” has hit number 72 on the German music charts. Reportedly created using text-to-music generator Udio, it’s a jaunty mid-1970s-sounding pop song with shades of Abba. The lyrics satirize “Talahons,” a subculture of teen gangsters often of Arab descent who are apparently a thing on TikTok. 

The song is also charting even better on Spotify, reaching number 27.

Facebook pays creators for bizarre ‘AI slop’ images

A bizarre cottage industry creating weird AI-generated images like Shrimp Jesus, heart-rending pictures of starving people, and ludicrously perfect dream homes has sprung up in India, Vietnam and the Philippines.

404 Media reports that the industry comprises relatively poor people in developing countries who teach each other how to use Bing’s image generator to churn out dozens of AI images a day across as many accounts as possible, earning $400 to $1,000 a month from the Creators Bonus program.

One man who created a viral image of a train made from leaves received $431 from the image’s engagement. He said in a YouTube video, “People don’t even make this much money in a month.” 

“Made it with my own hands” is a telltale phrase used by AI Slop creators. (Facebook)

Some of the most viral bizarre images are created by chance. Prompts are passed around in Telegram groups and then badly translated into English for the image generator, which results in some super weird pictures.

According to the report, Facebook seems fine with paying people to generate “AI slop” as long as it improves engagement metrics. Most of the images also don’t contain any disclaimers that they are AI-generated.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
