Dive in — Google’s Sergey Brin says AI can synthesize top 1,000 search results |> Pokemon Go is ditching gaming for AI. |> The people who think AI might become conscious, and more
Expand your understanding of artificial intelligence with a curated daily stream of significant AI breakthroughs, compelling insights, and emerging trends that will shape tomorrow.
Google’s Sergey Brin says AI can synthesize top 1,000 search results
Google co-founder Sergey Brin says AI is transforming search from a process of retrieving links to one of synthesizing answers by analyzing thousands of results and conducting follow-up research. He explains that this shift enables AI to perform research tasks that would take a human days or weeks, changing how people interact with information online.
Another interesting insight he shared is that the algorithms are converging into a single model. In the past, Googlers described the search engine as multiple engines and multiple algorithms: thousands of little machines working together on different parts of search.
Now, Brin says, machine learning algorithms are converging into models that can do it all, with the lessons from specialist models integrated into the more general model.
Read more | SEARCH ENGINE JOURNAL
Pokemon Go made Niantic billions. Now it’s ditching gaming for AI.
In March, Niantic made a bombshell announcement: the developer of Pokemon Go — once the biggest mobile game ever in the U.S. — is abandoning games to go all-in on AI.
It has sold off its game development business to Saudi-owned game maker Scopely in a $3.5 billion deal and rebranded itself as Niantic Spatial. Instead of building augmented reality games for mobile phones, it will develop artificial intelligence models that analyze the real world for enterprise clients.
“It's kind of unusual for a successful company to do this cellular division — form two companies,” cofounder and CEO John Hanke told Forbes. “It became clear to us that the way to maximize the opportunity for both was to let each of them go and pursue its future.”
Now, Niantic is doubling down on its nascent Spatial platform, announced in November, which provides AI mapping tools that companies can use to chart out routes for robots or power augmented reality glasses. Just as large language models allow AI to generate text, Niantic’s Large Geospatial Models (LGMs) help AI understand, navigate and interact with physical spaces as a human would.
Read more | FORBES
The people who think AI might become conscious
The "Dreamachine", at Sussex University's Centre for Consciousness Science, is just one of many new research projects across the world investigating human consciousness: the part of our minds that enables us to be self-aware, to think and feel and make independent decisions about the world.
By studying the nature of consciousness, researchers hope to better understand what's happening within the silicon brains of artificial intelligence. Some believe that AI systems will soon become independently conscious, if they haven't already.
But what really is consciousness, and how close is AI to gaining it? And could the belief that AI might be conscious itself fundamentally change humans in the next few decades?
Read more | BBC
At Amazon, some coders say their jobs have begun to resemble warehouse work
Companies seem to be persuaded that, like the assembly lines of old, AI can increase productivity. A recent paper by researchers at Microsoft and three universities found that programmers’ use of an AI coding assistant called Copilot, which proposes snippets of code that they can accept or reject, increased a key measure of output by more than 25 percent.
At Amazon, which is making big investments in generative AI, the culture of coding is changing rapidly. In his recent letter to shareholders, Andy Jassy, the chief executive, wrote that generative AI was yielding big returns for companies that use it for “productivity and cost avoidance.”
Those changing norms have not always been eagerly embraced. Three Amazon engineers said that managers had increasingly pushed them to use AI in their work over the past year. The engineers said that the company had raised output goals and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it had been last year, but it was expected to produce roughly the same amount of code by using AI.
Read more | NEW YORK TIMES
AI is ‘breaking’ entry-level jobs that Gen Z workers need to launch careers, LinkedIn exec warns
In addition to an economy that’s slowing amid tariff-induced uncertainty, artificial intelligence is threatening the entry-level jobs that have traditionally served as stepping stones into careers, according to LinkedIn’s chief economic opportunity officer, Aneesh Raman, who likened the shift to the decline of manufacturing in the 1980s.
For example, AI tools are doing the types of simple coding and debugging tasks that junior software developers did to gain experience. AI is also doing work that young employees in the legal and retail sectors once did. And Wall Street firms are reportedly considering steep cuts to entry-level hiring.
Meanwhile, the unemployment rate for college graduates has been rising faster than for other workers in the past few years, Raman pointed out, though there isn’t definitive evidence yet that AI is the cause of the weak job market.
Read more | FORTUNE
AI is "changing the game" in the influencer space.
Sabri Suby, founder of digital marketing agency King Kong, told Yahoo Finance AI is "changing the game" in the influencer space.
"I would say that 30 to 40 per cent of the short-form form content that people are consuming now, they're not even aware that it is AI-generated," he said.
"Five months ago, it was very easily detectable, specifically with the lip syncing technology.
"But now a few companies can create these hyper-realistic AI influencers, and that's been the missing piece of the puzzle."
Read more | YAHOO FINANCE
Walmart unveils Sparky, a new AI shopping assistant, expanding personalized shopping experience
Sparky, accessible via the Walmart app, brings a cheerful and intuitive interface to shoppers, embodying Walmart's familiar yellow smiley face as a digital avatar.
Designed to streamline the shopping process, Sparky allows customers to search for products using natural language queries, such as “Find me a budget-friendly TV for my living room” or “What’s the best deal on running shoes?”
The AI quickly sifts through Walmart’s vast inventory to deliver tailored product recommendations, complete with pricing, availability, and customer reviews. Sparky also integrates with users’ shopping histories to offer personalized suggestions, making it easier to reorder household staples or discover new items based on past purchases.
Sparky introduces advanced features like real-time price monitoring and compatibility checks. For instance, if a customer is shopping for electronics, Sparky can confirm whether a device is compatible with existing products or suggest complementary accessories.
Additionally, Sparky’s conversational tone, infused with a touch of humor inspired by its smiley face persona, aims to make shopping feel more engaging and less transactional.
Read more | CORD CUTTERS
Some signs of AI model collapse begin to reveal themselves
In AI model collapse, AI systems trained on their own outputs gradually lose accuracy, diversity, and reliability. This happens because errors compound across successive model generations, distorting data distributions and causing "irreversible defects" in performance. The final result? As a 2024 Nature paper put it, "The model becomes poisoned with its own projection of reality."
Model collapse is the result of three compounding factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift away from the original data patterns. The second is loss of tail data: rare events are gradually erased from the training data until entire concepts are blurred. The third is feedback loops, which reinforce narrow patterns and produce repetitive text or biased recommendations.
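To make the tail-loss idea concrete, here is a minimal toy sketch in Python. It is an illustration under assumed settings (a simple Gaussian "model", arbitrary sample sizes and generation counts), not the Nature paper's actual experiments: a distribution is repeatedly refit to samples drawn from the previous generation's fit, and its estimated spread tends to shrink as rare tail values stop being sampled.

```python
# Toy sketch of model collapse via loss of tail data (illustrative only;
# the Gaussian setup, sample size, and generation count are assumptions,
# not the Nature paper's experimental design).
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 201):
    # "Train" a model on the current data: fit a Gaussian by estimating
    # its mean and standard deviation.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only this model's outputs, never the real data.
    data = rng.normal(loc=mu, scale=sigma, size=100)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: fitted std = {sigma:.3f}")

# Because each finite sample under-represents the tails, the fitted spread
# tends to drift downward generation after generation: rare values vanish
# first, a simple analogue of "loss of tail data".
```

Real LLM training pipelines are far messier, but the same dynamic applies: each generation trained on synthetic data sees a slightly narrower slice of reality than the one before it.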
In a recent Bloomberg Research study of Retrieval-Augmented Generation (RAG), the financial media giant found that 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, produced unsafe results when tested with more than 5,000 harmful prompts.
Read more | THE REGISTER
Google is burying the Web alive
From the very first use, however, AI Mode crystallized something about Google’s priorities and in particular its relationship to the web from which the company has drawn, and returned, many hundreds of billions of dollars of value. AI Overviews demoted links, quite literally pushing content from the web down on the page, and summarizing its contents for digestion without clicking.
Meanwhile, AI Mode all but buries them, not just summarizing their content for reading within Google’s product but inviting you to explore and expand on those summaries by asking more questions, rather than clicking out. In many cases, links are retained merely to provide backup and sourcing, included as footnotes and appendices rather than destinations.
This is typical of AI search tools and all but inevitable now that such things are possible. In terms of automation, it means companies like OpenAI and Google are mechanizing some of the “work” that goes into using tools like Google search, removing, when possible, the step where users leave their platforms and reducing, in theory, the time and effort it takes to navigate elsewhere when necessary.
Read more | NY MAG
Almost half of young people would prefer a world without internet, UK study finds
The research reveals that nearly 70% of 16- to 21-year-olds feel worse about themselves after spending time on social media. Half (50%) would support a “digital curfew” that would restrict their access to certain apps and sites past 10pm, while 46% said they would rather be young in a world without the internet altogether.
A quarter of respondents spent four or more hours a day on social media, while 42% of those surveyed admitted to lying to their parents and guardians about what they do online.
While online, 42% said they had lied about their age, 40% admitted to having a decoy or “burner” account, and 27% said they pretended to be a different person completely.
Read more | THE GUARDIAN
Unmissable AI
Your daily dose of curated AI breakthroughs, insights, and emerging trends. Subscribe for FREE to receive new posts. Thanks for reading!