Kentucky woman turns down $26M offer to convert her farm into a data center

Ida Huddleston and her family have farmed in northern Kentucky for many years, and they have turned down at least one multimillion-dollar offer in order to keep the land intact.

According to a recent WKRC report, a major artificial intelligence company offered $26 million for a portion of the farm to build a planned data center. Huddleston and her family rejected the offer, saying they don’t want a data center built nearby or on any of their 1,200 acres of farmland outside Maysville, Kentucky.

“They label us as foolish old farmers, but that’s not who we are,” Huddleston, 82, told Local 12 WKRC. “We recognize when our food sources are vanishing, our lands are vanishing, and we lack sufficient water — and that poison. Well, we’re aware of what we’ve been encountering.” She appeared to be referring to recent reports of water shortages and soil contamination near data center sites.

Speaking to the station, Huddleston said she doubts the data center would bring jobs or economic development to Mason County. “It’s a fraud,” she said.

According to WKRC, the unnamed company has since amended its plans and filed an application to rezone more than 2,000 acres in northern Kentucky, suggesting it may still build the data center next to Huddleston’s property.

Anthropic gives Claude Code more autonomy, but keeps it on a tight leash

For developers coding with AI today, “vibe coding” means either supervising every action the model takes or risking letting it run unchecked. Anthropic says its latest update to Claude Code aims to eliminate that trade-off by letting the AI decide for itself which actions are safe to take, within limits.

The move reflects a broader shift across the industry, as AI tools are increasingly designed to act without waiting for human approval. The challenge is balancing speed against oversight: too many guardrails slow work down, while too few can make systems dangerous and unpredictable. Anthropic’s new “auto mode,” currently in research preview (meaning it is available to try but not yet a finished product), is its latest attempt to strike that balance.

Auto mode uses AI-powered safeguards to evaluate each action before it runs, checking for risky behavior the user didn’t authorize and for signs of prompt injection, an attack in which malicious instructions are hidden inside content the AI is processing, causing it to take unintended actions. Actions judged safe proceed automatically; risky ones are blocked.
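
Anthropic hasn’t published how the safeguard reaches its verdicts, but the general shape it describes, a classifier that vets each proposed action before execution, can be sketched. The Python below is a hypothetical illustration only; none of its names come from Anthropic’s API.

```python
# Hypothetical sketch of a pre-execution safety gate for agent actions.
# All names are illustrative; this is not Anthropic's implementation.

from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    SAFE = "safe"
    RISKY = "risky"

@dataclass
class Action:
    tool: str       # e.g. "bash" or "edit_file"
    arguments: str  # the command or edit the agent wants to run

def classify(action: Action, user_request: str) -> Verdict:
    """Stand-in for a safety model that checks whether the action stays
    within the user's request and shows no signs of prompt injection."""
    if "rm -rf" in action.arguments:  # toy heuristic, not a real check
        return Verdict.RISKY
    return Verdict.SAFE

def run_auto_mode(actions: list[Action], user_request: str) -> None:
    # Safe actions proceed automatically; risky ones are blocked.
    for action in actions:
        if classify(action, user_request) is Verdict.SAFE:
            print(f"executing: {action.tool} {action.arguments}")
        else:
            print(f"blocked:   {action.tool} {action.arguments}")
```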

It effectively builds on Claude Code’s existing `--dangerously-skip-permissions` flag, which hands all decision-making to the AI, but adds a safety layer on top.

The feature follows a wave of autonomous coding tools from companies like GitHub and OpenAI that can carry out tasks on developers’ behalf, but it goes a step further by handing the AI itself the decision of when to ask the user for permission.

Anthropic has not disclosed the exact criteria its safety layer uses to separate safe actions from risky ones, something developers will likely want to understand before adopting the feature widely. (TechCrunch has asked the company for more details.)

Auto mode follows Anthropic’s launch of Claude Code Review, an automatic code reviewer meant to catch bugs before they hit the codebase, and Dispatch for Cowork, which lets users hand off tasks to AI agents that manage work on their behalf.

Auto mode is rolling out to Enterprise and API users soon. The company says it currently works only with Claude Sonnet 4.6 and Opus 4.6, and recommends running the feature in “isolated environments,” sandboxed setups kept separate from production systems so that any mistakes cause limited damage.

Spotify tests a new feature to stop AI-generated music from being credited to real artists

As AI-generated content floods music streaming services, Spotify is beta-testing a new “Artist Profile Protection” feature that lets artists review releases before they are published to their profiles. The tool is meant to give artists more control over which tracks are linked to their names on the platform.

“Tracks have been appearing on incorrect artist pages across streaming platforms, and the surge of easily produced AI tracks has exacerbated the issue,” Spotify wrote in a blog post. “That’s not the kind of experience we aim to provide artists on Spotify, which is why we’ve prioritized safeguarding artist identity for 2026. Today, we are unveiling a pioneering solution to a longstanding issue in streaming.”

Artists in the beta can review and either approve or decline releases submitted to Spotify. Only approved releases will appear on their artist profile, count toward their stats, and surface in listener recommendations.

Spotify’s announcement comes a week after Sony Music said it had demanded the removal of more than 135,000 AI-generated songs impersonating its artists on streaming platforms.

Spotify notes that while open distribution has made releasing music easier for independent artists, it also leaves room for mistakes and impersonation. Tracks can land on the wrong artist’s profile because of metadata errors, mix-ups between similarly named artists, or deliberate attempts to attach music to a specific artist.

“When this occurs, it can affect your catalog, your statistics, your Release Radar, and how audiences discover your music,” Spotify explains. “We understand how annoying this situation can be for both artists and their fans, and one of the primary requests we’ve received from artists over the past year is for enhanced visibility before music is attached to their name.”

Spotify stresses that while not every artist will need the feature, it is designed for those who have dealt with repeated misattributed releases, share a common artist name, or simply want more control over what appears on their profile.

Artists included in the beta will find the feature in their Spotify for Artists settings on both desktop and mobile web. Once they turn on Artist Profile Protection, they will receive email notifications whenever music attributed to them is submitted to Spotify, and they can then approve or decline each release.
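
Spotify hasn’t described the system’s internals, but the gating logic the post outlines (hold submissions to protected profiles until the artist responds) can be illustrated with a short hypothetical sketch; none of these names reflect Spotify’s actual systems.

```python
# Hypothetical sketch of the release-approval flow Spotify describes.
# Names and structure are illustrative, not Spotify's actual systems.

from dataclasses import dataclass, field

@dataclass
class ArtistProfile:
    name: str
    protection_enabled: bool = False
    published: list[str] = field(default_factory=list)
    pending: list[str] = field(default_factory=list)

def submit_release(profile: ArtistProfile, track: str) -> None:
    """Route an incoming release based on the profile's settings."""
    if profile.protection_enabled:
        profile.pending.append(track)  # hold it and notify the artist
        print(f"email to {profile.name}: approve or decline '{track}'?")
    else:
        profile.published.append(track)  # legacy path: publish directly

def review(profile: ArtistProfile, track: str, approved: bool) -> None:
    """Apply the artist's decision; only approved tracks go live,
    count toward stats, and surface in recommendations."""
    profile.pending.remove(track)
    if approved:
        profile.published.append(track)
```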

Playing with AI: My Sweet 16 is still pretty sweet

(NOTE: This piece is part of an ongoing series chronicling an experiment that uses AI to fill out NCAA brackets and pits it against years of human expertise. The first article in the series is here.) Through the first two rounds, I’m leading both my friends’ 60-person pool and my […]