'The Audacity' Is the Broligarchy Fall You've Anticipated

“Cheaters never face defeat, and those who lose never resort to cheating.”

This perverse guidance is what ultra-wealthy tech CEO Duncan Park (Billy Magnussen) imparts to his teenage daughter at the end of the second episode of The Audacity, the incisive new AMC series about Silicon Valley sociopaths, debuting April 12. It's abysmal parenting, but it encapsulates the rhetoric of Duncan's particular universe: it sounds ingeniously counterintuitive yet is utterly wrong, a flawed notion born of a privileged mediocrity striving to be perceived as brilliant.

Duncan embodies a familiar archetype. By now, plenty of films and television shows satirize and critique the One Percent as they devise ever more contemptible ways of treating their peers and underlings. Jonathan Glatzer, the creator of The Audacity, was also a producer and writer on Succession, and fans of that show will find familiar pleasures here.

In a similar vein, Mike Judge’s satire of startups Silicon Valley may come to mind when someone on the streets of Palo Alto insults Duncan for driving a Hummer, and he defiantly replies, “It’s an EV! I’m part of the solution! Bitch!”

Yet in Glatzer's story, paired with Magnussen's explosive performance, something genuinely fresh emerges. Could this be television's first true broligarch?

Duncan wears the puffer vest that has been industry standard for years, though his Zoomer haircut matches the younger cohort aligned with Elon Musk's DOGE. When a major deal between his company, Hypergnosis, and an Apple-like conglomerate falls through, he books a session with an on-demand ayahuasca shaman. He is offended when a diagnostic indicates he is neurotypical; he had always assumed he was on the spectrum. In his childishness and boundary-trampling, his conviction that market manipulation is the rational way to do business, and his deepening suspicion that his late ex-partner fueled his rise, Duncan embodies the crisis of masculinity running through American billionaire culture.

In contrast to some of its forerunners, The Audacity emphasizes the human toll resulting from this volatile blend of emotional ignorance and vast power.

Central to the narrative is a high-stakes entanglement between Duncan and his therapist, JoAnne Felder (Sarah Goldberg of Barry acclaim). You might expect a familiar setup akin to Tony Soprano and Dr. Melfi, with the unhealed narcissist unloading his issues on a professional paid to care. Instead, convinced JoAnne might divulge damaging details about his business tactics, Duncan coerces an employee into using an AI surveillance platform to spy on her, and discovers she is engaging in insider trading during her sessions with high-profile clients.

Both Duncan and JoAnne have significant concerns aside from the rapidly intensifying blackmail plot. Their children, for example. Duncan’s status-obsessed wife is pressuring their daughter to attend Stanford despite her lackluster credentials, while scolding her for eating. JoAnne has recently reconnected with a painfully shy son who barely knows her. With the parents absorbed in their game of cat and mouse, the children are left lost in a cutthroat private school where suicide is a common topic.

This is one of the many ways The Audacity addresses the repercussions of permitting individuals like Duncan to steer the world. It’s not solely about mergers and acquisitions here—in fact, money often plays a secondary role, except in how he believes it entitles him to destroy or manipulate at will. Lacking those resources, JoAnne quickly obtains a handgun, not greatly exaggerating the desperation of someone burdened with student loan debt in contrast to a Fortune 500 executive.

Apple is allegedly evaluating four concepts for its forthcoming smart glasses.

Apple plans to introduce its first smart glasses in 2027, with a possible reveal by the end of this year, according to Bloomberg's Mark Gurman.

Gurman has consistently covered the development of the firm’s smart glasses strategy, but he now has additional insights into their appearance — he mentioned that Apple is evaluating four designs and may possibly release some or all of them.

Reportedly, those designs feature a large rectangular frame, a sleeker rectangular frame (reminiscent of the glasses worn by CEO Tim Cook), a larger oval or circular frame, and a compact oval or circular frame. Apple is also exploring a variety of colors including black, ocean blue, and light brown.

In some respects, these glasses represent a retreat from an ambitious vision that initially envisioned Apple releasing a range of mixed and augmented reality devices — a strategy that has already faced challenges with product delays and the tepid response to the Vision Pro.

These glasses, on the other hand, seem to align more closely with Meta’s Ray-Ban glasses. They will not feature any displays but will enable users to capture photos and videos (Apple is reportedly using oval camera lenses), answer calls, listen to music, and engage with the long-awaited Siri upgrade.

X states it is decreasing payments to clickbait accounts

According to its head of product Nikita Bier, X is reducing payments to accounts that are “flooding the timeline” with clickbait and rapid news aggregation.

Bier stated on Saturday that “[a]ll aggregators had their payouts decreased to 60% this cycle” and mentioned that there will be an additional 20% cut in the upcoming payment cycle. He also noted that the social media platform owned by Elon Musk will be scaling back payments for “frequent bait posters who label every post as ‘🚨BREAKING’.”

“It became increasingly obvious: saturating the timeline with 100 stolen reposts and clickbait every day pushed out genuine creators and hindered the growth of new authors,” Bier remarked, adding, “X will never violate free speech or reach — but we will not pay for the manipulation of our program or users.”

Bier’s statements followed reports from several conservative news accounts indicating that they had received notifications from X about their accounts being demonetized.

Dominick McGee, known as Dom Lucre, commented, “🔥🚨BREAKING […] I was the first creator to be demonetized on this platform, and it lasted a whole year. I regained it, only to lose it again without any explanation. How could this happen? I am one of the most dedicated creators on X.”

McGee boasts 1.6 million followers on X. He gained fame by sharing conspiracy theories related to the 2020 presidential election, and although he was temporarily banned from X in 2023 and demonetized in 2024, he told The New York Times last year that he was earning $55,000 annually from the platform.

In reaction to Bier’s statement, McGee expressed that X appeared to be catering to “the grievances of individuals who have no intention of creating on this app.” While he acknowledged that labeling every post as breaking news would qualify as “clickbait,” he insisted, “I post hundreds of times, and very few are BREAKING.” (Some users on X seemed to disagree, providing a community note linking to 91 instances where he had used the term “BREAKING” in the previous week.)

Other users asserted that they were affected by X’s crackdown, with an account named PoliMath stating, “I appreciate what Nikita is attempting to achieve, but I just received my lowest payout in a long time, so I’m a bit anxious that I may have inadvertently fallen into the ‘aggregators’ category.” The account claimed they are “not an ‘aggregator’ by any means,” although they recognized having a paid partnership with Kalshi.

Bier’s remarks also came amidst renewed discussion regarding the value of the X platform, with data analyst and commentator Nate Silver lamenting the increased difficulty in redirecting traffic from X to other sites. He highlighted the prevalence of right-leaning accounts on X, stating, “I guess I had some intuition about how bad it was, but wow, this is the outcome of a broken ecosystem.”

Bier contended that Silver’s data is inaccurate, and Musk referred to his posts as “bullshit,” despite other analyses supporting Silver’s assertions.

TechCrunch Mobility: Who is siphoning off all the autonomous vehicle expertise?

Welcome back to TechCrunch Mobility, your hub for news and insights on the future of transportation and AI's growing role in it. To get this in your inbox, sign up here for free: just click TechCrunch Mobility!

Typically, I provide an analysis and then some insights (my insider information specially collated for you). However, today I am merging these because I have an abundance of insights regarding the ongoing talent battles.

Approximately seven years ago, a founder of a self-driving vehicle enterprise informed me that competing against giants like Waymo for talent was “like a knife fight.” Now it appears that a new hiring war is unfolding, as reported by several sources. This is driving base salaries (excluding equity and other perks) to a range of $300,000 to $500,000. 

Here's what's happening. The buzzy physical AI sector is crowded with robotics and defense tech firms seeking people with a very particular set of skills (to borrow a phrase from Liam Neeson). Those people are mostly employed at companies building self-driving trucks and robotaxis.

As these professionals are drawn to other fields — including defense — automotive manufacturers and startups are compelled to increase compensation or risk losing talent to higher-paying “physical AI” positions.

The ideal candidate for a self-driving vehicle company possesses hybrid abilities, combining traditional robotics with AI expertise, as stated by one founder. It’s this specialized knowledge of integrating AI into hardware such as humanoid robots, industrial machinery, and autonomous forklifts, along with construction, mining, and agricultural equipment, that has firms vying for skilled workers. 

Startups in defense technology are reportedly the most generous regarding salary, thanks to the Department of Defense’s ample funding. Positions seeking an applied researcher or AI enablement engineer (or similar roles) are currently in high demand. 

This is unlikely to adversely affect Waymo. As noted by one founder, Waymo is not sensitive to pricing. However, startups and established automakers, which have heavily invested in autonomous technologies, are likely to be most impacted, as a few sources have indicated. 

I foresee a two-fold subsequent effect. Automakers will struggle to retain engineers engaged in automated driving, likely leading to a departure. Concurrently, startups will need to secure more funding or become significantly more strategic in utilizing their resources.

A little bird

Image Credits: Bryce Durbin

Well, you've already gotten this week's little bird. Scroll up! But I'm keeping this adorable graphic as a reminder to reach out, call, or email with tips!

Have a tip for us? Email Kirsten Korosec at [email protected] or my Signal at kkorosec.07, or email Sean O’Kane at [email protected].

Deals!

Image Credits: Bryce Durbin

Recall in 2016 when the mention of “self-driving” on a pitch deck seemingly guaranteed a term sheet? As the vibes of 2016 have flowed into 2026, founders and investors have transitioned. Now, as you’ve likely observed, the focus is on physical AI, an expansive category that encompasses more than just robotaxis and autonomous trucks.  

The venture firm Eclipse, based in Palo Alto, has positioned itself at the forefront of the physical AI movement and has secured an additional $1.3 billion for investment. This fresh $1.3 billion is divided between a $591 million fund for early-stage incubation and a fund directed towards growth startups. 

I spoke with Eclipse partner Jiten Behl regarding the fund and the likely direction of those investments. I was especially curious about his perspective on Eclipse’s role in nurturing startups. Eclipse has yet to issue new checks, but Behl mentioned the firm is planning to incubate additional startups and expressed, “We’re definitely working on a couple of really exciting ideas.”

So, stay tuned. And you can read the full story here.

Other deals that caught my eye …

Candela, a Swedish electric hydrofoil firm, secured a 20-boat deal with Norwegian operator Boreal. Meanwhile, Candela founder and CEO Gustav Hasselskog is stepping down; Sofia Graflund is the new CEO, while Hasselskog will take on the role of executive chairman. 

Hermeus, a Los Angeles-based defense startup focusing on unmanned aircraft, raised $350 million at a $1 billion valuation. This funding consists of $200 million in equity led by Khosla Ventures, with the remaining $150 million sourced as debt.

Sora Fuel, a startup based in Cambridge, Massachusetts, specializing in sustainable aviation fuel, raised $14.6 million in a funding round co-led by Spero Ventures and Inspired Capital, as reported by Axios. 

Transportation Secretary Sean Duffy mentioned during a CNBC interview that there is potential for airline mergers in the United States.

Notable reads and other tidbits

Image Credits: Bryce Durbin

Avride has become the latest autonomous vehicle company to draw criticism from residents unhappy with the behavior of its robotaxis. This incident involved an autonomous vehicle (which had a human safety operator) that ran over and killed a mother duck in the Austin, Texas, neighborhood of Mueller Lake. “It didn’t slow down or hesitate at all, just steamrolled through,” stated a witness. Read the full account to discover how Avride is responding. 

Fuel prices are not the sole driver behind the surge in used EV sales. 

John Deere has settled for $99 million to resolve litigation regarding “right to repair” currently pending in the U.S. District Court for the Northern District of Illinois. Wired offers a solid analysis on the subject and its implications. 

If you missed the announcement, startups along with Big Tech companies are significantly focused on physical AI and automation. Mariana Minerals, which is centered on the mining sector, is among them. Senior reporter Sean O’Kane interviewed founder Turner Caldwell, a former Tesla engineer who established the startup in 2024, about the company’s recent collaboration with autonomous vehicle tech firm Pronto (and indeed, this is the Pronto founded by Anthony Levandowski that was recently purchased by Uber co-founder Travis Kalanick’s startup Atoms). 

Recall when Elon Musk remarked that a smaller, more affordable $25,000 EV was pointless and frivolous? Well, according to sources from Reuters, Tesla is now developing an entirely new smaller, budget-friendly electric SUV.

Volkswagen will cease the production of the all-electric ID.4 at its U.S. facility in Chattanooga, Tennessee. Its substitute? High-volume models like the forthcoming petrol-operated Atlas SUV.

The ID.4 will remain available to U.S. consumers until current stock is depleted. VW informs me that it should extend into 2027.

Meanwhile, Volkswagen subsidiary MOIA America is making progress in the autonomous vehicle arena. MOIA America and Uber have begun testing autonomous microbuses in Los Angeles in anticipation of a robotaxi service they plan to launch by late 2026. Caveat! At the service inception, it will not be driverless. The firm anticipates removing the human safety operator from the vehicles in 2027. Additionally, the term “microbus” might be an exaggeration; these vehicles will only seat four passengers.

Waymo and Waze have initiated a data-sharing pilot project that will transfer pothole data gathered by robotaxis to a free Waze platform designed for municipalities. Any city or state (or standard Waze user) where Waymo operates will be able to access this data as the program expands.

In other Waymo developments, the Alphabet-owned company has launched its robotaxi service to the public in Nashville. Eleven cities and counting.

From LLMs to illusions, here’s an easy reference to familiar AI terminology

Artificial intelligence is a deep and convoluted world. The researchers working in it often rely on jargon to explain what they're doing, which means we frequently have to use those technical terms in our coverage of the AI industry. That's why we put together a glossary defining some of the most important words and phrases that appear in our articles.

This glossary will be consistently updated to include new terms as experts persistently unveil innovative techniques to advance artificial intelligence while recognizing budding safety concerns.


Artificial general intelligence, abbreviated as AGI, is an ambiguous concept. It typically pertains to AI that exhibits greater proficiency than the average human across various, if not all, tasks. Sam Altman, the CEO of OpenAI, recently characterized AGI as the “equivalent of a typical worker you could employ.” In contrast, OpenAI’s charter delineates AGI as “highly self-sufficient systems that surpass humans in most economically valuable endeavors.” Google DeepMind’s interpretation varies slightly from these descriptions; the lab perceives AGI as “AI that is at least as competent as humans in most cognitive functions.” Feeling perplexed? No need to be concerned — the leading experts in AI research find it confusing as well.

An AI agent refers to a mechanism that employs AI technologies to execute a sequence of tasks on your behalf — surpassing the capabilities of a basic AI chatbot — including activities like processing expenses, securing tickets or reservations at a restaurant, or even writing and managing code. Nonetheless, as we have previously articulated, this evolving space has many components, meaning “AI agent” may signify various things to different individuals. The infrastructure is still under development to fulfill its intended functionalities. However, the fundamental idea suggests an autonomous system that can utilize multiple AI frameworks to perform multistep tasks.

Faced with a simple question, a human brain can answer without much deliberation: queries like "which animal is taller, a giraffe or a cat?" But in many cases, you need to write things down to reach the correct answer, because the solution involves intermediate steps. For instance, if a farmer has chickens and cows that together have 40 heads and 120 legs, you may need to write out a simple pair of equations to get the answer (20 chickens and 20 cows).
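The farmer puzzle above reduces to two linear equations, and the intermediate step can be checked with a few lines of code (a minimal sketch using only the numbers from the example):

```python
# Farmer puzzle from the text: chickens (2 legs) and cows (4 legs),
# 40 heads and 120 legs in total.
heads, legs = 40, 120

# x + y = 40 and 2x + 4y = 120  =>  y = (legs - 2 * heads) / 2
cows = (legs - 2 * heads) // 2
chickens = heads - cows
print(chickens, cows)  # 20 20
```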

Within the AI framework, chain-of-thought reasoning for large language models involves deconstructing a problem into smaller, intermediate steps to enhance the quality of the final result. Although it generally takes longer to arrive at an answer, the likelihood of accuracy is higher, especially in logical or coding scenarios. Reasoning models are derived from traditional large language models and refined for chain-of-thought processing through reinforcement learning.

(See: Large language model)

Though somewhat of an ambiguous phrase, compute generally alludes to the essential computational power facilitating AI models’ functionality. This form of processing energizes the AI sector, empowering it to train and deploy its potent models. The term often serves as a shortcut for the types of hardware supplying this computational capacity — such as GPUs, CPUs, TPUs, and various infrastructure that constitute the foundation of contemporary AI.

A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered artificial neural network (ANN) structure. This allows them to make more complex correlations than simpler machine learning systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.

Deep learning AI models can autonomously identify significant features in data, eliminating the need for human programmers to outline these characteristics. The design also accommodates algorithms capable of learning from mistakes and, through repetition and modification, refine their outputs. However, deep learning approaches necessitate vast data sets to produce favorable outcomes (millions or more). Additionally, they typically require longer training periods compared to basic machine learning techniques — leading to increased development costs.
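The layered structure can be sketched in plain Python as two stacked weighted-sum layers with a nonlinearity between them (an illustrative toy with made-up weights, not a trained model):

```python
# Minimal two-layer forward pass: each layer computes weighted sums of
# the previous layer's outputs, then applies a nonlinearity (ReLU).
# The weights here are arbitrary illustrative numbers, not trained values.
def relu(v):
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    return [sum(x * w for x, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]                                        # input features
h = relu(layer(x, [[0.5, -0.3], [0.8, 0.1]], [0.0, -0.5]))  # hidden layer
y = layer(h, [[1.0, -1.0]], [0.2])                    # output layer
print([round(v, 2) for v in y])  # [-0.3]
```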

(See: Neural network)

Diffusion is the technology central to many artistic, musical, and text-generating AI models. Drawing inspiration from physics, diffusion systems gradually “destruct” data structures — for instance, images, songs, etc. — by incorporating noise until they become unrecognizable. In physics, diffusion is spontaneous and irreversible — sugar dispersed in coffee cannot revert to crystalline form. However, diffusion mechanisms in AI strive to master a “reverse diffusion” technique to recover the obliterated data, acquiring the capability to retrieve information from noise.
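The forward, noise-adding half of that process can be sketched in a few lines (an illustrative toy, not an actual diffusion model; the signal values and noise scale are arbitrary):

```python
import random

random.seed(0)  # deterministic for illustration

# Forward diffusion sketch: repeatedly add small Gaussian noise to a
# "signal" until the original values are buried. Generative diffusion
# models learn to run this corruption process in reverse.
signal = [1.0, 2.0, 3.0, 4.0]
for step in range(1000):
    signal = [x + random.gauss(0, 0.1) for x in signal]

# After many steps the values no longer resemble [1, 2, 3, 4].
print([round(x, 1) for x in signal])
```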

Distillation is a methodology employed to extract knowledge from a large AI model using a ‘teacher-student’ framework. Developers send inquiries to a teacher model and log the responses. Outputs are occasionally assessed against a dataset for accuracy. These results are subsequently utilized to instruct the student model, which learns to emulate the teacher’s behavior.

Distillation can facilitate the creation of a more compact, efficient model rooted in a larger model with minimal distillation loss. This method is likely how OpenAI crafted GPT-4 Turbo, a quicker variant of GPT-4.
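The teacher-student idea can be illustrated with a toy example, in which a tiny "student" linear model is fitted to the logged outputs of a "teacher" function (an illustrative sketch only; real distillation trains a smaller neural network on a large model's outputs):

```python
# Toy distillation: a "student" linear model y = a*x + b is fitted by
# least squares to outputs logged from a "teacher" function, which
# stands in for a large model answering queries.
def teacher(x):
    return 2 * x + 1  # the behavior the student must learn to mimic

samples = [(x, teacher(x)) for x in range(10)]  # logged query/response pairs

n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
print(round(a, 2), round(b, 2))  # 2.0 1.0  (student recovers the teacher)
```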

While all AI enterprises utilize distillation internally, some may have used it to keep pace with leading models. Distillation from a competitor typically infringes upon the terms of service of AI APIs and chat assistants.

This denotes the additional training of an AI model to enhance performance for a more defined task or domain than what was initially prioritized in its training — usually by introducing new, specialized (i.e., task-oriented) data. 

Numerous AI startups are adopting large language models as a foundation to develop a commercial product while striving to enhance utility for a specific industry or task by augmenting preliminary training cycles with fine-tuning based on their unique domain-specific knowledge and skills.

(See: Large language model [LLM])

A GAN, or Generative Adversarial Network, is a type of machine learning framework fundamental to significant advancements in generative AI concerning the generation of realistic data — including (but not limited to) deepfake technology. GANs employ a duo of neural networks, where one utilizes its training data to generate an output that is evaluated by the other model. This secondary discriminator model effectively classifies the generator’s output — enabling improvement over time. 

The GAN setup functions as a competition (hence “adversarial”) — with both models essentially programmed to outdo one another: the generator aims to pass its output unnoticed by the discriminator, while the discriminator strives to detect artificially created data. This organizational contest can refine AI outputs to appear more authentic without requiring extra human intervention. While GANs are most effective for narrower applications (like realistic image or video creation), they are not well-suited for general-purpose AI.

Hallucination is the term favored by the AI industry to describe instances where AI models generate erroneous information — literally fabricating incorrect data. This poses a significant challenge to the quality of AI. 

Hallucinations yield generative AI outputs that can mislead users and may even lead to real-world dangers — with potentially harmful effects (consider a medical inquiry that returns dangerous advice). For this reason, most generative AI tools now include disclaimers urging users to verify AI-generated responses, even though these notes are usually less conspicuous than the information the tools provide with a simple request.

The issue of AIs producing fictitious information is believed to stem from inadequacies in training data. This is particularly challenging for general-purpose generative AI — also known as foundational models — as it appears tough to resolve. There simply isn’t enough data available to train AI models to accurately answer every conceivable question. TL;DR: we have not yet created a deity. 

Hallucinations are driving a shift towards more specialized and/or vertical AI models — meaning domain-specific AIs that necessitate narrower expertise — as a means to diminish the chances of knowledge gaps and curb misinformation risks.

Inference is the process of running an AI model: using a trained model to make predictions or draw conclusions from data. To be clear, inference cannot happen without training; a model must first learn patterns in a data set before it can usefully extrapolate from them.

Various types of hardware can conduct inference, ranging from smartphone processors to robust GPUs to custom-built AI accelerators. However, not all can run models with equal efficiency. Very large models would require an inordinate amount of time to produce predictions on, for example, a laptop compared to a cloud server with specialized AI chips.

(See: Training)

Large language models, or LLMs, are the AI frameworks employed by prominent AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat. When you interact with an AI assistant, you are engaging with a large language model that processes your inquiry directly or with assistance from various available tools, such as web browsing or code analysis.

AI assistants and LLMs may bear different names. For instance, GPT is OpenAI’s large language model whereas ChatGPT is the AI assistant software.

LLMs are intricate neural networks composed of billions of numerical parameters (or weights, as elaborated below) that discern the relationships between words and phrases, creating a representation of language, a sort of multidimensional map of terms.

These models originate from encoding the patterns they detect in countless books, articles, and transcriptions. Upon prompting an LLM, the model generates the most likely pattern suitable for the query. It then assesses the most probable next word following the previous one based on the preceding context. Repeat, repeat, and repeat.
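That next-word loop can be illustrated with a toy bigram predictor, which counts which word follows which in a tiny corpus (a drastic simplification of an LLM's learned statistics; the corpus here is invented):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word bigrams in a small corpus and,
# given a word, return its most frequent follower. Real LLMs encode
# these patterns in billions of parameters, not a count table.
corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    return following[word].most_common(1)[0][0]

print(predict("the"))  # cat
```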

(See: Neural network)

Memory cache refers to a crucial process that enhances inference (which is the mechanism through which AI generates responses to user inquiries). Essentially, caching is an optimization strategy, intended to improve inference efficiency. AI inherently relies on intensive mathematical computations, and each time those calculations are made, they expend more power. Caching aims to minimize the number of calculations a model may need to perform by retaining specific computations for future user inquiries and tasks. There exist various types of memory caching, one of the more recognized being KV (or key value) caching. KV caching operates within transformer-based models, enhancing efficiency, and expediting results by decreasing the time (and algorithmic effort) required to generate responses to user inquiries.   

(See: Inference)  

A neural network signifies the multi-layered algorithmic framework that supports deep learning — and more broadly, the entire surge in generative AI tools following the introduction of large language models. 

Although the concept of taking cues from the densely interconnected pathways of the human brain as a design paradigm for data processing algorithms dates back to the 1940s, it was the much more modern advent of graphical processing hardware (GPUs) — prompted by the video gaming industry — that truly unlocked the potential of this theory. These chips proved highly suitable for training algorithms with a density of layers previously unattainable — allowing neural network-based AI systems to achieve significantly improved performance across various domains, such as speech recognition, autonomous driving, and drug development.

(See: Large language model [LLM])

RAMageddon is the playful new term for a rather serious trend affecting the tech industry: an escalating scarcity of random access memory, or RAM chips, which power nearly all tech products we utilize in our everyday lives. As the AI sector has flourished, prominent tech firms and AI research labs — all competing for the most powerful and efficient AI — are procuring so much RAM to support their data centers that supplies for the rest of us are dwindling. This supply bottleneck consequently leads to increased prices for what remains.

This encompasses sectors like gaming (where major companies have had to elevate prices on consoles due to the difficulty of sourcing memory chips for their products), consumer electronics (where the memory shortage could result in the largest decline in smartphone shipments in over a decade), and general enterprise computing (as those businesses struggle to acquire enough RAM for their data operations). The price surge is only expected to stabilize once the dreaded shortage concludes but, unfortunately, there’s currently little indication that this will happen in the near future.  

Crafting machine learning AIs involves a procedure known as training. In basic terms, this pertains to feeding data into the model so that it can learn patterns and generate valuable outputs.

The situation can become somewhat philosophical at this stage of the AI development process — since, prior to training, the mathematical framework that serves as the foundation for establishing a learning system consists merely of layers and arbitrary numbers. It is only through training that the AI model truly takes form. Essentially, it entails the mechanism through which the system responds to data characteristics that enable it to modify outputs towards a desired goal — whether that goal is recognizing images of cats or generating a haiku upon request.

It’s crucial to highlight that not all AI necessitates training. Rules-based AIs, programmed to adhere to manually defined instructions — like traditional chatbots — do not undergo a training phase. However, such AI systems are likely to be more limited than those that are well-trained and self-learning.

Nonetheless, training can be costly because it necessitates substantial input data — and typically, the amount of data required for such models has been on an upward trajectory.

Hybrid approaches can sometimes be used to speed up model development and contain costs. One example is performing data-driven fine-tuning on a rules-based AI, which can require less data, compute power, energy, and algorithmic complexity than building from scratch.

(See: Inference)

In the realm of human-machine interaction, there are evident hurdles. Humans communicate using natural language, while AI systems execute tasks and respond to inquiries through complex algorithmic processes informed by data. In their most basic definition, tokens represent the fundamental components of human-AI communication, as they comprise discrete data segments that have been processed or produced by an LLM. 

Tokens are generated through a process called “tokenization,” which breaks raw data down into discrete units an LLM can understand. Comparably to how a compiler converts human-readable source code into instructions a computer can execute, tokenization translates a user’s query into a form the AI system can work with, enabling it to formulate a response.
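A toy sketch of the idea, assuming a naive whitespace tokenizer. Production LLM tokenizers (such as byte-pair encoding) split text into subword units, so this is a deliberate simplification.

```python
def tokenize(text: str) -> list[str]:
    # Naive tokenizer: lowercase the text and split on whitespace.
    return text.lower().split()

def build_vocab(tokens: list[str]) -> dict[str, int]:
    # Map each distinct token to a numeric ID the model can work with.
    return {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

tokens = tokenize("Tokens are the units of human-AI communication")
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]
print(ids)  # prints [0, 1, 2, 3, 4, 5, 6]: one ID per distinct token
```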

There are various categories of tokens, including input tokens (derived from the human user’s prompt and fed into the model), output tokens (generated as the LLM replies to the user’s request), and reasoning tokens (intermediate tokens a model produces while working through longer, more intensive tasks).

In enterprise AI, token usage also determines expenses. Since tokens correspond to the volume of data a model processes, they have become the unit by which the AI sector monetizes its offerings: most AI companies charge per token for LLM usage. Thus, the more tokens a business consumes while using an AI program (ChatGPT, for instance), the greater its financial obligation to its AI service provider (OpenAI).
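A back-of-the-envelope sketch of per-token billing. The per-million-token prices below are invented placeholders, not any provider’s actual rates.

```python
# Hypothetical prices in dollars per million tokens (not real rates).
PRICE_PER_1M_INPUT = 2.50
PRICE_PER_1M_OUTPUT = 10.00

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate a monthly bill from token counts."""
    return (input_tokens / 1_000_000 * PRICE_PER_1M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT)

# A business sending 40M input tokens and receiving 10M output tokens:
print(monthly_cost(40_000_000, 10_000_000))  # prints 200.0
```

Output tokens are typically priced higher than input tokens, which is why long model responses (and heavy reasoning workloads) dominate many bills.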

A technique whereby a previously trained AI model serves as the starting point for developing a new model aimed at a different but typically related task, allowing previously acquired knowledge to be reapplied.

Transfer learning can foster efficiency gains by streamlining model development. It can also be advantageous when data for the model’s target task is scarce. However, the approach has its limitations: models that depend on transfer learning for generalized capabilities will likely need further training on additional data to perform well in their specific area of focus.

(See: Fine tuning)
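A minimal sketch of transfer learning using a toy one-parameter gradient-descent model: a model “pretrained” on a plentiful related task (y = 3x) is reused as the starting point for fine-tuning on a scarce dataset for a new task (y = 3.2x). All numbers are illustrative.

```python
import random

def train(w: float, data: list[tuple[float, float]],
          lr: float = 0.01, epochs: int = 100) -> float:
    # One-parameter gradient descent over (input, target) pairs.
    for _ in range(epochs):
        for x, target in data:
            w -= lr * (w * x - target) * x
    return w

# "Pretraining" on a large dataset for a related task (y = 3x).
pretrain_data = [(float(x), 3.0 * x) for x in range(1, 6)]
w_pretrained = train(random.uniform(-1, 1), pretrain_data)

# Transfer: start from the pretrained weight and fine-tune on a much
# smaller dataset for the new task (y = 3.2x), instead of from scratch.
finetune_data = [(1.0, 3.2), (2.0, 6.4)]
w_transferred = train(w_pretrained, finetune_data, epochs=200)

print(round(w_transferred, 2))  # prints 3.2
```

Starting near the right answer is the whole point: the fine-tuning stage needs only two examples here, because the pretrained weight already encodes most of what the new task requires.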

Weights are fundamental to AI training, as they dictate the significance (or weight) assigned to various features (or input variables) in the data utilized for training the system — thereby shaping the AI model’s output. 

In other words, weights are numerical parameters that determine what is most pertinent in a dataset for the training task at hand. They do this by multiplying inputs. Model training typically begins with randomly assigned weights, which are then adjusted as the model works toward an output that more closely matches the target.

For instance, an AI model intended to forecast housing prices from historical real estate data for a given location might include weights for attributes such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, and whether it has parking, a garage, and other relevant features.

Ultimately, the weights the model attaches to each of these inputs reflect their influence on property valuation, according to the provided dataset.
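The housing example can be sketched as a weighted sum. The feature values, weights, and bias here are invented for illustration; in practice a model would learn the weights from data.

```python
# Features for one property: bedrooms, bathrooms, detached (1/0), parking (1/0).
features = [3, 2, 1, 1]

# Hypothetical learned weights: each feature's contribution to price.
weights = [40_000, 25_000, 60_000, 15_000]
bias = 100_000  # baseline price when every feature is zero

# The model's prediction is the bias plus the weighted sum of features.
predicted_price = bias + sum(w * x for w, x in zip(weights, features))
print(predicted_price)  # prints 345000
```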


At the HumanX conference, all discussions revolved around Claude

At the HumanX AI conference taking place in San Francisco this week, thousands of tech enthusiasts gathered at the Moscone Center, where the focus was on how agentic AI is transforming business practices. Agents that streamline business and programming tasks are now being implemented across various sectors — primarily through chatbots aimed at both enterprises and consumers.

Naturally, I inquired about which chatbot was leading in popularity, and I repeatedly heard the same name: Claude.

Anthropic received mention in numerous panels throughout the week, and it was also a topic during my conversations with vendors while exploring the exhibition floor. The chatbot that didn’t come up frequently? ChatGPT. One vendor emphasized that he and his team relied heavily on Claude, expressing his sentiment that ChatGPT and OpenAI had seen a decline — or, as commonly phrased online, “fell off.”

Recently, this doesn’t seem to be an uncommon viewpoint. In fact, it remains uncertain what could change the belief that, despite a recent $122 billion funding influx and its impending IPO, OpenAI may have lost its edge—or at least, appears increasingly uncertain about its next steps.

Part of the issue could be a belief that the company lacks a clear direction. Last month, OpenAI discontinued several long-standing side projects (including its AI video generator Sora and a controversial plan to introduce a “sexy” version of ChatGPT), instead concentrating on business and coding solutions. Meanwhile, various developments, including a recent New Yorker article questioning the trustworthiness of the company’s CEO, Sam Altman, have sparked some negative attention surrounding the firm. The organization’s affiliations with the Trump administration haven’t garnered favor either, nor has its decision to incorporate advertising into ChatGPT.

During one of HumanX’s discussions, Sierra co-founder and CEO Bret Taylor (who also serves as chairman of OpenAI’s board) defended Altman when questioned by Alex Heath about the New Yorker article. “I believe Sam is one of the most prominent leaders and executives globally,” Taylor remarked. “If you seek out critics of him, you will find them, and they will be quite vocal,” he continued, adding: “I find Sam to be outstanding. I consider him an exceptional AI leader, and, as someone who has worked with him, I greatly trust his character.”

The controversies and shifts may make OpenAI appear reactive rather than proactive, responding to events rather than shaping them. However, in terms of visibility and revenue, OpenAI and Anthropic seem closely matched, or at least that is the impression, with some data indicating that Anthropic is gaining traction among business clients. The Wall Street Journal recently assessed their financials, calling the two firms “the fastest-growing enterprises in tech history.” In that light, perhaps “falling off” for OpenAI simply means it is no longer the undisputed leader. It now faces competition, which is standard in most sectors.

If anything, it remains evident that OpenAI is committed to maintaining its dominance. This week, the company revealed a new $100 subscription plan for ChatGPT that offers considerably more access to Codex, its coding utility. The move appears aimed at broadening use of the tool and drawing users away from Claude Code.

During a HumanX discussion with Bloomberg’s reporter Rachel Metz, OpenAI CTO of B2B applications Srinivas Narayanan highlighted how rapidly the technological landscape has been evolving.

“We are in a remarkable moment in technology, where each month, and sometimes daily, we all anticipate something new,” Narayanan stated. Highlighting agentic coding as an example, he added, “We knew AI would influence software engineering; people have been using assistive coding for the past year, but even in just the last few months, the entire field has shifted dramatically.”

Agentic achievements seem to be a focal point for the tech community at present since other AI applications (such as creative uses) have yet to fully materialize. Nevertheless, the extent of tasks that companies have started to delegate to their new automated assistants is somewhat astonishing—and, as Narayanan noted in his comments, all of this has occurred in a relatively brief time frame. In such an unpredictable landscape, the future remains open-ended.

Slate Auto: Everything you need to know about the Bezos-backed EV startup

In April 2025, a new venture named Slate Auto emerged from stealth, making waves in the automotive sector. Backed by Jeff Bezos, the startup aimed to develop an affordable, highly customizable electric pickup truck, and had been operating quietly for three years in Troy, Michigan, the home turf of major automakers like Ford and General Motors.

TechCrunch was the first to report on this revelation, sharing details in early April regarding the company’s existence, its connections with the Amazon founder, and its intriguing business model. The period between our initial report and Slate’s official unveiling in late April was filled with activity, as prototypes of the startup’s truck started appearing across California.

Slate represents a deviation in the U.S. electric vehicle landscape, where failures, bankrupted companies, and product revisions have become prevalent. Although its current investors, leadership team, initial offering, and business plan present a persuasive future, the path ahead is still fraught with possible challenges as production aims for a late 2026 rollout. 

Here is a timeline detailing everything essential about Slate Auto, from its inception and financial backers to its product offerings, business strategy, and production goals.

Inside the EV startup secretly backed by Jeff Bezos

April 8 – Following a year-long investigation, TechCrunch published an article disclosing that a clandestine EV startup, Slate Auto, had been functioning for three years with the financial support of Jeff Bezos and LA Dodgers owner Mark Walter. 

In contrast to other electric vehicle startups, Slate focused on creating a remarkably low-cost electric pickup truck priced around $25,000. This vehicle would offer significant customization, utilizing the expertise of numerous former employees from Harley-Davidson and Chrysler, both of which have considerable accessory and aftermarket parts segments.

Slate Auto’s pickup truck spotted in the wild

April 10 – The following day, an image of an unremarkable electric truck began circulating on the r/whatisthiscar subreddit, with Reddit users speculating it could potentially be Slate’s secret EV. 

TechCrunch confirmed that the image depicted a prototype of Slate’s truck parked outside the company’s design center in Long Beach, California.

An EV that can change like a ‘Transformer’

April 21 – Slate began showcasing conceptual versions of its EV on public roads to build marketing excitement ahead of its anticipated launch event on April 24. Interestingly, some models resembled SUVs or hatchbacks rather than pickup trucks.

TechCrunch confirmed that the company designed the EV to possess “Transformer-like” modular features, and that this initiative served as a promotional tease of such customization.

The analog EV pickup truck that is decidedly anti-Tesla

April 24 – Slate officially introduced its customizable electric pickup truck at a launch event in Long Beach, California. During the event, it was also disclosed that the truck would be priced below $20,000 when factoring in the $7,500 federal EV tax credit.

The entry-level model of the truck was disclosed to be quite basic, featuring just 150 miles of range, no power windows, a lack of a central infotainment display, and even no paint. Slate pledged that virtually every aspect of the truck would be customizable, including the seating arrangement and overall design.

A former Indiana printing plant eyed for EV truck production

April 25 – TechCrunch reported that Slate had pinpointed a former printing facility in Warsaw, Indiana as the site for its truck manufacturing. The 1.4 million-square-foot plant was constructed in 1958 and had remained inactive for approximately two years. 

Slate Auto crosses 100,000 refundable reservations in two weeks

May 12 – Slate informed TechCrunch that it had surpassed 100,000 refundable $50 reservations for its budget-friendly EV truck. This indicated that the company’s concept resonated with a broad audience, despite most people being unaware of Slate just two months earlier. 

Slate Auto drops ‘under $20,000’ pricing after Trump administration ends federal EV tax credit

July 3 – The Trump administration pushed through significant tax legislation that, among other changes, set a September expiration for the $7,500 federal EV tax credit. Consequently, Slate’s truck would no longer be able to use that credit to achieve the “under $20,000” pricing the startup had been promoting, and Slate removed that phrasing from its website before the legislation was officially signed.

Why this LA-based VC firm was an early investor in Slate Auto

July 8 – Slate’s 2023 funding round included at least 16 investors, with Bezos among them. While most of these investors remain unnamed, Los Angeles-based Slauson & Co. discussed with TechCrunch the reasons behind their early investment in the EV startup during that initial funding stage, as well as Slate’s Series B.

Slate Auto appears on the TechCrunch Disrupt main stage

October 30 – Slate Auto’s CEO Chris Barman participated in an interview on the main stage at TechCrunch Disrupt 2025, addressing Jeff Bezos’ involvement, the obstacles of establishing a new automaker, and how the company intends to create a marketplace for customization.  

Slate passes 150,000 reservations

December 16 – While the growth of EVs has slowed in the U.S., Slate Auto has reached 150,000 refundable reservations for its truck and SUV, demonstrating substantial interest in the vehicle in spite of the loss of the federal tax incentive. Furthermore, with fewer EVs anticipated to enter the U.S. market, it seems that the startup will face limited competition at the lower end of the segment. 

2026 

A surprise CEO swap

March 9 – Slate executed an unexpected move, appointing a new CEO: former Amazon Marketplace VP Peter Faricy. Chris Barman, the former CEO and Slate’s first employee, will remain with the company, transitioning to a “President of Vehicles” position. Slate selected Faricy to prepare the startup for its year-end commercial launch, beginning with converting the reservation list into as many completed orders as possible.

Walmart-owned Flipkart and Amazon are putting pressure on India’s quick commerce startups

India’s quick commerce sector is experiencing significant growth, with demand more than doubling for some players. However, aggressive delivery pushes by Flipkart and Amazon are intensifying competition in an already saturated market where profitability remains elusive.

Flipkart, a major player in India’s e-commerce landscape, entered the quick commerce arena later than local competitors like Blinkit, Swiggy, and Zepto. Yet it surpassed 800 dark stores (small, delivery-only fulfillment centers) this week, according to information obtained by TechCrunch, and aims to double that figure by the close of 2026, per UBS.

This expansion aligns with the intensified competitive phase of India’s quick commerce sector. Noteworthy recent events include the resignation of a co-founder at Swiggy this week. Companies in the sector are also reassessing their strategies in light of increasing competition and costs.

The Walmart-owned business made its quick commerce debut with Flipkart Minutes in August 2024, providing delivery services across various categories in as little as 10 minutes. Since then, the sector has grown quickly. More than 6,000 dark stores are now operational, resulting in considerable overlap among competitors in major urban areas and heightening competition, according to a Bernstein report released earlier this week.

Beyond major cities

Flipkart’s footprint in India is still smaller than that of market leader Blinkit, which boasts over 2,200 dark stores, according to Bernstein. Nonetheless, Flipkart is focusing on expanding beyond major cities to stimulate growth. This contrasts with Blinkit’s plan to scale to 3,000 dark stores by 2027 while concentrating on its leading 10 cities.

“Flipkart embodies the Walmart ethos,” stated Satish Meena, founder of Gurugram-based consumer insights firm Datum Intelligence. “Walmart’s approach is always about enlarging the overall market potential to dominate by expanding the market.”

Flipkart is already seeing traction outside major urban centers, with sources telling TechCrunch that 25–30% of its quick commerce orders now originate from smaller towns. Orders per dark store have also increased by approximately 25% month-over-month, the sources said.

However, the expansion in quick commerce remains predominantly centered in larger cities. According to Bernstein, most demand still stems from major urban areas, where denser populations foster quicker deliveries and better utilization of dark stores, even as growth into smaller towns accelerates.

This dynamic is also crucial for profitability. The eight largest cities in India account for over 3,800 dark stores managed by the top five players, with around 3,600 of them having the capability to be profitable, as reported by Bernstein.

“Metro areas clearly offer superior return ratios and profitability due to enhanced throughput,” remarked Karan Taurani, executive vice president at Elara Capital, a London-based investment bank and brokerage. “This business thrives on higher throughput, and for the moment, that largely comes from metro regions.”

Nonetheless, some analysts perceive a long-term opportunity beyond metropolitan areas. “Non-metros (smaller towns) can provide a boost if businesses broaden their offerings beyond groceries and present a wider selection of products at quicker speeds,” stated Datum’s Satish Meena. “Flipkart is placing its bets on that.”

Still, scaling up outside urban hubs will take time. Currently, quick commerce is feasible in roughly 125 cities, with dark stores typically needing six to twelve months to reach maturity and profitability, according to Aditya Soman, a senior research analyst at CLSA, a Hong Kong-based brokerage. Many of the newer stores in smaller towns are still in a ramp-up phase, he noted.

Amazon, which entered India’s quick commerce sector in late 2024 shortly after Flipkart’s launch, is also accelerating its footprint. The e-commerce giant has established around 450–500 dark stores to date, with between 330–370 currently operational, according to UBS, as it seeks to capitalize on the growing demand for quicker deliveries.

Pressure mounting on incumbents

Flipkart is not solely dependent on dark-store expansion to stay competitive but is also implementing aggressive pricing strategies. The company is providing some of the most substantial discounts in the sector — about 23–24% across various categories, based on an analyzed sample basket by Jefferies last month — as it aims to draw customers in a market where pricing and convenience are key demand factors.

The impact of such strategies appears to be effective. The brokerage firm JM Financial recently cautioned that Swiggy’s quick commerce operation is trapped in a “growth-versus-profitability predicament” and risks undermining shareholder value, suggesting that a takeover by a larger, well-capitalized entity may be the optimal resolution for investors.

Eternal, the parent company of Blinkit, has seen its shares drop by about 15% this year, while Swiggy’s shares have declined over 29%, even as Zepto gears up for an IPO on Indian stock exchanges later this year.

The arrival and growth of major players like Flipkart and Amazon are transforming the competitive environment. “Quick commerce has transitioned out of a startup phase — it has evolved into a game for major players,” commented Ankur Bisen, a senior partner at retail consultancy Technopak Advisors.

He added that the sector’s financial dynamics and limited differentiation may eventually lead to consolidation, as companies vie for the same customer base in a heavily discount-driven market.

Requests for comments from Amazon, Flipkart, and Swiggy went unanswered. Eternal opted not to provide a statement, while Zepto indicated it could not comment due to a silent period following its IPO application.