Fancy Butt Cushions Are Essential at the Musk v. Altman Trial

The final witnesses provided their testimony on Wednesday in the Musk v. Altman trial. The statements were rather uneventful, save for the revelation that Microsoft has poured over $100 billion into its collaboration with OpenAI. More captivating, though, is a point my colleague Maxwell Zeff and I have been reflecting on after nearly three weeks of following the trial.

The courtroom features a variety of seat cushions.

On the right side of US District Judge Yvonne Gonzalez Rogers’ courtroom, a number of wooden benches are designated for the attorneys, executives, and team members from OpenAI and Microsoft. Roughly 10 individuals, including OpenAI CEO Sam Altman and general counsel Che Chang, have utilized plush black cushions, the most luxurious being from the Purple brand, retailing for $120 at Target. These cushions come in different shapes, with some having rounded edges while others are square. On Wednesday, Chang adjusted one behind his back, a rare adjustment during such proceedings.

OpenAI President Greg Brockman and his wife, Anna, have been present for much of the trial, consistently using pristine white pillows. The pillows, recognized by their tags, seem to be from Coop, a brand specializing in sleep products that offers a two-pack for $35.

On Wednesday, an OpenAI security guard brought a purple handbag into the courtroom, holding a pillow for each Brockman. Anna swiftly handed her husband a pillow before arranging her own. Later, OpenAI chief futurist Joshua Achiam took Brockman’s seat; he went without a pillow at first, until he eventually acquired a standard black cushion.

OpenAI has not responded to requests for comment from WIRED.

A seasoned technology attorney informed WIRED that cushions aren’t “typical” but noted, “it’s not out of left field.” He personally hasn’t witnessed lawyers utilizing cushions or pillows in his cases, although he has never been part of a trial this extensive.

The main litigators enjoy fairly comfortable leather chairs, though they show signs of wear, hinting that the cushioning may not be as supportive as it seems.

During my last significant time in the courtroom in 2021 for parts of the Epic Games v. Apple trial, Covid-related capacity restrictions allowed for ample space. This time, however, the courtroom is almost at its capacity of 150, with bench seating for around 90 attendees.

About an hour into my initial trial day in late April, I thought of bringing my own cushion because of the rigid benches but hesitated for fear of appearing weak. None of the regular reporters, around two dozen including one who was pregnant, initially used cushions. I withstood six days of growing discomfort.

After an uncomfortable morning last week, I decided to try a “cooling” cushion from the Tokyo Olympics. It was too small and thin to provide any real relief. My back particularly ached as I typed notes on the Musk-themed jackass trophy, which allegedly once had its own pillow.

In the end, I gave up on the cushion. However, one reporter from the New York Times eventually gave in to using one, and the courtroom artist, equipped with a colorful cushion, continued to utilize theirs. Perhaps I’ll discover a more suitable solution by next week when Gonzalez Rogers considers potential penalties.

Maxwell Zeff contributed to this report.

This article is part of Maxwell Zeff’s Model Behavior newsletter. Find previous editions here.

Notion has recently transformed its workspace into a center for AI agents.

The productivity software creator Notion is entering the era of agentic capabilities.

During a live-streamed product launch on Wednesday, the organization, best recognized for its collaborative note-taking application, unveiled a new developer platform that enhances the functionalities of its tailored AI agents, integrates external agents, and enables teams to create automated multi-step workflows that can extract data from any database.

By establishing an orchestration layer — a framework that synchronizes AI tasks across various tools and data sources — Notion aims to redefine itself from merely a note-taking service with AI capabilities into a central hub for collaboration among individuals and agents across different tools and databases.

In February, Notion initially launched its Custom Agents — AI team members responsible for handling repetitive tasks such as responding to FAQs, compiling status reports, and automating workflows. Since their introduction, Notion reports that more than one million agents have been created by its users.

Nonetheless, these agents faced certain restrictions. They were unable to connect to outside data or apply custom logic. Additionally, external agents utilized by companies lacked a means to interact with the Notion workspace, forcing teams to navigate these challenges through third-party automation platforms or custom scripts operated on their own infrastructure.

“Historically, it’s accurate that Notion hasn’t been the most developer-centric platform,” remarked Ivan Zhao, co-founder and CEO of Notion, during the livestream. “But that’s changing.”

Now, Notion will empower teams to implement their own custom code. With the introduction of its new Workers, a cloud-based infrastructure for executing custom code, clients can craft their logic and deploy it within a secure sandbox (a contained space that prevents code from affecting other systems). This enables teams to perform actions like synchronizing their data with Notion, creating tailored tools, and activating workflows through webhooks — automated prompts that initiate actions whenever a specific event occurs in another application — without having to depend on external systems.
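Webhook-driven workflows like the ones described above follow a common pattern: an event arrives as a JSON payload, the handler inspects it, and a workflow step fires. A minimal sketch in Python of that general pattern — the event types and fields here are purely illustrative assumptions, not Notion’s actual Workers interface:

```python
import json

def handle_webhook(raw_body: str) -> str:
    """Parse a webhook event and choose a workflow step to run.

    The event names and fields are hypothetical, chosen only to
    illustrate the trigger-then-act pattern described above.
    """
    event = json.loads(raw_body)
    kind = event.get("type")
    if kind == "ticket.created":
        # e.g. sync the new ticket into a tracking database
        return f"sync ticket {event['id']}"
    if kind == "ticket.closed":
        return f"archive ticket {event['id']}"
    return "ignore"  # unknown events are dropped, not treated as errors

print(handle_webhook('{"type": "ticket.created", "id": 42}'))  # sync ticket 42
```

In practice the handler would run inside the sandbox and call back into the workspace; the point of the sketch is just the dispatch shape a webhook trigger implies.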

You might not even need to write the code yourself. The company highlights that your chosen AI coding assistant can handle that for you.

The Workers will operate on the same credit system as Custom Agents, but Notion is offering this at no cost until August, encouraging developers to explore.

Synchronizing external data sources is also included in the Notion Developer Platform. Driven by Workers, the database synchronization feature can retrieve data from any API-enabled database. This means you could integrate information from resources like Salesforce, Zendesk, Postgres, and others directly into your Notion databases — ensuring the data remains up-to-date.
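A sync of this kind boils down to mapping each source record into the destination database’s property schema and upserting it. A minimal sketch under stated assumptions: the source record fields are hypothetical, and while the payload loosely mirrors the shape of Notion’s public REST API property objects, the new sync feature’s real interface may differ.

```python
def to_notion_properties(record: dict) -> dict:
    """Map one source record (hypothetical fields) to a
    Notion-style page-properties payload."""
    return {
        "Name": {"title": [{"text": {"content": record["subject"]}}]},
        "Status": {"select": {"name": record["status"]}},
    }

def sync(records, post_page):
    """Push each mapped record through a caller-supplied upsert
    function, keeping the mapping testable without network access."""
    for record in records:
        post_page({"properties": to_notion_properties(record)})

# Example: collect the payloads instead of posting them
sent = []
sync([{"subject": "Renew contract", "status": "Open"}], sent.append)
print(len(sent))  # 1
```

Separating the field mapping from the transport is the useful design choice here: the same mapping works whether the upsert goes through a Worker, a script, or a third-party automation tool.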

Zhao emphasized that this allows Notion users to “utilize your Notion database as a versatile canvas to drive both your workflows and your agents.”

Workers are also capable of creating agent tools with custom logic for instances when third-party integration via MCP — short for Model Context Protocol, an emerging standard for connecting AI tools to external data and services — is insufficient.

Additionally, a new feature permits Notion users to interact directly with external AI agents they utilize, assign tasks to them, and monitor their progress, akin to Notion’s own custom agents. At launch, Notion confirms compatibility with partner agents such as Claude Code, Cursor, Codex, and Decagon, with plans to incorporate more in the future.

An External Agent API is also available for teams wishing to link their internal agents with Notion, particularly those tailored for their specific organizational requirements.

Developers and agents engage with Notion’s innovative Developer Platform via the Notion CLI, a command-line interface designed for developers, accessible through the company’s Business and Enterprise Plans.

The Developer Platform signifies a strategic pivot for Notion, evolving it into a more programmable framework rather than just an application, positioning it to compete against other workflow automation services. As organizations increasingly seek to automate knowledge work and construct internal AI systems, a platform consolidating agents, custom code, and real-time data begins to resemble core infrastructure rather than just a productivity tool.

This direction aligns with the broader movement among AI firms, which have been advancing beyond chatbot functionalities to provide agentic tools capable of executing actions across varied software platforms.

“Any data, any tool, any agent — that’s the overarching vision for the Notion Developer Platform,” Zhao concluded.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.

Musk's xAI operates almost 50 gas turbines unregulated at its Mississippi data center

Elon Musk’s xAI is operating nearly 50 natural gas turbines at its data center in Mississippi, as these power plants currently evade state regulation due to a loophole.

The state of Mississippi classifies these power plants as “mobile” since they are positioned on flatbed trailers, allowing them to bypass air pollution regulations for a year. The NAACP, representing local residents, claims that the unregulated emissions from the turbines are deteriorating air quality in an already polluted area. This week, they sought a court injunction against xAI.

The controversy centers around the “mobile” classification of the turbines. The Southern Environmental Law Center, which represents the NAACP in the lawsuit, argues that the turbines are being operated contrary to federal law, which stipulates that plants mounted on trailers can still be regarded as stationary and must adhere to air pollution regulations.

xAI has received permits for 15 of its turbines. A press release from the Greater Memphis Chamber of Commerce previously stated that “approximately half” of the 35 turbines operational in May 2025 would stay on-site. However, xAI has continued to add more. Currently, reports indicate that it is running 46 turbines.

Anthropic’s Cat Wu says AI will soon anticipate your needs before you’re even aware of them.

As the tech world concentrates solely on AI models, Anthropic is experiencing an extraordinarily prosperous year.

The firm is on the verge of outpacing its primary rival: it is raising tens of billions of dollars in a funding round that could lift its valuation to around $950 billion (OpenAI was valued at $854 billion in its March round), and business clients are increasingly favoring Claude over ChatGPT. A recent study found that Anthropic has surpassed OpenAI among business clients, quadrupling its market share since May 2025.

Cat Wu, who leads the product team for Claude Code and Cowork at Anthropic, has played a pivotal role in that success. Since becoming part of the company in August 2024, Wu has guided Claude through a significant phase, enhancing it from a purely informational chatbot to a more advanced coding tool. Wu, who manages the creation of new functionalities, often collaborates with Boris Cherny, a vital member of Anthropic’s technical team and the developer of Claude Code, leading them to be dubbed Anthropic’s “Batman and Robin.”

Wu met with me during last week’s second annual Code with Claude conference in San Francisco, where she shared her insights on product strategy and her vision for how the use of Claude will evolve in the future. 

This interview has been condensed for brevity and clarity.

When considering product strategy, how much of it is influenced by your peers or rivals? Is that a concern for you?

Our primary focus is on remaining at the cutting edge, so I believe we instill in our team the understanding that AI will consistently improve. For us, the goal is to remain at this forefront. We do not concentrate on competitors. I think if you start considering competitors, you risk being perpetually a few weeks, or even a month, behind your capacity to execute. Therefore, this approach usually doesn’t keep you at the forefront.

Anthropic introduced at least six models last year and has nearly matched that number this year. Do you foresee this pace of development persisting?

We hope it persists (laughs). I believe the models continue to advance at a steady rate, allowing us to share them with our users consistently. The deployment may vary—like our approach with Glasswing—but as much as feasible, we want this intelligence to be advantageous to as many individuals as possible, and it must be managed very safely, which is why we approached Glasswing [the way we did].

[Glasswing is an initiative that Anthropic launched in April, inviting a select group of partner organizations—including firms like Amazon, Apple, CrowdStrike, and Microsoft—to access its new cybersecurity model, Mythos. Unlike many of Anthropic’s other AI models, Mythos is not set for a general public release. The company has expressed concerns that the model—designed to scan codebases for software vulnerabilities—might be misused by malicious actors.]

You previously mentioned that the future of work involves staff overseeing fleets of agents. This could eventually lead to situations where agents perform jobs better than humans, right?

Managing agents is incredibly challenging if you lack the skills for the job. Managers still need to be experts in their field. It’s a novel skill set that many will need to acquire, but managing agents is actually quite similar to managing people, in that you need to grasp why an agent made a mistake. Did it misunderstand my instructions? Was my request ambiguous? You need the ability to troubleshoot.

It appears that the long-term vision is to reduce team size. If agents can handle jobs, then does an intern become unnecessary?

Ideally, the notion is that everyone can achieve much more. For every role, there’s always a portion that is quite tedious. For me, that’s dealing with emails. Everyone has some aspect of their life similar to this… So, I hope that the AI agents can take on those tasks, freeing up time for everyone to pursue more exciting projects [in their free time].

What are you most enthusiastic about in the upcoming six months?

I believe the next significant development is proactivity. Last year was focused on synchronous development. Currently, people are transitioning to routines, like automating responses to customer support inquiries. I think the next phase is that Claude will understand your work patterns and autonomously set up some of these automations for you.

This is how some of the world's biggest malware collections look when stacked as hard drives

The malware research organization vx-underground, claiming to have the most extensive collection of malware source code, announced in a post on X that its data archive is approximately 30 terabytes.

In a response, Bernardo Quintero, the founder of VirusTotal, a web-based service that scans files for malware using various antivirus engines simultaneously, mentioned that his platform has around 31 petabytes of malware samples contributed by users so far. (One petabyte is roughly 1,000 times larger than one terabyte.)

In both instances, that’s a substantial quantity of data. To provide context, cybersecurity companies, artificial intelligence researchers, and threat intelligence firms regard collections like these as essential for training detection models and comprehending the evolution of attacks. This led us to question: How would these vast datasets actually appear when stacked as hard drives, both vertically and horizontally? And how would they stack up against something like the Eiffel Tower?

Someone in our news team posed this question to an AI chatbot, which provided a remarkably incorrect response.

Instead, we performed some basic calculations to estimate how tall these data reserves would be. Since both vx-underground and VirusTotal report “about” this much data each, “about” suffices for us in this scenario. 

Assume we use 1-terabyte internal hard drives, since these are manufactured to standard physical dimensions so they fit in any computer. A standard 3.5-inch internal hard drive measures 1 inch in height, which is precisely what we need to know for stacking purposes.

We are also assuming that the hard drives in this example are exactly 1 terabyte, as the actual usable capacity of a hard drive is usually slightly less. 

Using an online conversion tool, it appears that vx-underground’s 30 terabytes of malware data could fill 30 hard drives stacked together, reaching a height of 30 inches, or about 2.5 feet.

For perspective, this reporter stands at 6 feet tall. (Refer to the visual below, and yes, I acknowledge the poor operational security.)

By the same reasoning, VirusTotal’s 31 petabytes of collected data would require 31,744 hard drives, which, when stacked, would reach approximately 2,645 feet high.

The Burj Khalifa in Dubai, the tallest building in the world, is a bit taller at 2,722 feet.

The Eiffel Tower stands at 1,083 feet tall. Therefore, VirusTotal possesses data equivalent to approximately two and a half Eiffel Towers.
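The arithmetic above is easy to check. A quick sketch using the article’s own assumptions — one drive per terabyte, one inch per drive, and the binary convention of 1,024 TB per petabyte, which matches the 31,744-drive figure:

```python
TB_PER_PB = 1024       # binary convention, matching the 31,744-drive count
INCHES_PER_FOOT = 12

def stack_height_feet(terabytes: float) -> float:
    """One drive per terabyte, one inch per drive."""
    return terabytes / INCHES_PER_FOOT

vx_underground = stack_height_feet(30)            # 30 TB
virustotal = stack_height_feet(31 * TB_PER_PB)    # 31 PB -> 31,744 drives

print(vx_underground)     # 2.5 feet
print(round(virustotal))  # 2645 feet, just shy of the Burj Khalifa
```

Dividing 2,645 by the Eiffel Tower’s 1,083 feet gives roughly 2.4, hence the “two and a half Eiffel Towers” comparison.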

A chart comparing stacks of hard drives, tallest to shortest: the Burj Khalifa (2,722 feet); VirusTotal (2,645 feet); One World Trade Center (1,792 feet); the Eiffel Tower (1,083 feet); Zack Whittaker, who is 6 feet tall; and vx-underground’s malware repository, about 2.5 feet of hard drives. Image Credits: Zack Whittaker / TechCrunch

Geothermal company Fervo Energy surges 33% in IPO launch driven by AI data center needs

Fervo Energy, the geothermal energy startup, achieved a market valuation exceeding $10 billion during its initial public offering, a surge fueled by the demand for AI data centers — and the energy necessary to operate them.

On Wednesday, Fervo raised $1.89 billion in an expanded initial public offering, initially placing its valuation at around $7.6 billion. The interest in Fervo shares was so pronounced that the company and its underwriters increased the offering multiple times, selling an extra 14.6 million shares while adjusting the price range upward twice, ultimately arriving at $27 per share.

The stock, trading under FRVO on the Nasdaq, surged another 33% at the start of trading on Wednesday, pushing its valuation above $10 billion.

“We were asked several times during the roadshow, ‘Why aren’t you raising more capital?’” Sarah Jewett, Fervo’s senior vice president of strategy, shared with TechCrunch. “As we observed the demand increasing, there were just enough indications suggesting that upsizing was not only possible, but also advisable.”

Similar to many other energy firms, Fervo has been supported by a spike in demand from data centers and AI enterprises, which are eager to secure power for their operations. It marks the second energy stock offering to receive a positive response recently, with nuclear startup X-energy securing $1 billion in its own expanded IPO. 

The fundamental idea behind geothermal energy — harnessing the Earth’s warmth for power — has existed for years, yet Fervo belongs to a new wave of startups pioneering enhanced geothermal, which drills deeper to access hotter rock formations. To maximize the effectiveness of a favorable geothermal site, Fervo employs directional drilling methods developed by the oil and gas sector.

“We’re replicating the strategy used in the shale energy industry, but with the answer key,” Jewett remarked.

Fervo’s IPO yielded the company $500 million more than it had expected, providing a financial cushion that grants the company more flexibility as it works on its Cape Station power plant in Utah, set to begin operations this year. Ultimately, the firm aims to produce 500 megawatts once Cape Station’s initial phase is completed, projected to take approximately three years.

The 500 megawatt capacity of Cape Station was determined by the size of the grid interconnection the company secured, but Fervo is authorized to develop up to 2 gigawatts of geothermal energy there, and it has applied for an increase in the size of its interconnection as well. However, even that might be a conservative projection. Jewett stated that a third-party engineer indicated sufficient heat on-site for as much as 4 gigawatts of capacity.

The additional electricity could be directed to the grid if the interconnection capacity expands. If not, Fervo has been responding to inquiries from companies interested in direct connections. “We’re observing a growing amount of behind-the-meter commercial interest,” Jewett noted.

Fervo is at an earlier stage of development on another initiative: Corsac Station in Nevada, from which Google will procure 115 megawatts of power.

One of geothermal’s attractions is its ability to deliver so-called baseload power, a source capable of generating electricity around the clock, irrespective of weather conditions. Data center operators, who prioritize high uptime, are currently willing to pay a premium for reliable power. This has transformed geothermal from merely another clean energy technology competing for grid space into a preferred choice among tech firms and, now, investors.

The Houston-based company has been racing to lower expenses by shortening the duration of drilling new wells. Fervo’s initial wells took many days to complete and cost over $1,000 per foot. Now, after drilling 14 wells, the company has decreased both drilling time and cost per foot by two-thirds.

This IPO was likely overdue, yet with growing interest in energy, its timing could not have been more opportune.

Fervo disclosed in December that it had concluded a $462 million funding round, and climate tech and energy investors that TechCrunch consulted with late last year almost universally anticipated the company’s IPO. Interest from hyperscalers, in conjunction with data from its Cape Station project, suggested that the company had successfully navigated the “valley of death.” With the IPO behind it, it appears Fervo is now firmly on stable ground.

X introduces a History section for bookmarks, likes, videos, and articles.

X is evolving into a more “save-it-for-later” platform with its new History tab that aggregates your bookmarks, likes, videos, and articles into one easily accessible location.

First introduced on iOS, the feature is described by X’s product head, Nikita Bier, as an improved way to keep track of all your favorite content, letting you revisit items you want to finish reading or watching later.

With this update, the Bookmarks button in X’s left-side menu on the mobile app has been renamed to History. The new section categorizes your saved content into four distinct tabs — Bookmarks, Likes, Videos, and Articles — allowing for seamless navigation at any moment. While Bookmarks and Likes are intentional saves, the Videos and Articles tabs will auto-fill based on your viewing or reading activities on X. The History area keeps your data private, Bier’s announcement states.

This enhancement gives X a more browser-like feel, where users can revisit previously viewed content even without consciously saving it. It brings together functionalities that were scattered across the app, with bookmarks located in the main menu and likes tucked away in a tab on the user profile.

This initiative may also promote increased utilization of X’s long-form article feature, which the company has been promoting as a means for businesses and creators to share updates that extend beyond the typical post limit of 280 characters. X users can keep track of the articles they discover while scrolling, crafting a customized news reader experience within the app.

The update arrives as web publishers face a downturn in referral traffic from platforms such as Facebook and Google, influenced by evolving algorithms and AI-driven experiences that have diminished external site clicks. X regards this evolution as a chance to attract more publishers and creators to produce content directly on its platform, where distribution and discovery are inherently integrated.

Instagram’s latest ‘Instants’ feature merges aspects from Snapchat and BeReal.

On Wednesday, Instagram revealed its global rollout of “Instants,” a novel feature designed for sharing genuine, transient photos after trialing it with a limited audience. This feature allows users to send disappearing photos to their close friends or mutual followers that can be viewed just once and will stay accessible for 24 hours.

The application draws inspiration from social media platforms like Snapchat, Locket, and BeReal, emphasizing genuine and fleeting content.

In contrast to Instagram’s feed, which centers on curated and polished content, Instants caters to quick, real-life snapshots. With Instants, you take a photo using Instagram’s built-in camera, with no editing permitted. The format does not allow uploads from your camera roll, and although you can add text to your “instants,” no further modifications are allowed. Meta said in a blog post that the idea is to share genuine moments in real time.

Additionally, it’s important to note that Meta is also evaluating the Instants format as an independent application in select locations, including Spain and Italy.

To capture an Instant, tap the mini photo stack in the lower right corner of your Instagram inbox. Once you share your Instant, those who receive it can respond with emojis, reply, and send an Instant back. Meta emphasizes that recipients are unable to screenshot or record the Instants you’ve shared.

Instagram keeps your shared Instants in a private archive accessible for up to one year. You can also assemble Instants from this archive into a recap and share it via Instagram Stories.

If you accidentally share an Instant, you can tap the “undo” button and delete the Instant from your archive to retract it from friends who haven’t viewed it yet.

If you prefer not to receive Instants, you can press and hold the stack of Instants in your inbox and swipe right to temporarily halt their delivery. You also have the option to block or mute particular users.

While Instagram originally served as a means for friends to exchange moments, the platform has increasingly been inundated with influencer content and advertisements. Through Instants, the company appears to be returning to a focus on more relaxed, private interactions that revolve around photo sharing among friend groups.

Nonetheless, Instagram might be slightly late to seize the trend of casual, authentic photo sharing, as BeReal is no longer as widely favored as it used to be, and numerous users already utilize Instagram Stories for quick, informal updates, potentially questioning the necessity for a distinct app and feature.

Rivian offshoot Mind Robotics secures an additional $400M

Rivian’s offshoot, Mind Robotics, has raised an additional $400 million, just two months after securing $500 million, as it aims to enhance factory automation through the development of industrial robotics.

This funding round, initially disclosed by the Wall Street Journal, was spearheaded by Kleiner Perkins. Contributions also came from the venture branches of Volkswagen, which collaborates with Rivian on a software joint venture, along with investments from Salesforce.

Rivian’s CEO RJ Scaringe, also the chairman of Mind Robotics, informed TechCrunch in March that he founded the company due to his belief that other startups were not adequately prepared to automate industrial tasks. He launched the initiative — originally called “Project Synapse” — as an endeavor to create “robotics with human-like capabilities.”

Mind Robotics had earlier secured $115 million from Eclipse after its establishment in 2025. The recent funding round elevates the total raised to over $1 billion, valuing Mind Robotics at more than $3 billion, as reported by the Wall Street Journal.

Scaringe was also instrumental in establishing and spinning out a micromobility venture named Also, which has garnered over $300 million thus far.

Who has faith in Sam Altman?

In May 2023, Sam Altman, the CEO of OpenAI, was sworn in and presented his views on artificial intelligence regulation before Congress. Senator John Kennedy from Louisiana listened to his thoughts on licensing advanced models and inquired if Altman would be suitable to head a theoretical AI regulatory agency.

“I enjoy my present role,” Altman responded, eliciting laughter.

“You get paid well, don’t you?” Kennedy questioned.

“No, my salary is enough to cover health insurance. I don’t have any equity in OpenAI,” Altman clarified.

“You ought to consult a lawyer,” Kennedy advised.

Currently, Altman has numerous lawyers, who observed as their client underwent an intense questioning session during a federal court appearance in California on Tuesday. The questioning turned on a similar issue to the one Kennedy raised: is Altman fit to manage the most sophisticated AI models?

“You failed to inform the U.S. Senate that you held an interest in OpenAI through a share in a Y Combinator fund, didn’t you?” Steve Molo, the aggressive attorney heading Elon Musk’s campaign to halt OpenAI’s for-profit operations, demanded.

Altman acknowledged that he indeed had financial ties to OpenAI due to his limited partner position in the Y Combinator fund. “I didn’t bring it up in that testimony, but, once more, I believe it’s well understood what being a passive owner of multiple venture funds implies,” Altman stated.

“You assumed Senator Kennedy was a highly experienced investor when he posed that question, didn’t you?” Molo retorted.

Altman’s choice to disclose that he had no equity when he could have easily avoided the question was intriguing. It is technically accurate, but Altman — who stressed his background in investing in early-stage startups — must have been aware of his financial stakes in OpenAI through Y Combinator and investments in other AI firms collaborating with OpenAI.

On Tuesday, Altman’s credibility was under scrutiny, at least from the viewpoint of the plaintiffs. OpenAI’s lawyers contended that little progress was made in support of Musk’s case, accusing the other side of defamation. Nonetheless, the jury and Judge Yvonne Gonzalez Rogers are evaluating Altman’s reliability as a crucial figure in the events being reviewed.

Molo highlighted a series of individuals who alleged that Altman lied or deceived them — including sworn statements in court from former OpenAI board members Helen Toner and Tasha McCauley, Elon Musk, and OpenAI co-founder Ilya Sutskever. He also referenced a recent New Yorker article outlining concerns regarding his integrity.

The “incident” — during which OpenAI’s board temporarily dismissed Altman and removed Greg Brockman as board chair for a perceived lack of transparency — has been a focal point of discussion at this trial. Then-board members Toner and McCauley testified that Altman had misled them, with McCauley describing “a toxic culture of dishonesty.”

“I do have reservations that was the complete reason” for his dismissal, Altman remarked. When asked again to acknowledge that the board indicated he had not been transparent, Altman responded, “They asked me to return the following morning.”

The emphasis on his firing is not solely about challenging Altman’s trustworthiness. A significant inquiry of the trial revolves around whether OpenAI’s structure aligns with its mission, especially whether the non-profit board can genuinely exert control over the for-profit entity. From Musk’s legal team’s perspective, the 2023 incident serves as evidence that Altman’s power within the company surpassed that of its board of directors.

Witnesses summoned by OpenAI and Microsoft have insisted that the existing non-profit board does possess authority over the for-profit. Microsoft CEO Satya Nadella described Altman’s dismissal as “amateur city.”

Bret Taylor, who assumed the role of chair on OpenAI’s board following Altman’s reinstatement, claimed he found nothing justifying his termination and stated that Altman has been “open with me.” Dr. Zico Kolter, the OpenAI board member focused on AI safety, affirmed that no one had hindered that initiative since he began in 2024.

However, Taylor also made it clear that the decision to bring Altman back in 2023 stemmed from the belief that his exit would have virtually shut down OpenAI, as most employees were prepared to leave with him. Now, as the jury and judge contemplate whether the present structure aligns with the organization’s mission, they will consider whether the board genuinely has the power to dismiss or discipline its CEO.

When asked if he would ever dismiss himself as CEO, Altman responded that he had no intention of doing so. When queried about his trustworthiness, he replied, “I consider myself an honest and reliable businessperson.”
