InScope secures $14.5M to address the challenges of financial reporting

Even without any background in accounting, anyone who has taken a moment to look at a 10-K or 10-Q can see that preparing financial statements is complicated, monotonous work.

Traditional platforms like Workiva and Donnelley Financial Solutions strive to simplify financial reporting, but veteran accountants Mary Antony (pictured right) and Kelsey Gootnick (center) found themselves worn out by the many manual hurdles those platforms still impose (co-founder and CTO Jared Tibshraeny is pictured left).

The two met seven years earlier at Flexport, where Gootnick was controller and Antony assistant controller. They stayed in touch even after Antony moved on to Miro and Gootnick to Hopin and then Thrive Global.

No matter where they worked, Antony and Gootnick kept running into the same manual headaches.

“The manner in which financial statements are compiled, it’s largely assembled from numerous spreadsheets, transferred into various Word documents, and circulated via email among individuals,” Antony told TechCrunch.

So in 2023, the two founded InScope, an AI-driven financial reporting platform designed to help companies and accounting firms automate parts of the financial statement preparation process. The startup recently raised $14.5 million in Series A funding led by Norwest, with participation from Storm Ventures and existing investors Better Tomorrow Ventures and Lightspeed Venture Partners.

InScope does not yet fully automate the creation of income statements and balance sheets, but it does handle a substantial amount of the manual work, from checking the math to formatting. Simply making sure dollar signs and commas are consistent and correctly placed can save accountants nearly 20% of their time, according to Antony, InScope’s CEO.
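A check like the one Antony describes, making sure every dollar figure follows the same convention, is easy to picture in code. This is a toy illustration of the idea, not InScope's actual implementation:

```python
import re

# Toy illustration (not InScope's implementation): flag dollar figures
# whose dollar signs or thousands separators break the $X,XXX.XX convention.
WELL_FORMED = re.compile(r"^\$\d{1,3}(,\d{3})*(\.\d{2})?$")

def check_figures(figures):
    """Return the figures that violate the convention."""
    return [f for f in figures if not WELL_FORMED.match(f)]

issues = check_figures(["$1,234,567", "$12,34,567", "1,234,567", "$1234567.00"])
# issues → ['$12,34,567', '1,234,567', '$1234567.00']
```

A real reporting platform would of course also handle negatives, multiple currencies, and rounding rules, but the principle is the same: mechanical consistency checks replacing eyeballing.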

Techcrunch event

Boston, MA
|
June 9, 2026

Over the last year, InScope has grown its client base fivefold, attracting prominent accounting firms such as CohnReznick, currently a top-15 firm nationally.

Naturally, it may take some time before accountants — a profession Antony describes as risk-averse — grow comfortable with AI fully handling financial statement preparation. Still, that remains InScope’s ultimate goal.

Norwest partner Sean Jacobsohn told TechCrunch that he decided to invest in InScope after hearing from several of its customers that the startup’s product saves them considerable time.

Jacobsohn believes that InScope differentiates itself because very few founders have the specialized knowledge necessary to innovate in the financial reporting technology sector.

“It’s a very intricate domain, and you need to have experienced being in the shoes of the buyer beforehand,” he stated.

Antony agrees that accountants are not usually the kind of people who start companies. Fortunately, she and Gootnick honed their entrepreneurial instincts through years spent inside other fast-growing startups.

Good news for xAI: Grok is now pretty good at answering questions about Baldur’s Gate

Various AI laboratories prioritize different objectives. OpenAI has generally concentrated on individual users, while its competitor Anthropic generally aims at corporate clients. Recently, we learned that Elon Musk’s xAI is focusing heavily on video-game walkthroughs.

On Friday, Business Insider’s Grace Kay published a lengthy report on xAI, the AI startup recently taken over by SpaceX, detailing the ways Musk is making life difficult for his employees. One anecdote in particular stood out:

In a situation last year, a model release was postponed for several days due to Musk’s displeasure with how the chatbot responded to specific inquiries about the video game “Baldur’s Gate,” as reported by sources familiar with the events. High-ranking engineers were reassigned from other projects to enhance the answers before the launch, they mentioned.

Naturally, one can understand the frustration of any skilled, well-regarded engineer who shows up to work expecting to tackle fundamental problems of knowledge and machine intelligence, only to be pulled away to help a 54-year-old man win at his video game. But the story raises an even more important question: Did Musk get the gaming prowess he was after?

To find out, our in-house RPG aficionado Ram Iyer compiled five general questions about Baldur’s Gate, which we put to Grok and the three major rival models in a sort of quasi-benchmark I’ve dubbed “BaldurBench.”

In the spirit of journalistic honesty, I’ve made all chat transcripts accessible, which you can view here: Grok, ChatGPT, Claude, and Gemini.
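An informal comparison like this can even be scored mechanically against the saved transcripts. The sketch below is hypothetical: the questions and rubric keywords are invented for illustration, not the ones Ram actually used:

```python
# Hypothetical sketch of scoring a "BaldurBench"-style comparison offline.
# The questions and rubric keywords are invented for illustration; they are
# not the ones used in the actual test.
RUBRIC = {
    "best early-game companions": {"shadowheart", "astarion", "lae'zel"},
    "how to respec a character": {"withers", "camp"},
}

def score_transcript(answers):
    """Count how many rubric keywords each saved answer mentions."""
    scores = {}
    for question, keywords in RUBRIC.items():
        text = answers.get(question, "").lower()
        scores[question] = sum(1 for kw in keywords if kw in text)
    return scores

grok_answers = {
    "best early-game companions": "Shadowheart and Astarion are solid picks.",
    "how to respec a character": "Talk to Withers at your camp.",
}
print(score_transcript(grok_answers))
# {'best early-game companions': 2, 'how to respec a character': 2}
```

Keyword counting is a crude proxy for answer quality, which is why reading the transcripts yourself remains the better test.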

First, the good news: Grok delivers quite accurate information. Its answers were somewhat dense with gamer jargon — “save-scumming” rather than saving, “DPS” instead of damage — but they were both practical and well informed, assuming you could parse them. Grok also has a strong fondness for tables and theorycrafting, which is about what you’d expect.

Numerous guides for Baldur’s Gate are available, and the models were generally utilizing the same resources, so the main distinctions were stylistic. ChatGPT favors bulleted lists and fragments, while Gemini enjoys highlighting important terms in bold.


The most unexpected result came from Claude, which was particularly worried about providing information that could spoil my gaming experience. When I inquired about effective party compositions, it concluded its guidance with, “Don’t worry too much and just enjoy what seems entertaining to you.” Thank you, Claude!

It’s crucial to remember that this is a domain where we know (thanks to Business Insider’s insights) that xAI has intentionally sought to achieve parity. Therefore, we shouldn’t interpret too much from the fact that, after the reported push, Grok’s recommendations were roughly equivalent to those of the other models. Nevertheless, it’s reassuring to see that xAI can deliver results when it puts in the effort.

Ukrainian man imprisoned for identity fraud that helped North Koreans get jobs at US companies

A federal court in the U.S. has sentenced a Ukrainian man to five years in prison for his role in a long-running identity theft scheme that enabled North Korean workers overseas to unlawfully secure jobs at numerous U.S. businesses.

In 2024, U.S. prosecutors filed charges against Oleksandr Didenko, 29, a Kyiv resident, for arranging employment for North Koreans using the stolen identities of U.S. citizens to obtain job positions and salaries. Earnings from this operation were sent back to Pyongyang, where the regime allocated them to its sanctioned nuclear weapons initiatives.

This case adds to a string of recent convictions of people tied to North Korea’s ongoing “IT worker” schemes. Security analysts have called North Korean workers a “triple threat” to businesses in the U.S. and the West: they violate U.S. sanctions, steal sensitive company data, and then extort those companies to keep the thefts quiet.

According to prosecutors, Didenko operated a site called Upworksell, which permitted overseas individuals, including North Koreans, to purchase or rent stolen identities for employment opportunities with U.S. corporations. The Justice Department reported that Didenko managed over 870 stolen identities.

The FBI seized Upworksell in 2024 and redirected its traffic to its own servers. Polish law enforcement arrested Didenko, who was extradited to the U.S. and eventually pleaded guilty.

A screenshot of Upworksell’s website at the time of its seizure by the FBI in 2024. Image Credits: TechCrunch (screenshot)

In a recent statement, the U.S. Department of Justice revealed that Didenko also compensated individuals to accept and maintain computers in their residences in California, Tennessee, and Virginia. These “laptop farms” consist of rooms equipped with racks of operational laptops, allowing North Koreans to carry out their tasks as though they were physically in the United States.

CrowdStrike, a leading security firm, reported last year a sharp rise in North Korean workers infiltrating businesses, often as remote developers or in other software engineering roles. The operation is one of many ways the North Korean regime raises money while cut off from the global financial system by international sanctions.

North Koreans have also been known to pose as recruiters and venture capitalists in attempts to deceive unaware high-profile and high-net-worth targets into providing access to their computers, including cryptocurrency assets.

Tesla’s attempt to reverse the $243M Autopilot judgment fails.

A judge has denied Tesla’s bid to overturn a $243 million jury verdict that found the automaker partially liable for a fatal crash involving its Autopilot driver-assistance system.

“The grounds for relief that Tesla relies upon are nearly identical to those Tesla presented previously at trial and in its briefing on summary judgment — arguments that have already been considered and rejected,” Judge Beth Bloom wrote in her ruling. “Moreover, Tesla fails to introduce new arguments or relevant law that persuades this Court to disturb its prior rulings or the jury’s verdict.”

In August of last year, a jury awarded a $243 million verdict against Tesla concerning its responsibility in a 2019 fatal incident in Florida that took the life of Naibel Benavides and critically injured Dillon Angulo. The jury placed two-thirds of the fault on the driver and one-third on Tesla. Importantly, punitive damages were exclusively assessed against Tesla.

Tesla’s lawyers argued, in their motion to overturn the verdict, that the blame lay with the driver, whose actions caused the crash.

Threads posts can now be directly shared to your Instagram Story without exiting the app.

Threads has surpassed 400 million monthly users, but Meta aims to increase that figure further. With a newly released Threads feature this week, the company is simplifying the process for Threads users to share posts on the app to their Instagram Stories — a strategy that could leverage Instagram’s wider audience to attract more users to Meta’s competitor to X.

On Thursday, the company introduced a new feature allowing you to share a Threads post to your Instagram Story without exiting the Threads app, giving a preview of how the post would appear on your Story directly within the Threads environment.

Previously, the app permitted users to share anyone’s Threads post to their Instagram Story in a manner akin to resharing an Instagram post. It already provided tools for sharing posts to your Instagram Feed or Direct Messages.

Image Credits: Meta

Meta’s text-centric app, Threads, debuted in July 2023 and leveraged its connection with Instagram to swiftly grow its initial user base. Users needed to authenticate using their Instagram credentials, which allowed Threads to fill in details like username, bio, photo, verification status, and followers.

With a single tap, users could instantly follow the accounts they were already following on Instagram — while those not yet on Threads would receive a notification that they had been added.

In the subsequent months and years, Meta has relied on its other, larger social networks to keep expanding Threads, including featuring popular Threads posts on Facebook and incorporating a similar carousel of Threads posts for Instagram users. The company also simplified the process for users to cross-post from Instagram and Facebook to Threads, which aided in increasing adoption.

These strategies have paid off. Data from market intelligence firm Similarweb indicated last month that Threads now sees more daily usage than Elon Musk’s X on mobile devices. (X still leads on the web, though.) Threads’ user base has grown steadily, doubling from 200 million monthly active users in August 2024 to 400 million as of August 2025. The company said in October that Threads had also reached 150 million daily active users.

‘Toy Story 5’ takes aim at unsettling AI toys: ‘I’m always listening’

When the first Toy Story film arrived in 1995, Google did not yet exist and Apple was near financial collapse. No one could have predicted that more than 30 years later Pixar would still be making Toy Story movies, let alone that the newest entry in the series would pit Buzz Lightyear and a balding Woody against a malevolent AI tablet named Lilypad.

Indeed, “Toy Story 5” pits classic toys like Mrs. Potato Head, Rex, and Slinky Dog against the looming menace of technology.

The trailer shows Bonnie, the young girl who received Andy’s toys when he left for college in “Toy Story 3,” playing outside with her toys when an unexpected package containing the Lilypad tablet arrives. She becomes utterly captivated by the tablet, failing to even glance away from the screen when her parents tell her that her screen time is over.

In the “Toy Story 5” trailer, the Lilypad — or Lily — is depicted as a malevolent antagonist. When Jessie questions the tablet about Bonnie’s well-being, Lily appears to ignore her, prompting the cowgirl to insist that the tablet pay attention.

[embedded content]

“I’m always listening,” Lily ominously states, mimicking Jessie’s passionate speech with a robotic voice… and subsequently translates it into Spanish.

“Technology has invaded our home,” Jessie tells Woody. “I’m losing Bonnie to this gadget.”

Woody responds, “Toys are for enjoyment, but tech is for everything else.”


Will “Toy Story 5” tug at the emotions of young viewers and encourage them to reconsider the implications of excessive screen time? That could be unlikely. However, at the very least, it provides them with content that isn’t as dull as Cocomelon.

Meta’s metaverse is moving away from virtual reality

On Thursday, Meta announced a major update for its immersive virtual world, Horizon Worlds, signaling a retreat from the metaverse concept. The tech giant said that Horizon Worlds will become “almost exclusively mobile” and that it is “explicitly separating” its Quest VR platform from the virtual world.

Since 2020, Meta’s Reality Labs unit, which builds its VR and smart glasses products, has lost nearly $80 billion. The changes to Horizon Worlds, along with other recent moves, suggest that Meta is thoroughly rethinking its VR ambitions.

Recently, the company reportedly laid off about 1,500 staff from its Reality Labs division — approximately 10% of the department’s workforce — and closed multiple VR gaming studios. Furthermore, it was reported that the VR fitness application Supernatural, which Meta acquired in 2023, will cease producing new content and will transition into “maintenance mode.”

Initially launched in 2021 as a VR platform, Horizon Worlds was later extended to web and mobile platforms. On Thursday, Meta stated that to “truly change the game and access a considerably larger market, we’re fully committing to mobile.”

The mobile-first focus positions Horizon Worlds to compete with established platforms like Roblox and Fortnite.

“We’re in an excellent position to provide synchronous social gaming experiences at scale, thanks to our unique capability to link these games with billions of individuals on the world’s largest social networks,” Samantha Ryan, VP of content at Reality Labs, remarked in the blog entry. “You began to see this strategy take shape in 2025, and now, it’s our primary focus.”

Ryan also emphasized that Meta continues to prioritize VR hardware.


“We possess a strong roadmap for upcoming VR headsets designed for various audience segments as the market evolves and matures,” Ryan noted.

Meta has effectively forsaken its metaverse aspirations in favor of AI. After redirecting its Reality Labs investments from the metaverse, Meta is now concentrating on creating AI wearables and enhancing its proprietary AI models.

In last month’s earnings call, Meta CEO Mark Zuckerberg remarked, “It’s difficult to envision a world in several years where the majority of glasses that individuals wear aren’t AI glasses.”

The executive also mentioned that sales of Meta’s glasses have tripled over the past year, dubbing them “some of the fastest-growing consumer electronics in history.”

Lucid Motors reduces its workforce by 12% in its pursuit of profitability.

Lucid Motors plans to reduce its workforce by 12% to “enhance operational efficiency and allocate our resources effectively as we progress towards profitability,” based on an internal memo acquired by TechCrunch.

The memo states that hourly workers in manufacturing, logistics, and quality assurance are exempt from these layoffs. While the exact number of layoffs remains uncertain, it is expected to be in the hundreds. As of the end of 2024, Lucid Motors reported a global full-time employee count of 6,800.

“Parting ways with colleagues is always challenging,” interim CEO Marc Winterhoff wrote in the memo. “We appreciate the efforts of those affected by these decisions, and we are offering severance packages, bonuses, ongoing health benefits, and transition assistance to support them during this time.” The company did not immediately respond to a request for comment.

These layoffs occur as the firm is actively increasing its production and distribution of the Gravity SUV. Following initial production and quality challenges with the Gravity, Lucid Motors has regained momentum and successfully doubled its output for 2024 compared to the previous year.

Additionally, the company is set to unveil a more budget-friendly mid-size electric vehicle later this year, projected to retail around $50,000. It is also working with Uber and Nuro, a self-driving vehicle firm, to launch a robotaxi service in the San Francisco region this year. Lucid Motors will disclose its financial results for 2025 next week.

“Crucially, today’s measures do not alter our strategy,” Winterhoff detailed in the memo. “Our fundamental priorities remain intact, and our attention is still on commencing production of our Midsize platform. With careful execution, we also aim for further entry into the robotaxi market, continuing advancements in ADAS and software, and boosting sales of Lucid Gravity and Air in both current and new regions.”

Lucid Motors has nearly completed a year without a permanent CEO. Peter Rawlinson, who served as chief executive and chief technology officer, unexpectedly resigned on February 25, 2025. Since his departure, Lucid Motors has experienced considerable changes within its executive team, including the exit of the chief engineer, who filed a lawsuit against the company in December for wrongful dismissal and discrimination. (The claims have been described by Lucid Motors as “absurd.”)


AI’s promise to independent filmmakers: Faster, cheaper, lonelier

In rural Hawai’i, a Filipino man treads through the garden of his formative years, his steps rustling the grass. The chorus of chirping birds adds to the tropical symphony as he nears a shrine situated at the foot of a starfruit tree. He stoops to examine a black-and-white image of a woman, her hair styled in a side part typical of the 1950s. 

Abruptly, a strong breeze rattles the branches of the tree, sending the shrine’s items tumbling. The man retreats, stumbles over a root, and strikes his head. Upon regaining consciousness, he finds himself in a shadowy, fog-laden forest, with a woman donning a clay mask looming over him, wielding a sword. 

“Who dares to slumber beneath the sacred tree?” she queries in Ilocano, a language prevalent in Hawaii’s Filipino community, while the sword hovers at his throat. He admits to feeling disoriented and makes an attempt to escape. She pursues him, alternating between sprinting and gliding through the air. He falls once more. She presses on, sword raised high. In an act of desperation, he throws a stone, shattering her clay mask and unveiling half of her face. 

“Mom?” he inquires. 

This marks the beginning of “Murmuray,” a short film crafted by independent director Brad Tangonan. Every aspect of this film resonated with his previous creations, from the richly tactile nature scenes to the dreamlike, softly muted highlights. 

The singular distinction? He produced it using AI. 

Tangonan was one of ten filmmakers selected for Google Flow Sessions, a five-week program that gave artists access to Google’s suite of AI tools, including Gemini, the image generator Nano Banana Pro, and the video generator Veo, to create short films.


Each film showcased a distinct perspective. Hal Watmough’s “You’ve Been Here Before” mixed hyperrealistic, lifelike imagery with playful cartoon elements to whimsically delve into the significance of a morning routine, whereas Tabitha Swanson’s “The Antidote to Fear is Curiosity” presented a more abstract, philosophical dialogue regarding the interplay between AI and our identities. 

None of the short films, screened at Soho House New York late last year, felt like generic AI output. Every independent filmmaker I interviewed said that AI enabled them to tell stories they otherwise could not have told because of budget or time constraints.

“I perceive all these tools, whether it’s a camera or generative AI, as instruments for an artist to convey their vision,” Tangonan shared with me after the screenings. 

This viewpoint that AI is merely another instrument for creators is evidently the message that Google is keen to promote. Google is correct; as video generation technologies advance, AI will increasingly integrate into a creator’s toolkit. 

By 2025, firms like Google, Runway, OpenAI, Kling, Luma AI, and Higgsfield had evolved significantly beyond the uncanny, prompt-driven novelties of the preceding year. The AI video sector, backed by billions in venture capital, is transitioning from prototype phase to post-production.

This age of AI proliferation that promises to “democratize access” to the film industry simultaneously threatens to diminish jobs and creativity, smothering them beneath an avalanche of low-quality work. The existential repercussions have pitted creatives against one another. Those who embrace AI might be regarded as complicit; those who abstain face the risk of obsolescence. 

The dilemma isn’t whether these tools should be part of the toolkit — they are arriving, whether welcomed or not. The critical inquiry should be: What type of filmmaking will endure when the industry prioritizes speed and volume over quality? And what transpires when individual creators wield these tools to craft works of genuine significance?

But is it slop?

Filmmaker Keenan MacWilliam used AI to animate scanned images of plants and fish in her short film “Mimesis.” Image Credits: Keenan MacWilliam

Numerous criticisms against AI in filmmaking have surfaced — even from some of the industry’s most renowned figures. 

Filmmaker Guillermo del Toro stated last October that he would prefer death over employing generative AI in filmmaking. James Cameron expressed in a recent CBS interview that the notion of generating actors and emotions through prompts is “terrifying,” suggesting that generative AI merely regurgitates a composite average of all that humanity has previously created. 

Werner Herzog remarked that the films he has observed generated by AI “lack soul.” He noted: “The common denominator, and nothing beyond this common denominator, can be identified in these creations.”

Cameron and Herzog contend that AI is seizing creative control from humans and cannot possibly depict their personal lived experiences. 

“It’s simple to harbor anger toward AI as an abstract concept, but it’s more challenging to resent an individual who has crafted something intimate,” Watmough commented to TechCrunch. 

Tangonan, who categorizes “Murmuray” as a “family narrative,” aligns with that viewpoint. 

“AI acts as a facilitator,” Tangonan expressed. “I’m the one making all the creative choices. When viewers come across ‘AI slop’ online, it’s often just the lowest common denominator material. And, yes, if you relinquish control to AI, that’s what you’ll receive. But if you retain your unique voice, perspective, and style, you’ll end up with something distinctive.” 

Using AI in filmmaking goes beyond merely prompting a film into existence. Tangonan, for instance, wrote the script for “Murmuray” himself and compiled visual references for his shot list. He then fed that material into Nano Banana Pro to generate images that matched his aesthetic, which served as the basis for video generation.
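That workflow, from script to shot list to reference-conditioned keyframes to generated clips, can be sketched in code. The generate_image and generate_video functions below are hypothetical placeholders for whatever tools a filmmaker plugs in (Tangonan used Nano Banana Pro and Veo), not real API calls:

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    description: str  # a line from the director's shot list
    reference_images: list = field(default_factory=list)  # visual references

def generate_image(shot):
    # Hypothetical placeholder: an image model conditioned on the shot
    # description and reference images would return a styled keyframe here.
    return f"keyframe for: {shot.description}"

def generate_video(keyframe, shot):
    # Hypothetical placeholder: a video model animates the approved keyframe.
    return f"clip from {keyframe}"

def produce(shot_list):
    clips = []
    for shot in shot_list:
        keyframe = generate_image(shot)  # the director reviews and regenerates
        clips.append(generate_video(keyframe, shot))
    return clips
```

The point of the structure is that the human-authored script and references sit upstream of every generation step, which is where, as Tangonan puts it, the creative choices live.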

Filmmaker Keenan MacWilliam also took care to ensure her short film “Mimesis,” a fictional guided meditation, was a “true extension of [her] visual style, rather than a ‘blender’ of other creators’ works.”

MacWilliam scripted and recorded her own narration for the meditation, which was both relaxing and amusing. On-screen, against a dark, watery background, psychedelic visuals of flowers and plants merged, transformed into smoke, morphed into seahorses, and swam away.

All visuals were sourced from MacWilliam’s personal collection of scanned flora and fauna — she takes her scanner wherever she goes. 

“I dedicated significant time to mastering apps that utilized my own dataset, which I then referenced,” MacWilliam informed TechCrunch, noting that she collaborated with her long-time composer and sound designer on the project. “I opted to avoid using AI for anything that I could have filmed or could have asked my collaborators to animate. My objective was to unveil new forms of expression for my established themes and style, not to replace the roles of those I enjoy working with.”

This desire to leverage AI only when collaboration with other humans was unfeasible or when the peculiar nature of AI output complemented the story was a recurring theme among the filmmakers I spoke to at the Google Flow event.

Sander van Bellegem’s “Melongray,” for instance, explored life’s acceleration through mesmerizing visualizations. In one scene, a salamander transforms into a balloon. The moment wasn’t in his original script; he was inspired by AI’s capacity to bend the rules of imagination and physics.

To be [efficient] or not to be?

Contemporary film studio budgets are being strained by escalating filming expenses, the transition to streaming, and corporate consolidation marked by risk aversion. Consequently, substantial investments are reserved for safe revenue streams (consider: yet another Marvel movie), while original mid-budget films have nearly vanished. 

Integrating AI into the equation risks intensifying studios’ scarcity mindset to the extent that they may attempt to eliminate anything that can be — actors, sets, lighting — disregarding art and quality. However, the efficiencies afforded by AI could potentially lower barriers, making it feasible for film studios to create original works. 

Cameron himself acknowledged in his CBS interview that generative AI could reduce the costs of visual effects, potentially paving the way for more imaginative science fiction and fantasy films — ventures that are currently reserved for established intellectual properties like “Avatar.”

The scene in “Murmuray” featuring the woman soaring through the forest would have necessitated costly visual effects or complex rigging on set, both of which were beyond the budget of a short film, according to Tangonan. 

Nevertheless, even filmmakers who recognize the advantages of efficiency comprehend the potential threats to artistic expression. 

“In general, I believe that efficiency does not foster creativity,” MacWilliam remarked.  

Empowered and isolated

Hal Watmough’s short film “You’ve Been Here Before” humorously examines the significance of morning routines. Image Credits: Hal Watmough

For independent filmmakers, having access to such potent tools is both a blessing and a curse. It does “democratize access,” indeed, but it also results in solitary work. The more one can accomplish independently, the lesser the incentive to collaborate. 

“I recognize that I’m a one-man show, and I’ve created all this on my own…but that should never represent how anyone narrates a story or produces a film,” Watmough mentioned to TechCrunch, acknowledging that a friend who is an actor lent his voice for his short. “It ought to be a collaborative endeavor because greater involvement leads to broader accessibility and deeper connection with the audience.”

Directors make creative choices, but they don’t make all of them. The filmmakers I consulted found themselves unexpectedly taking on roles such as set designer, lighting director, and costumer — responsibilities requiring skills they lacked. This was exhausting and distracting, diverting them from the work they genuinely cared about. It was disconcerting to consider how an entire creative ecosystem could be so rapidly disrupted. 

The filmmakers I spoke to voiced their preference not to substitute actors with AI, although some acknowledged that AI-generated performers seem inevitable for smaller studios. The technology for generating actors, their emotions, and movements already exists and continues to improve. AI video startups like Luma AI, which raised a staggering $900 million Series C last November, are developing technologies that allow for an actor’s performance to be captured once, only to use AI to alter the character, attire, and setting. 

“Ideally, I would collaborate with live actors, cinematographers, department heads, and the full crew to create something extraordinary, utilizing AI to complement our efforts where on-set limitations arise, whether due to budget constraints or time,” Tangonan stated. 

If artists don’t define AI, studios will

“Creating any artistic work that incorporates new technology necessitates a level of introspection and a readiness to engage in dialogue surrounding the work,” MacWilliam remarked.

“These are tools,” she added. “How will you wield the tool? Will you maintain ethical standards? Will you raise pertinent questions? Will you be open and share knowledge?”

However, many individuals do not perceive AI tools as neutral. Beyond labor replacement issues, copyright dilemmas persist. AI video generation startup Runway reportedly scraped thousands of hours of YouTube videos and copyrighted media, while entities like Google, OpenAI, and Luma AI have faced scrutiny for potentially using copyrighted films and stock footage without consent. (Some tools, like Moonvalley’s Marey, exclusively utilize openly licensed data.) Furthermore, the environmental implications are alarming — some estimates indicate that generating mere seconds of AI video could consume as much electricity as several hours of streaming. 

Unsurprisingly, many of the filmmakers I spoke to said they faced stigma for exploring AI in their work.

“Whenever I post content online, many of my filmmaking peers display an immediate, reflexive response advocating that we should all adhere to the principle of not using any of these tools,” Tangonan remarked. “I simply disagree with that.” 

If filmmakers shy away from discussing how AI can be employed ethically, the conversation may end up being dictated by those who prioritize efficiency over art, rather than by artists seeking responsible utilization.

“The film industry is struggling because innovation is lacking and costs are spiraling. We require tools like this for it to thrive,” Watmough asserted. “It’s crucial that individuals engage with AI because if we don’t, it will evolve into something unrecognizable, and that lack of sustainability is concerning.”

Correction: An earlier version of this article incorrectly identified Ilocano as a Hawaiian dialect of Filipino. Ilocano is a language originating from the northern Philippines and is commonly spoken among Filipino communities in Hawaii.

OpenAI deepens India push with Pine Labs fintech partnership

As India pitches itself as a global hub for applied artificial intelligence, OpenAI has partnered with Pine Labs to integrate AI-driven reasoning into the fintech firm’s payments stack, automating settlement and invoicing workflows in a move the companies say could help accelerate AI-led commerce in India.

The partnership will see Pine Labs embed OpenAI’s application programming interfaces — software tools that let companies plug AI into their existing systems — within its payments and commerce infrastructure, the companies said on Thursday, all with the aim of enabling AI-assisted settlement, reconciliation, and invoicing workflows.

The deal underscores OpenAI’s broader push to expand its footprint in India, one of its fastest-growing markets, as it looks to move beyond being known primarily as the maker of ChatGPT and embed its technology into education, enterprise, and infrastructure. Earlier this week, OpenAI partnered with leading Indian engineering, medical, and design institutions to bring AI tools into higher education, betting that India’s large developer base and more than a billion internet users will play a central role in the next phase of AI adoption.

Pine Labs is already using AI internally to automate parts of its settlement and reconciliation process, cutting the time it takes to clear daily settlements from hours to minutes, according to chief executive B. Amrish Rau. The Noida-based company previously relied on manual checks by dozens of employees to process funds from multiple banks before markets opened each day, a workflow that is now largely handled by AI-driven systems, he said in an interview.

For Pine Labs, the partnership is intended to extend those AI-driven efficiencies beyond internal operations to merchants and corporate clients, starting with business-to-business use cases such as invoice processing, settlements and payments orchestration, Rau told TechCrunch. He noted the company sees faster adoption in B2B workflows, where AI agents can handle large volumes of repetitive financial tasks under predefined rules, before similar capabilities reach consumer-facing payments.

“People talk about retail AI, but the bigger impact of all of this is really efficiency improvement, especially in B2B,” Rau said. “If you look at invoicing and settlement, those are workflows where agents can actually drive the process end to end, and that’s where adoption can happen faster.”
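The kind of rule-governed B2B workflow Rau describes can be sketched in miniature. The example below is purely illustrative: the `Invoice` and `Settlement` records, field names, and matching tolerance are all invented for this sketch and do not reflect Pine Labs’ actual systems, which are not public. In a real agent-led pipeline, a model call (for instance via OpenAI’s API) would typically extract structured fields from messy invoice documents first, and a deterministic matching layer like this one would then apply the predefined rules.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    ref: str          # hypothetical invoice reference ID
    amount_inr: float # invoice amount in rupees

@dataclass
class Settlement:
    ref: str
    amount_inr: float

def reconcile(invoices, settlements, tolerance=0.01):
    """Match incoming settlements to invoices by reference ID,
    flagging amount mismatches and unknown references for review."""
    by_ref = {inv.ref: inv for inv in invoices}
    matched, mismatched, unknown = [], [], []
    for s in settlements:
        inv = by_ref.get(s.ref)
        if inv is None:
            unknown.append(s.ref)           # no invoice on file
        elif abs(inv.amount_inr - s.amount_inr) <= tolerance:
            matched.append(s.ref)           # clean match
        else:
            mismatched.append(s.ref)        # amounts disagree
    return matched, mismatched, unknown
```

The point of the split is the one Rau draws: the repetitive, high-volume matching runs under fixed rules, while the AI layer handles the unstructured input ahead of it.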

The rollout of more autonomous, agent-led payment workflows will move faster in overseas markets where regulations already allow such transactions, Rau said, while India is likely to see a more gradual adoption focused on AI-assisted commerce rather than fully agent-initiated payments. He said that Pine Labs is already prototyping agent-driven payments in parts of the Middle East and Southeast Asia, even as Indian regulations require tighter controls on how payments are authorized.


For OpenAI, the partnership offers a route deeper into India’s payments and enterprise ecosystem as it looks to move beyond consumer-facing tools and embed its models into high-volume, regulated workflows. Rau said the collaboration is aimed at increasing merchant stickiness and expanding Pine Labs’ role from a payments processor to a broader commerce platform, with higher transaction volumes over time translating into incremental revenue.

Pine Labs says it works with more than 980,000 merchants, 716 consumer brands, and 177 financial institutions, and has processed over 6 billion cumulative transactions valued at over ₹11.4 trillion (about $126 billion), per its prospectus published last year. The fintech operates across 20 countries, including Malaysia, Singapore, Australia, parts of Africa, the UAE, and the U.S., giving the OpenAI partnership reach across both Indian and international markets.

Rau said the partnership does not involve revenue sharing between the two companies, with Pine Labs not taking a cut if its merchants choose to embed OpenAI’s tools. “We’ve kept it completely independent of each other — anything related to payment and payment services, we will get the benefit of it, and anything related to OpenAI revenues will go to them,” he said.

The arrangement, Rau added, is also non-exclusive. He compared it to OpenAI’s partnership with Stripe in the U.S. and said Pine Labs remains open to working with other AI providers.

Rau said Pine Labs is building additional security and compliance layers around AI-driven workflows to ensure that sensitive merchant and consumer transaction data remains protected, as the company integrates AI more deeply into its payments systems. He said the focus is on ensuring transactions remain secure and compliant even as more workflows are automated by AI.

Pine Labs’ interest in AI-driven commerce builds on earlier work through its Setu unit, which has experimented with agent-led bill payment experiences using chatbots including ChatGPT and Anthropic’s Claude. Separately, India also began piloting consumer payments directly through AI chatbots last year.

The new announcement comes as India hosts its AI Impact Summit in New Delhi, where global AI companies including OpenAI, Anthropic, and Google are showcasing their latest capabilities alongside Indian startups demonstrating AI applications aimed at large-scale deployment across sectors such as finance, healthcare, and education.