Defense company Shield AI hits a $12.7B valuation, up 140%, following a contract with the US Air Force.

Shield AI, a company specializing in autonomous military aircraft, has secured $1.5 billion in Series G financing, achieving a post-money valuation of $12.7 billion, as the company declared on Tuesday. The funding round was spearheaded by the private equity firm Advent (which claims a $1 billion budget dedicated to defense technology) alongside a JPMorganChase investment consortium.

Moreover, Shield AI has sold $500 million in preferred shares to funds administered by Blackstone and has also arranged a $250 million loan for future use. This financial influx is facilitating Shield’s acquisition of Aechelon Technology, a developer of flight simulation technology used for training U.S. military aviators. The specifics of that acquisition remain undisclosed.

This latest funding round follows Shield’s previous raise of $240 million, which granted it a $5.3 billion valuation in March 2025. This signifies a remarkable 140% increase in value over a single year. The impetus for this surge is clear: Shield AI’s Hivemind autonomy software was chosen in February to collaborate with the U.S. Air Force on its Collaborative Combat Aircraft drone prototype initiative.

In a noteworthy development, Shield’s software has been picked to complement competitor Anduril’s “Fury” autonomous fighter jet. Anduril possesses its own software, known as Lattice, which controls its Fury aircraft. However, the Air Force evidently desires to avoid reliance on a single vendor for its complete next-gen warfighter drone lineup.

Nonetheless, Anduril is unlikely to be much troubled by sharing the spoils. The company raised $2.5 billion at a $30.5 billion valuation in June, and rumors suggest it aims to raise up to $8 billion at a $60 billion valuation.

Other participants in Shield’s Series G funding round include Snowpoint Ventures, InnovationX, Riot Ventures, Disruptive, and Apandion.

Apple has hardened iOS 26 security, but leaked hacking tools still put millions at risk of spyware attacks.

The prevailing belief among iPhone security specialists has been that uncovering vulnerabilities and crafting exploits for iOS is a challenging endeavor, requiring substantial time, resources, and skilled teams of researchers to penetrate its security layers. Consequently, iPhone spyware and zero-day vulnerabilities (flaws unknown to the software vendor until they are exploited) were uncommon and typically used in narrow, targeted attacks, as Apple itself has stated.

However, in the past month, cybersecurity analysts at Google, iVerify, and Lookout have identified several extensive hacking campaigns using tools known as Coruna and DarkSword, which have been indiscriminately targeting victims globally who are not running Apple’s latest software. Those behind the breaches include Russian intelligence agents and Chinese hackers, who reach their victims through compromised websites or counterfeit pages, potentially enabling them to extract information from numerous victims.

Currently, some of these tools have surfaced online, allowing anyone to utilize the code and easily execute their own attacks against Apple users on older iOS versions.

Apple has dedicated extensive resources to new security and development technologies, such as implementing memory-safe code in its latest iPhone models and introducing features like Lockdown Mode specifically designed to combat potential spyware threats. The aim has been to enhance the security of modern iPhones and bolster the assertion that the iPhone is exceptionally difficult to compromise.

Nevertheless, many older iPhones that remain in use are now more accessible targets for spyware-wielding spies and cybercriminals.

Currently, there exist essentially two categories of iPhone users.

Users operating the latest iOS 26 on the newest iPhone 17 models unveiled in 2025 benefit from a new security feature termed Memory Integrity Enforcement, designed to prevent memory corruption vulnerabilities, among the most frequently exploited weaknesses in spyware and phone unlocking operations. DarkSword significantly relied on these memory corruption vulnerabilities, as indicated by Google.

In contrast, there are iPhone users who continue to operate the preceding version of Apple’s mobile operating system, iOS 18, or even earlier iterations, which have previously been susceptible to memory-based hacks and other exploits.

Get in Touch

Do you possess additional information regarding DarkSword, Coruna, or other government hacking and spyware tools? From a non-work device, you may reach out to Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram, Keybase and Wire @lorenzofb, or by email.

The emergence of Coruna and DarkSword suggests that memory-based assaults may persist in troubling users of older iPhones and iPads that are lagging behind the newer, more memory-secure models.

Experts from iVerify and Lookout, two cybersecurity firms with a commercial interest in selling mobile security solutions, assert that Coruna and DarkSword may also challenge the long-standing belief that iPhone hacks are infrequent.

iVerify’s co-founder Matthias Frielingsdorf mentioned to TechCrunch that mobile assaults are now “widespread,” though he also indicated that attacks leveraging zero-days against the latest software versions “will always be charged at a premium rate,” suggesting that these are not intended for large-scale hacks.

Patrick Wardle, a security researcher specializing in Apple devices, pointed out that one issue is the tendency to label attacks on iPhones as rare or sophisticated simply because they are infrequently documented. The reality, he said, is that these attacks may exist but are not always detected.

“Characterizing them as ‘highly advanced’ is akin to calling tanks or missiles advanced,” Wardle expressed to TechCrunch. “It’s accurate, but it overlooks the essential point. That’s merely the standard capability at that level, and all (most) nations possess them (or can acquire them for the appropriate cost).”

Another issue brought to light by Coruna and DarkSword is the apparent growth of a “second-hand” market, creating a financial incentive “for exploit developers and individual brokers to essentially receive compensation twice for the same exploit,” according to Justin Albrecht, principal researcher at Lookout.

Especially once the original exploit is patched, it makes sense for brokers to resell it before everyone has updated.

“This is not a one-off occurrence, but rather an indication of future trends,” Albrecht remarked to TechCrunch.

Google is introducing Search Live worldwide

On Thursday, Google revealed that it is rolling out its AI-driven conversational search feature, Search Live, worldwide across all languages and regions where AI Mode is accessible. This rollout will provide access to the feature for individuals in over 200 countries and territories, according to Google.

Initially introduced in July 2025, Search Live empowers users to aim their phone camera at items to receive instant support, facilitating interactive dialogues that leverage the visual context from the camera stream. Before this global rollout, Search Live was limited to the U.S. and India.

The expansion is made possible by Google’s latest audio and voice model, Gemini 3.1 Flash Live. This model enhances conversations to be more natural and intuitive, as stated by the tech giant.

To activate the feature, users must launch the Google app on Android or iOS and select the Live icon beneath the Search bar. From that point, they can verbally pose a question to receive an audio reply, followed by additional inquiries to continue the dialogue. Users also have the choice to delve deeper by browsing web links.

“Search Live is intended for those instances when you require immediate assistance, and typing out a question simply won’t suffice,” Google mentioned in a blog entry. “If you wish to inquire about something in front of you, like how to assemble a new shelving unit, you can activate your camera to provide visual context. This enables Search to see what your camera perceives and offer valuable suggestions, along with links to further information online.”

Google highlights that Search Live can also be accessed if you are already using Google Lens by selecting the “Live” option at the bottom of the display.

The tech powerhouse additionally announced that Google Translate’s “Live Translate” feature is coming to iOS. This function, which allows you to hear immediate translations in your headphones, is also being expanded to additional countries, including Germany, Spain, France, Nigeria, Italy, the United Kingdom, Japan, Bangladesh, and Thailand.

Google asserts that this expansion allows users on both Android and iOS to receive real-time translations on any headphones in over 70 languages.

The two largest dramas in Silicon Valley have converged: LiteLLM and Delve

This incident is one of those real-life situations from Silicon Valley that could easily be part of an HBO parody. This week, a truly dreadful piece of malware was found in an open source initiative created by LiteLLM, a graduate of Y Combinator.

LiteLLM allows developers to easily access numerous AI models and offers features like expense management. It has become a significant success, reportedly downloaded up to 3.4 million times daily, according to Snyk, one of several security firms tracking the situation. The project has garnered 40K stars on GitHub and thousands of forks (copies that other developers use as a base to modify and customize).

The malware was identified, recorded, and made public by research scientist Callum McMahon from FutureSearch, a company providing AI agents for online research. The malware infiltrated through a “dependency,” which refers to other open source software that LiteLLM depended on. It subsequently stole the login information of everything it encountered. With those credentials, the malware accessed more open source packages and accounts to gather additional credentials, and so forth.

The malware crashed McMahon’s computer after he downloaded LiteLLM, which is what prompted him to dig in and uncover it. Ironically, it was a flaw in the malicious code itself that caused the crash. The code was so sloppily designed that both he and renowned AI researcher Andrej Karpathy concluded it must have been written without much care.

The LiteLLM team has been tirelessly addressing the issue this week, and the good news is that it was detected relatively quickly, probably within hours.

There’s an additional aspect to this story that users on X are eagerly discussing. As of March 25, when we checked, LiteLLM was still proudly advertising on its site that it has achieved two significant security compliance certifications, SOC 2 and ISO 27001.

However, it utilized a startup named Delve for those certifications.

Delve is the Y Combinator AI-powered compliance startup that has faced accusations of deceiving its clients regarding their actual compliance status by allegedly creating fabricated data and employing auditors who merely approve reports. Delve has rejected these claims.

LiteLLM website features security cert by Delve. Image Credits: LiteLLM

There is an aspect of nuance here that merits comprehension. Such certifications are meant to demonstrate that a company has robust security protocols established to reduce the likelihood of occurrences like this. Certifications do not inherently safeguard a company, like LiteLLM, from malware attacks. While SOC 2 is expected to address policies regarding software dependencies, malware can still infiltrate.

Nevertheless, as engineer Gergely Orosz remarked on X when he noticed individuals mocking it online, “Oh no, I thought this WAS a joke. … but no, LiteLLM *really* was ‘Secured by Delve.’”

As for LiteLLM, CEO Krrish Dholakia declined to comment on the use of Delve. He has been busy dealing with the fallout of being a target of the attack.

“Our immediate focus is the ongoing investigation with Mandiant. We are dedicated to sharing the technical insights gained with the developer community once our forensic review is concluded,” he informed TechCrunch.

ByteDance's latest AI video creation model, Dreamina Seedance 2.0, arrives in CapCut.

While OpenAI appears to be scaling back its activities in the video generation arena with the closure of its Sora app, ByteDance announced on Thursday that its latest audio and video model, Dreamina Seedance 2.0, is currently being deployed in its editing platform, CapCut.

According to ByteDance, the model enables creators to compose, refine, and synchronize video and audio content using prompts, images, or reference videos.

The gradual rollout will commence with CapCut users located in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam, with additional markets to be included over time.

The announcement regarding the launch in CapCut comes after a recent report indicated that the model’s worldwide rollout would be halted while resolving intellectual property concerns that had prompted criticism from Hollywood regarding suspected copyright violations. This likely accounts for the limited number of regions where the model is presently accessible within CapCut.

In China, the model is available to users of ByteDance’s Jianying app.

The video generation model operates without the need for reference images; even a brief, few-word description from the creator is sufficient, ByteDance said in its announcement. The model excels at producing realistic textures, motion, and lighting from various visual perspectives and angles, which the company claims can be used to edit, enhance, or fix creators’ original footage.

Another potential application would involve creators experimenting with possible ideas grounded in early concepts or drafts prior to filming the actual video.

Moreover, Dreamina Seedance 2.0 can cater to a broad spectrum of content types, including cooking recipes, fitness tutorials, business or product presentations, and action-intensive videos, where AI video models have traditionally encountered difficulties, according to the company.

Upon launch, the model accommodates clips lasting up to 15 seconds across six different aspect ratios.

In CapCut, the model will be introduced across various functionalities, encompassing editing tools like AI Video and content generation features such as Video Studio. It will also be incorporated into ByteDance’s AI generation platform, Dreamina, and its marketing hub, Pippit.

Considering its potential to produce realistic content, ByteDance has implemented safety measures to ensure the model cannot generate videos from images or videos containing actual faces. CapCut will also prohibit the unauthorized creation of intellectual property. (Nevertheless, if the safeguards were functioning effectively, the model would currently be available in the United States, suggesting that further adjustments might still be underway.)

The output created by Dreamina Seedance 2.0 will feature an unobtrusive watermark, which will help identify content generated with the model when shared externally, ByteDance explained. This could facilitate actions such as takedown requests from rights holders should the model produce copyrighted material.

ByteDance stated that it plans to collaborate with professionals and creative communities as the model is launched to enhance and refine its functionalities.

Why hiring the oddballs works

In the fast-paced world of startups, securing a dependable team is essential for early-stage ventures. In this episode of Build Mode, Isabelle Johannessen converses with Isaiah Granet, the CEO and co-founder of Bland, a voice AI firm that ascended from pre-seed to Series B within a mere 10 months. Their workforce has expanded to 75 individuals, and Granet shares strategic insights on how the company uncovered hidden talent in unexpected arenas. 

With a founding team straight out of academia, Bland’s initial hires were chosen based on their enthusiasm rather than their backgrounds. 

“We spent a considerable amount of time searching for our founding engineer. The candidate we ultimately chose had only a few months of experience at an insurance firm in Iowa. Before that, he managed a Taco Bell, and prior to that, he worked on a factory floor,” Granet shared with Build Mode, noting that they discovered him via his GitHub profile. 

“What impressed me wasn’t his technical skills,” Granet remarked. “We probed him about his hobbies, and I’ve never seen someone grin so broadly. He replied, ‘I enjoy shipping code.’”

Following that hire, Bland began to prioritize individuals who were fervently dedicated to their interests and as dynamic as the company itself. The team now includes philosophy majors and beekeepers, all from outside the conventional tech landscape. 

“There are individuals who possess experiences that might not shine on résumés, but are incredibly fascinating. It highlights a level of dedication, which can be applied to any field,” Granet noted.

As the company has expanded over the past year, the leadership team has had to master not only the hiring process but also keeping the team engaged and satisfied. In the episode, Granet elaborates on how Bland established a fair compensation model and made sure all initial hires were clear about their equity. 

However, this recruitment strategy does come with challenges, he mentioned. Inexperienced scrappy talent often requires time to acclimate to their roles, which necessitates adjustments by the company. 

Bland believes that if extra resources are allocated to an employee, then that employee should reciprocate by investing effort into the company. “If outcomes aren’t being produced, we expect you to be at the office six days a week, 12 hours each day,” Granet stated. 

This hiring approach can also pose difficulties in scaling, particularly given Bland’s rapid growth. Granet explained that the co-founders maintain a very hands-on approach with the team to ensure they reach the high-performance levels needed. 

The founding team can significantly influence the fate of an early-stage startup, and Bland’s distinctive hiring practices alongside its remarkable growth illustrate the advantage of accessing unique talent. “For the most part, I believe early-stage startup founders should trust their instincts and everyone will develop their own effective hiring methods,” Granet concluded.

Apply to Startup Battlefield: We are seeking early-stage companies with an MVP. Nominate a founder (or yourself), mentioning you learned about Startup Battlefield through the Build Mode podcast. Submit your application here.  

TechCrunch Disrupt 2026: Join us for TechCrunch Disrupt from October 13 to 15 in San Francisco, where the Startup Battlefield 200 will take the spotlight. If you wish to support them or network with countless founders, VCs, and tech aficionados, secure your tickets.

Isabelle Johannessen serves as our host. Build Mode is produced and edited by Maggie Nye. Audience Development is managed by Morgan Little. Special gratitude to the Foundry and Cheddar video teams.

A powerful hacking tool has leaked online, endangering millions of iPhones. Here’s what you need to know.

Cybersecurity experts have revealed a series of cyber assaults aimed at Apple users globally. The methods employed in these hacking operations have been named Coruna and DarkSword, with both government operatives and cybercriminals utilizing them to extract information from individuals’ iPhones and iPads. 

It is uncommon to witness extensive hacks targeting users of iPhones and iPads. Over the past ten years, similar incidents have primarily involved attacks against Uyghur Muslims in China and individuals in Hong Kong.

Now, portions of these powerful hacking tools have surfaced online, potentially exposing hundreds of millions of iPhones and iPads running outdated software to data-stealing attacks.

We are dissecting the available information about these recent threats to iPhone and iPad security, as well as the protective measures you can undertake.

What are Coruna and DarkSword?

Coruna and DarkSword are two collections of sophisticated hacking toolkits that encompass various exploits capable of infiltrating iPhones and iPads to extract sensitive data, including messages, browsing history, geolocation, and cryptocurrency information.

The cybersecurity professionals who identified these toolkits report that Coruna’s exploits can compromise iPhones and iPads operating on iOS 13 through iOS 17.2.1, launched in December 2023. 

Conversely, DarkSword includes exploits that can breach iPhones and iPads running newer software, iOS 18.4 through 18.7, released in September 2025, according to Google cybersecurity analysts examining the code.

However, the danger posed by DarkSword is more pressing for the general populace. A portion of DarkSword has been leaked and uploaded to the code-sharing platform GitHub, allowing anyone to access the harmful code and initiate attacks against Apple users on older iOS versions. 

How do Coruna and DarkSword work?

These types of attacks are inherently indiscriminate and perilous, as they can ensnare any individual who visits a specific website hosting the harmful code.

In some instances, victims can be compromised merely by accessing a legitimate website under the management of malicious hackers.

Upon initial infection, Coruna and DarkSword capitalize on several vulnerabilities within iOS, enabling hackers to gain near-total control over the target device, consequently allowing them to extract the user’s private information. This data is then transmitted to a web server managed by the hackers. 

At least some components of the Coruna toolkit, as previously reported by TechCrunch, were initially created by Trenchant, a hacking and spyware division within the U.S. defense contractor L3Harris, which sells exploits to the U.S. government and its leading allies.

Kaspersky has also associated two exploits in Coruna’s toolkit with Operation Triangulation, a complex and likely government-led cyber operation allegedly conducted against Russian iPhone users.

After Trenchant created Coruna — the specifics remain unclear — these exploits seemingly reached Russian spies and Chinese cybercriminals, potentially through one or several intermediaries selling exploits on the dark web. 

Coruna’s trajectory exemplifies once again that formidable hacking tools, even those generated for the U.S. under strict confidentiality protocols, can leak and spread uncontrollably. 

An instance of this occurred in 2017 when an exploit devised by the U.S. National Security Agency, capable of remotely infiltrating Windows computers globally, leaked online. The same exploit was later utilized in the destructive WannaCry ransomware assault, which indiscriminately hacked hundreds of thousands of computers worldwide. 

In DarkSword’s case, researchers have tracked attacks targeting users in China, Malaysia, Turkey, Saudi Arabia, and Ukraine. It remains unclear who originally crafted DarkSword, how it reached various hacking groups, or how it leaked online.

It is unclear who disseminated and uploaded the tools to GitHub, or their motives.

The hacking tools, which TechCrunch has reviewed, are written in the web languages HTML and JavaScript, making them relatively simple for anyone wishing to conduct malicious attacks to configure and self-host. (TechCrunch is not linking to GitHub, as the tools can be employed in harmful attacks.) Researchers on X have already tested the leaked tools by hacking their own Apple devices running vulnerable versions of the company’s software.

DarkSword is now described as “essentially plug-and-play,” as articulated by Justin Albrecht, principal researcher at the mobile security firm Lookout, to TechCrunch. 

GitHub informed TechCrunch that it has not removed the leaked code but will retain it for security research purposes.

“GitHub’s Acceptable Use Policies prohibit posting content that directly supports unlawful active attacks or malware campaigns causing technical harm,” Jesse Geraci, GitHub’s online safety counsel, informed TechCrunch. “However, we do not prohibit the posting of source code that could be used to develop malware or exploits, as the dissemination of such source code has educational merit and ultimately benefits the security community.”

Is my iPhone or iPad vulnerable to DarkSword?

If your iPhone or iPad is outdated, you should strongly consider updating it without delay.

Apple has advised TechCrunch that users operating on the latest versions of iOS 15 through iOS 26 are already shielded.

According to iVerify: “We highly recommend updating to iOS 18.7.6 or iOS 26.3.1. This will mitigate all vulnerabilities exploited in these attack vectors.”

Apple’s own data indicates that nearly one in three iPhone and iPad users are still not utilizing the latest iOS 26 software. This suggests that potentially hundreds of millions of devices remain susceptible to these hacking instruments, as Apple estimates over 2.5 billion active devices globally. 

What if I can’t or don’t want to upgrade to iOS 26?

Apple also mentioned that devices using Lockdown Mode, an optional enhanced security feature first implemented in iOS 16, can also obstruct these particular attacks. 

Lockdown Mode is advantageous for journalists, dissidents, human rights advocates, and anyone who believes they may be targeted based on their identity or occupation. 

Although Lockdown Mode is not flawless, there is no public evidence suggesting that hackers have been able to bypass its protective measures thus far. (We inquired with Apple about whether that assertion still holds true, and will update if we receive a response.) Lockdown Mode has reportedly thwarted at least one attempt to install spyware on a human rights defender’s device.

Conntour secures $7M from General Catalyst and YC to develop an AI search engine tailored for security video systems.

The surveillance technology sector is currently under scrutiny, and not for commendable reasons. Between the controversy over the U.S. Immigration and Customs Enforcement’s use of Flock’s camera network to monitor individuals, and the backlash against home security camera maker Ring for building features that would let law enforcement request footage from homeowners about their surroundings, there is an extensive debate underway about safety, privacy, and the dynamics of surveillance.

However, controversy does not negate market opportunities, and the ongoing advancements in vision-language models have further propelled companies that create novel ways to assist organizations in overseeing activities within their facilities.

Matan Goldner, co-founder and CEO of the video surveillance startup Conntour, emphasizes the significance of ethics in this domain, claiming that his company carefully chooses its clientele. While this might not be perceived as typical startup logic for a company that is still in its early stages, Goldner asserts that Conntour is in a position to do so as it already boasts a number of substantial government and publicly traded clients, including Singapore’s Central Narcotics Bureau.

“Our relationship with such prominent clients enables us to be selective and maintain control […] We truly have autonomy over who utilizes our system, the intended applications, and we can determine what we deem to be ethical and lawful. We apply our discretion and assess specific clients that we’re comfortable collaborating with, based on our understanding of their intended use,” Goldner shared with TechCrunch in an exclusive discussion.

This momentum has assisted Conntour in more than just selecting clients. Investors have taken notice: The startup recently secured a $7 million seed funding round from General Catalyst, Y Combinator, SV Angel, and Liquid 2 Ventures.

Goldner noted that the funding round was concluded within a mere 72 hours. “I think I organized around 90 meetings in about eight days, and just after three days — we started on Monday and wrapped up by Wednesday afternoon,” he explained.

Nevertheless, Conntour’s cautious approach seems wise, particularly considering the immense power of AI tools within this sector. The company’s video platform employs AI models to allow security personnel to query camera feeds using natural language to detect any item, individual, or scenario in the footage, in real time — akin to a Google-like search engine tailored for security video feeds. It can autonomously monitor and identify threats based on predefined rules while generating alerts automatically.

In contrast to traditional systems that rely on predefined definitions or parameters to identify specific objects, movement patterns, or behaviors, Conntour asserts that its system leverages natural and vision language models, giving it a significant level of adaptability and user-friendliness. A user can simply request, “Identify instances of someone in sneakers handing over a bag in the lobby,” and Conntour’s system will swiftly comb through all recorded footage or live video feeds to provide pertinent results.

A snapshot of Conntour’s platform in operation. Image Credits: Conntour

Furthermore, thanks to the integration of AI models, users can effortlessly pose questions regarding the footage and receive textual answers, along with the corresponding video feeds, and even create incident reports.

The key advantage of the company, however, lies in its scalability. Goldner clarified that the platform distinguishes itself from other AI video search solutions by being engineered to efficiently accommodate systems made up of thousands of camera feeds. Notably, he mentioned that Conntour’s system can oversee up to 50 camera feeds using a single consumer GPU, such as Nvidia’s RTX 4090.

The company achieves this by employing a variety of models and logical systems, and then determining which models and systems the algorithm should apply for each query to minimize computational demands while delivering optimal results.

Conntour asserts that its system can be entirely implemented on-site, completely in the cloud, or a combination of both. It can integrate with most existing security infrastructures or function independently as a comprehensive surveillance platform.

Nonetheless, a persistent issue in the video surveillance industry remains: The effectiveness of surveillance is only as good as the quality of the captured footage. For instance, distinguishing details from low-resolution footage taken in a dimly lit parking area with a dirty lens is quite challenging.

Goldner states that Conntour mitigates this challenge by providing a confidence score alongside its search results. If a camera feed’s source is of inadequate quality, the system presents results with low confidence levels.
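A minimal sketch of what confidence-scored results might look like, assuming a hypothetical result format in which each match carries a score in [0, 1]. The point, per Goldner’s description, is that poor-quality feeds still return results rather than silently failing; they simply surface with a low-confidence flag.

```python
# Illustrative sketch (not Conntour's API): rank search matches by
# confidence and flag those below a threshold so users can see that a
# result came from a low-quality feed rather than having it hidden.

def rank_results(results, low_confidence_threshold=0.5):
    """Sort matches by confidence, highest first, and flag weak ones."""
    ranked = sorted(results, key=lambda r: r["confidence"], reverse=True)
    for r in ranked:
        r["low_confidence"] = r["confidence"] < low_confidence_threshold
    return ranked
```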

Looking ahead, Goldner identifies the primary technical challenge as integrating the full extent of LLM capability within the system while retaining efficiency.

“We are focusing on two objectives simultaneously, which tend to contradict each other. On one side, we aspire to offer complete natural language flexibility, LLM-style, allowing for any inquiries. Conversely, we are concerned with efficiency, aiming to utilize minimal resources, as processing [thousands] of feeds is extremely demanding. This conflict represents the most significant technical obstacle we face in our field, and it’s what we are diligently striving to resolve.”

Cohere unveils a voice model that is open source and designed specifically for transcription.


On Thursday, the enterprise AI firm Cohere unveiled its inaugural voice model: Transcribe, an open-source automatic speech recognition model suitable for applications such as note-taking and speech evaluation.

Weighing in at only 2 billion parameters, the model is designed for use with consumer-grade GPUs for those who prefer self-hosting. It presently accommodates 14 languages: English, French, German, Italian, Spanish, Portuguese, Greek, Dutch, Polish, Chinese, Japanese, Korean, Vietnamese, and Arabic.

According to Cohere, Transcribe outperforms models such as Zoom Scribe v1, IBM Granite 4.0 1B, ElevenLabs Scribe v2, and Qwen3-ASR-1.7B Speech on the Hugging Face Open ASR leaderboard, registering an average word error rate (WER) of 5.42, the lowest among all models tested.
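WER, the metric behind those leaderboard numbers, is the word-level edit distance between a reference transcript and the model’s output, divided by the reference length. The following is a generic textbook computation, not Cohere’s or Hugging Face’s evaluation code:

```python
# Word error rate: (substitutions + insertions + deletions) divided by
# the number of words in the reference, computed via the classic
# dynamic-programming edit distance over words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

A WER of 5.42 on the leaderboard therefore means roughly 5.4 word errors per 100 reference words, averaged across the benchmark’s test sets.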

The company asserts that Transcribe achieved an average win rate of 61% against competing models when human judges evaluated its transcriptions based on accuracy, coherence, and usability. Nevertheless, the model lagged behind its competitors in transcribing Portuguese, German, and Spanish.

Cohere notes that Transcribe can handle 525 minutes of audio in just one minute, which is impressive for its model category.
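That throughput claim works out to a real-time factor (RTF) of 525x. A quick back-of-the-envelope check, using only the figure quoted above:

```python
# Real-time factor: minutes of audio transcribed per minute of wall
# clock. 525 minutes per minute means a one-hour recording finishes
# in under seven seconds.

def realtime_factor(audio_minutes: float, wall_clock_minutes: float) -> float:
    return audio_minutes / wall_clock_minutes

rtf = realtime_factor(525, 1)
seconds_for_one_hour = 60 * 60 / rtf  # about 6.9 seconds
```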

The firm intends to incorporate Transcribe into its enterprise agent orchestration platform, North, and is offering the model for free via its API. Additionally, it will be accessible on Model Vault, Cohere’s managed inference platform.

The popularity of speech recognition models is on the rise as the need for note-taking and dictation applications like Granola and Wispr Flow increases.


Earlier this year, Cohere reportedly informed investors that it was on track for an annual recurring revenue of $240 million by 2025, with CEO Aidan Gomez stating that the startup might go public “soon”.

WhatsApp can now create AI-generated replies based on your discussions


WhatsApp is introducing a range of new functionalities and updates, including one that creates AI-driven suggested responses based on your chats. The messaging service, owned by Meta, is also debuting additional features, such as a new method for clearing storage, capabilities for enhancing photos with Meta AI, and more.

Most significant, however, is an enhancement to the app’s “Writing Assistance” feature, which helps users compose messages. Writing Assistance, first introduced last August, already helps users rephrase, proofread, or adjust the tone of their messages. On Thursday, the Meta-owned firm announced in a blog post that the latest enhancement will help users polish their messages further.

Meta likely anticipates that users will leverage its in-app features for message preparation instead of external services like ChatGPT. Naturally, not everyone may be inclined to utilize the new feature, as users probably favor genuine, personal exchanges with friends and relatives over AI-generated texts. (Employing AI for drafting an email is one thing; deploying it for family group chats is entirely different!)

The company asserts that conversations remain confidential, even if individuals utilize Writing Assistance.

To access this feature, users must tap the chat box, select the stickers icon within the typing area, and then tap the pencil-with-sparkles icon that indicates an AI feature.

Regarding storage management, WhatsApp can now assist users in locating and eliminating large files directly within any conversation. This allows users to remove unnecessary items without erasing entire discussions. For example, they can opt to only delete media files when clearing a chat while preserving the chat history.

The company also revealed that users can now employ Meta AI to enhance photos right within a chat. This enables users to perform actions like removing distracting elements from an image, altering the background, or applying a new aesthetic.

Moreover, WhatsApp now facilitates the transfer of chat history from iOS to Android, as well as within the same operating system. Additionally, users now have the capability to be logged into two WhatsApp accounts simultaneously on iOS, a feature already available on Android.

Stickers on WhatsApp are also getting an update: the app will now suggest specific stickers as users type emojis, letting them swap an emoji for a sticker.

These new functionalities are currently being rolled out and will soon be accessible to all users, according to WhatsApp.