Metadata Is Key To Archive Monetization

Executives from Fox News, Sinclair and Hearst Television discussed efforts underway to organize and capitalize on their massive archives at last week’s NewsTECHForum, where efficient, and potentially less expensive, methodologies are beginning to emerge.

Broadcasters want to derive more value from their archives by enriching daily news production, creating original programming for multiplatform distribution and generating new revenues from third-party licensing. But to do so they need to be able to easily search through and access old content, no easy task for legacy broadcasters with decades of analog tapes, and even film canisters, sitting in storage.

Several groups have undertaken large-scale digitization efforts to tackle the problem, with some exploring new AI and ML (machine learning) tools to more efficiently tag and index video. Regardless of the method, generating accurate metadata is key to any archive effort, both for old content and for fresh material being created today, broadcasters said last week at TVNewsCheck’s NewsTECHForum in New York City.

Metadata’s Critical Role

“Before we can actually monetize the archives in a reasonable way, we have to have metadata on it,” said Mike Palmer, AVP, advanced technology/media management for Sinclair. “And in many cases, most cases, we have not been putting good metadata on it.”

Palmer, speaking on the panel “Harvesting the Archive for New Content and Opportunities” moderated by this reporter, said archive metadata must include not only enough information to find content through a media asset management (MAM) system, but also information about the rights attached to that content, since most call-letter stations hold a mix of material they shot themselves and fully own, and derivative material originally sourced from a network news service.
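The kind of archive record Palmer describes can be sketched as a simple data structure combining search fields with a rights field. The field names and values below are purely illustrative, not any particular MAM vendor’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class ArchiveAsset:
    """Illustrative archive record: search metadata plus rights info."""
    asset_id: str
    slug: str                       # short editorial description
    shoot_date: str                 # ISO date the footage was shot
    keywords: list = field(default_factory=list)
    camera: str = ""                # camera model, if carried forward
    geolocation: str = ""           # where the footage was shot
    ownership: str = "unknown"      # "station", "network-derived", or "unknown"

    def licensable(self) -> bool:
        # Only fully owned content can go to third parties
        # without further rights clearance.
        return self.ownership == "station"

clip = ArchiveAsset("KXYZ-1987-0412", "Election night", "1987-11-03",
                    keywords=["election", "ballot"], ownership="station")
```

The point of the `ownership` field is Palmer’s: without it, the only way to answer the rights question later is to find someone who remembers the original air date.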

There is no technical means today to tell whether a station owns a piece of content, Palmer said. That question can usually be answered only by calling the station and (hopefully) finding an employee who was there when the piece first aired.


“How long have we been talking about archives and metadata, but we’re not bringing back basic information about ownership, what camera it was shot on, the date, the geolocation, all this metadata that is in the cameras that we should be carrying forward,” Palmer said. “And we’re recreating the same problem that we’re trying to solve today with AI and ML because we’re simply not putting the right metadata on that content as it moves into the archive.”

Palmer said the culprit for lost camera metadata is often nonlinear editing systems that strip it out during the production process. To combat the problem going forward he sees a solution in the Coalition for Content Provenance and Authenticity (C2PA) standard, as promoted by the Content Authenticity Initiative (CAI). C2PA specifies provenance metadata that survives all the way from camera to distribution. C2PA not only addresses content ownership, but also content authenticity, an issue of growing importance in the age of AI-generated fake images.
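The core idea behind C2PA-style provenance is a chain of claims, one per processing step, each cryptographically linked to the one before it, so any tampering or stripping mid-chain is detectable. The sketch below illustrates that hash-linking concept only; it is not the actual C2PA manifest format, which uses signed binary structures:

```python
import hashlib
import json

def add_claim(chain, actor, action):
    """Append a provenance claim hash-linked to the previous one.
    Mimics the idea behind C2PA manifests (each step records who
    did what), not the real C2PA binary format or signing."""
    prev_hash = chain[-1]["hash"] if chain else ""
    claim = {"actor": actor, "action": action, "prev": prev_hash}
    claim["hash"] = hashlib.sha256(
        json.dumps(claim, sort_keys=True).encode()).hexdigest()
    chain.append(claim)
    return chain

def verify(chain):
    """Check that no claim has been altered or removed mid-chain."""
    prev = ""
    for claim in chain:
        if claim["prev"] != prev:
            return False
        body = {k: v for k, v in claim.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != claim["hash"]:
            return False
        prev = claim["hash"]
    return True

chain = []
add_claim(chain, "camera", "capture")
add_claim(chain, "edit-suite", "trim")
```

This is why an NLE that strips metadata breaks the chain: once a link is missing, downstream systems can no longer verify where the footage came from.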

‘A Wildly Human Process’

To improve accessibility of content for its journalists and producers, Hearst Television began digitizing the archives across its stations in 2021. To date it has digitized about 20%-25% of its archive material, representing roughly 45,000 hours of video.

“We parachute into a couple of stations at a time and help them digitize their archives in a systematic way,” said Devon Armijo, director, digital news integration for Hearst Television. “We bring in archival staff that handles not only the physical media but also the paper data associated with it. Not only do we focus on digitization, but they are also not just tagging. They are looking at it in a discovery way, making sure they’re telling us about the editorial opportunities, the promotional opportunities and sometimes the sales opportunities that are there in the archives — things that are sealed in the tapes that folks may or may not know they have.”

While Hearst makes some use of automation, Armijo said that digitization remains “a wildly human process,” particularly when dealing with physical media that is beyond its end of life, such as 40- to 50-year-old tapes. That is where Hearst’s archivists serve as “the first line of defense.”

“They’re putting tapes through on a daily basis and making so many human decisions, up front at the beginning of digitization, that helps you with any sort of automation that rolls through afterwards,” Armijo said. “We had some automation processes throughout, like black [frame] detection. But that stuff is all secondary to the human decisions, the conversations, and understanding the history of not only the station but the content that’s there in your archive.”

Hearst licenses archive content to third parties, Armijo said, but the group itself remains “our first customer.” So far this year, Hearst has used its archive to produce over 370 pieces of digital original content along with a handful of linear specials and some local streaming content, including the popular true crime series Hometown Tragedy.

Fox is digitizing the archives across its station group as well as Fox News and Fox Business and bringing them into cloud storage. It has taken a different approach than Hearst by outsourcing the work, which encompasses tens of thousands of U-matic, one-inch and two-inch tapes, 16mm and 35mm film and various digital tape formats.

“We have tractor trailers come and pick up the entire library and it goes off to one of our five digitizing vendors, and then it works through their process,” said Ben Ramos, VP, Fox Archive, field and emerging tech, Fox News. “They have around 35 metadata enhancers who watch every frame of it, and kind of tag it as they’re going through it. It’s very manual, we haven’t gotten to too many AI/ML tools yet.”

Fox’s first goal was to preserve “at-risk” content like one-inch, two-inch and U-matic libraries, with the second objective being to generate ROI by licensing content to third-party documentary filmmakers. The initial effort was aimed at 5,000 U-matic tapes.

“What do we have in there, what’s the failure rate, and can we find ROI?” Ramos said. “We found ROI within six months, so that kind of supercharged the process, and then we got to do the rest of the 70,000 U-matic, two-inch and one-inch, and then we started dipping into the more expensive 16mm.”

Fox has experienced a failure rate of 3%-5% on that older content, and those impaired assets are now sitting on two pallets “awaiting further remediation,” Ramos said. That could involve baking them for several weeks to remove moisture, or even cracking tapes open to clean them and rehouse them.

Overall, it is a slow process, and so far, Fox has only digitized about 8% or 9% of its total physical media assets. One of the surprising findings is that newer formats like Beta, DV and DVCPRO tapes are also experiencing similar 3%-5% failure rates during the digitization process, and some of the older one-inch and U-matic tapes are actually playing better depending on how and where they were stored.

“Now everything feels a little bit at risk,” Ramos said.

Finding Answers With AI, ML

Sinclair was early to archive some of its content in the public cloud, and last year struck a deal with producer Anthony Zuiker to mine its news archives for original content that can be licensed to third parties. The group has around 23 million assets that were “born digital,” Palmer said, meaning they have been archived from a newsroom computer system with a script attached. Those assets have accurate metadata, making the content searchable and accessible across the entire enterprise. Sinclair also has roughly 10 million more assets sitting on shelves on varied physical media.

“The question at this point is what do we want to invest in to bring this back?” Palmer said. “We look at news content, and it’s a fact that most news content has no value in the archive. It is the rare jewel that justifies the expense of all the rest of the work that you put into that. So, we’re focused right now in trying to determine, to the best of our knowledge, which portions of the archive have the highest probability for containing those jewels, and then go mining in that direction. And we may not — I say may, because there are no hard decisions at this point — but we may not want to go back to those 10 million assets and actually digitize them all. It depends on what we find.”

Sinclair has worked with archiving vendor Memnon to digitize cutsheets and tape labels on stored media at a few stations. It plans to use AI tools like optical character recognition (OCR) to analyze them and hopefully generate good descriptions that it can then use to determine what is worth digitizing.
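Once OCR has turned a cutsheet or tape label into text, even a simple parsing pass can pull out searchable fields to help triage what is worth digitizing. A minimal sketch of that post-OCR step, with an invented label format for illustration:

```python
import re

def parse_label(ocr_text):
    """Pull a date and a description out of OCR'd tape-label text.
    Assumes U.S.-style MM/DD/YY or MM/DD/YYYY dates; everything
    else on the label is kept as the slug."""
    match = re.search(r"\b(\d{1,2})/(\d{1,2})/(\d{2,4})\b", ocr_text)
    date = None
    if match:
        month, day, year = (int(g) for g in match.groups())
        if year < 100:      # expand two-digit years, common on old labels
            year += 1900
        date = f"{year:04d}-{month:02d}-{day:02d}"
    remainder = ocr_text.replace(match.group(0), "") if match else ocr_text
    slug = re.sub(r"\s+", " ", remainder).strip()
    return {"date": date, "slug": slug}

rec = parse_label("ELECTION NIGHT  11/3/87  REEL 2")
```

A real pipeline would feed the OCR output of thousands of labels through rules like this to rank which shelves most likely hold the “jewels” Palmer describes.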

Fox Sports has spent several years on its own complex archive project with Google to create a system that allows producers to quickly call up old footage, such as to enhance a halftime package. Ramos said he has been given access to it and has been “playing with it for about six months.” The system uses two kinds of metadata: metadata created by human loggers, and metadata created by the same ML algorithms that form the basis of YouTube search. A user can search by either type.

“It’s definitely working,” Ramos said. “It’s a massive, massive archive, it’s huge. They’ve got a lot of content in there, so it would be really hard to search otherwise.”

Ramos’ own budget for AI/ML tools is more modest, so his team has focused on the least expensive AI tools, speech-to-text and OCR, and runs content through them itself.

“Usually when there’s an anchor or a reporter talking about something, it relates to the video that’s covering that,” Ramos said. “So that’s been a really good way for us to inexpensively find most of what we need. But it’s not 100% of the way there.”
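The workflow Ramos describes, using what a reporter says on air to index the video underneath it, comes down to searching timestamped speech-to-text output. A minimal sketch, with invented transcript data in the common (start, end, text) segment shape:

```python
def find_mentions(segments, keyword):
    """Return start timecodes (seconds) of transcript segments
    that mention the keyword. 'segments' stands in for typical
    speech-to-text output; the data below is invented."""
    keyword = keyword.lower()
    return [start for start, _end, text in segments
            if keyword in text.lower()]

segments = [
    (0.0, 4.2, "Good evening, polls have just closed"),
    (4.2, 9.8, "and early election returns are coming in"),
    (9.8, 15.0, "we go live to the county courthouse"),
]
hits = find_mentions(segments, "election")
```

As Ramos notes, this gets you “most of what we need” cheaply, but only where the narration actually matches the pictures; B-roll with no matching voiceover stays invisible to it.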

Finding Affordability

French company Newsbridge wants to make indexing and searching archive content more affordable. The company has developed a cloud-based AI engine called MXT-1 that can quickly sift through archive video and generate human-like descriptions, and do it more affordably than conventional AI systems, said Newsbridge CEO Philippe Petitpont. Its indexing technology can also be applied to ingesting live content.

“With 1,000 hours of archive, there might be three hours that are hidden gems that have a lot of value,” Petitpont said. “So, you need to analyze 1,000 hours but there are maybe only three or four that are relevant. The problem is that current AI, monomodal indexing technology is very expensive. You don’t want to spend $10 million to index something that might be valuable for just two or three hours. So, we took this problem and have been working on it for a few years. We need AI with video understanding that is able to be very efficient, so that it can meet business realities in terms of pricing.”

Petitpont said a key differentiator for Newsbridge’s AI is that it is multimodal, meaning it doesn’t just analyze speech or recognize text but considers multiple types of data within video, as a human would. And instead of analyzing every individual frame of video, MXT-1 employs “smart subsampling” and looks only at a few key relevant frames. This cuts down on the use of expensive graphics processing units (GPUs) in public cloud compute and avoids wasting money by “overindexing” content.

“We only process a frame that will really best illustrate the content,” Petitpont said. “So then we’ve reduced by an order of magnitude a lot of traditional sampling.”
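The subsampling idea is straightforward: only send a frame to the expensive model when it differs enough from the last frame you kept. The sketch below is a generic illustration of that selection logic, not Newsbridge’s actual algorithm; frames are represented by bare numeric “signatures” (standing in for something like a perceptual hash or mean luminance):

```python
def subsample(signatures, threshold):
    """Keep only frames whose signature differs enough from the
    last kept frame's. Returns the indices of kept frames."""
    kept = []
    last = None
    for idx, sig in enumerate(signatures):
        if last is None or abs(sig - last) > threshold:
            kept.append(idx)
            last = sig
    return kept

# A static shot (similar signatures) followed by two scene cuts.
signatures = [10, 11, 10, 12, 55, 54, 56, 90]
keyframes = subsample(signatures, threshold=20)
```

On this toy input only three of eight frames survive, which is the order-of-magnitude reduction in GPU work Petitpont is pointing at.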

Sinclair is not currently a Newsbridge customer, but Palmer said that when he spoke with the company he was impressed by its smart subsampling approach. Newsbridge had evidently arrived earlier at the same conclusion his team at Sinclair had reached.

“That was, that you don’t need to look at every frame of video,” Palmer said. “You don’t need to do some of these massive tagging things for every frame of video. Some of these AI models will create pages and pages of metadata for each frame of video, and that is not appropriate for news. Less in some cases, and probably this case, is better.”

Read more coverage of NewsTECHForum 2023 here. Watch this session and all the NewsTECHForum 2023 videos here.
