Friday, 21 October 2016

Digital Asset Management explained

Social tagging is architected so that an asset is not simply posted into the database and then read back out to consumers. Rather, the asset's metadata continues to grow through repeated annotation by end users. In effect, the viewers become authors: they alter the content and add value for subsequent viewers, forming a feedback loop of metadata annotation. An interesting example of this phenomenon is users who translate Web media and create subtitles that are then made available via a Flash player.
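The feedback loop above can be sketched in a few lines. This is a minimal illustration, not a real DAM API; the `Asset` class, tag names, and user names are all hypothetical.

```python
from collections import Counter

class Asset:
    """A stored asset whose metadata grows as viewers annotate it."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.tags = Counter()  # social tags accumulate over time

    def annotate(self, user, tag):
        # Each viewer's tag enriches the metadata for subsequent viewers.
        self.tags[tag] += 1

    def top_tags(self, n=3):
        # The most frequently applied tags surface first.
        return [tag for tag, _ in self.tags.most_common(n)]

video = Asset("clip-001")
video.annotate("alice", "tutorial")
video.annotate("bob", "tutorial")
video.annotate("carol", "python")
print(video.top_tags())  # -> ['tutorial', 'python']
```

Here the asset record is written once, but every subsequent annotation rewrites its metadata, which is exactly the read-annotate-rewrite loop the paragraph describes.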

One may even consider the act of viewing an asset as a source of additional metadata about it. Systems can log the number of views, fast-forwards, and so on for each asset. Similarly, we may consider systems where the annotators are not casual Web users seeking entertainment, but professional annotators logging content, perhaps with specific business purposes in mind.
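Logging playback events as implicit metadata might look like the following sketch; the event names and `PlaybackLog` class are illustrative assumptions, not part of any particular system.

```python
from collections import Counter, defaultdict

class PlaybackLog:
    """Records implicit metadata derived from viewing behaviour."""

    def __init__(self):
        # asset_id -> {event name: count}
        self.events = defaultdict(Counter)

    def record(self, asset_id, event):
        self.events[asset_id][event] += 1

    def stats(self, asset_id):
        # Aggregate counts become additional metadata for the asset.
        return dict(self.events[asset_id])

log = PlaybackLog()
log.record("clip-001", "view")
log.record("clip-001", "view")
log.record("clip-001", "fast_forward")
print(log.stats("clip-001"))  # -> {'view': 2, 'fast_forward': 1}
```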

A similar architecture applies here: the asset is entered directly into the database, and the metadata is added later. For these systems, content management system (CMS) architectures may be employed, in which a bus connects various Web-service-enabled components and workflows are defined for a range of applications.
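A minimal sketch of the bus-and-workflow idea follows, with in-process handlers standing in for Web-service-enabled components. The `Bus` class and topic names are hypothetical, not a real CMS API.

```python
class Bus:
    """A toy message bus: components subscribe to topics."""

    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.handlers.get(topic, []):
            handler(message)

bus = Bus()
record = {"asset_id": "clip-001"}

# A workflow is a chain of components wired together via topics:
# ingest produces a thumbnail, then hands off to an indexing step.
bus.subscribe("ingested", lambda m: m.update(thumbnail="thumb.jpg"))
bus.subscribe("ingested", lambda m: bus.publish("enriched", m))
bus.subscribe("enriched", lambda m: m.update(indexed=True))

bus.publish("ingested", record)
print(record)  # metadata accumulated by each component in turn
```

Because components only see the bus, new processing steps can be added by subscribing to a topic, without modifying the components already in place.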

The terms Digital Asset Management (DAM) and Media Asset Management (MAM) are also used to indicate a more specific type of CMS. Additional automated post-processing operations, such as importing transcripts or subtitles that may not be available at the time of ingest, can be implemented as independent execution threads: each reads an asset, processes it to create additional metadata, and rewrites the asset record with the result. This decoupled processing is used for cases where real-time processing is not practical given existing hardware resources, and where content is not arriving at a continuous rate.
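The read-process-rewrite pattern can be sketched with a worker thread pulling asset IDs from a queue. The in-memory `assets` store and the placeholder transcript are assumptions for illustration; a real system would read and write database records.

```python
import queue
import threading

# Toy asset store: the transcript is missing at ingest time.
assets = {"a1": {"title": "Keynote", "transcript": None}}
work = queue.Queue()

def worker():
    """Decoupled post-processing: runs independently of ingest."""
    while True:
        asset_id = work.get()
        if asset_id is None:  # sentinel: shut the worker down
            break
        # Read the record, derive extra metadata, rewrite the record.
        assets[asset_id]["transcript"] = f"transcript for {asset_id}"
        work.task_done()

t = threading.Thread(target=worker)
t.start()

work.put("a1")   # ingest completed earlier; enrichment happens later
work.join()      # wait until the backlog is drained
work.put(None)
t.join()

print(assets["a1"]["transcript"])
```

Because the worker drains a queue at its own pace, bursts of incoming content simply lengthen the backlog rather than requiring real-time processing capacity.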

The range of options for metadata extraction operations on media is vast and still growing as new media analysis algorithms are developed. These will be discussed in detail in later chapters, but they typically involve demultiplexing streams, decoding, and performing computationally intensive operations. Also, in any system that handles video, data bandwidth is a concern, and careful system design is required to minimize data access and transport throughout the system. Unfortunately, it is challenging to design a flexible system that can operate at scale to support comprehensive video search. As a result, many search engines today deal solely with high-level metadata, and the extent of their media processing is limited to transcoding and representative image selection.

About Emma G.

Working in the marketing industry since 2002. This blog is one of my hobbies.
