
What Is Temporal Noise Reduction?

Source: http://gizmodo.com/5891352/what-is-temporal-noise-reduction

One of the new iPad’s video features—along with 1080p recording and video stabilization—is temporal noise reduction. Apple claims it will improve the quality of footage in low-light conditions. OK, but what the hell is it?

It’s a clever technique…

There’s no getting around this: temporal noise reduction is tough to explain. That’s because it’s a complex process used to improve image and video rendering. This is very much a simplified explanation of what happens.

…that greatly reduces the noise of video…

When you record footage in low-light conditions, the resulting images are often noisy—speckled with grain that looks like a staticky TV screen. Why? Because there’s just not enough light hitting the sensor. In bright conditions, all the light provides a huge signal; noise—from electrical interference or imperfections in the detector—is still present, but it’s drowned out. In low light, the signals are much smaller, which means the noise is painfully apparent.
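To put rough numbers on that, here is a minimal sketch in Python; the photon counts and the noise level are invented for illustration, not measurements of any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)
sensor_noise = 10.0  # hypothetical fixed noise floor, the same in any lighting

for name, signal in [("bright scene", 10_000.0), ("low light", 50.0)]:
    reading = signal + rng.normal(0.0, sensor_noise)
    print(f"{name}: true signal {signal:>7.0f}, sensor reads {reading:>8.1f}, "
          f"signal-to-noise ratio about {signal / sensor_noise:.0f}:1")
```

With the same noise floor, the bright scene sits around 1000:1 while the low-light scene sits around 5:1, which is why the speckling becomes visible.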

…by comparing what pixels actually move…

So, on to temporal noise reduction itself. Basically, it exploits the fact that video offers two pools of data to work with: each individual frame, and the knowledge of how those frames change over time. Using that information, it’s possible to create an algorithm that works out which pixels have changed between frames. But it’s also possible to work out which pixels are expected to change between frames. For instance, if a car is moving from left to right across the frame, the software can work out that pixels to the right should change dramatically.
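Here is a minimal sketch of the first half of that idea, comparing two consecutive frames pixel by pixel. The threshold value is arbitrary, and real pipelines also add motion estimation to predict where change is expected:

```python
import numpy as np

def changed_pixels(prev_frame: np.ndarray, curr_frame: np.ndarray,
                   threshold: float = 12.0) -> np.ndarray:
    """Return a boolean mask marking pixels whose brightness changed
    noticeably between two consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return diff > threshold  # True where real motion (or strong noise) shows up
```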

…and guessing what is noise and what is actual detail…

By comparing what is expected to change between frames, and what actually does, it’s possible to make a very good educated guess as to which pixels are noisy and which aren’t. Then, the pixels that are deemed noisy can have a new value calculated for them based on their surrounding brothers.
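Apple hasn’t published the iPad’s exact method, but a common textbook version of this idea is a motion-adaptive temporal filter: pixels judged static get averaged with the previous, already-cleaned frame, while pixels judged to be genuinely moving are taken from the new frame so real detail isn’t smeared. A rough sketch, building on the changed_pixels helper above (the blend factor is an arbitrary choice):

```python
import numpy as np

def temporal_denoise(prev_clean: np.ndarray, curr_frame: np.ndarray,
                     threshold: float = 12.0, blend: float = 0.6) -> np.ndarray:
    """One step of a simple recursive temporal noise filter."""
    curr = curr_frame.astype(np.float32)
    prev = prev_clean.astype(np.float32)
    moving = changed_pixels(prev_clean, curr_frame, threshold)

    # Static pixels: frame-to-frame wobble is probably noise, so lean on the
    # history. Moving pixels: trust the new frame so motion isn't ghosted.
    return np.where(moving, curr, blend * prev + (1.0 - blend) * curr)
```

Run once per incoming frame, feeding each output back in as prev_clean, the filter quietly averages noise away in the parts of the picture that aren’t moving.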

…to make low-light video super-sharp.

So, the process manages to sneakily use data present in the video stream to attenuate the effects of noise and improve the image. It’s something that’s been used in 3D rendering for years, but it requires a fair amount of computational grunt. Clearly, the new iPad can handle that—and as a result, we’ll be fortunate enough to have better low-light video.


Thursday, March 8th, 2012

How Google Crunches All That Data

Source: http://gizmodo.com/5495097/how-google-crunches-all-that-data

If data centers are the brains of an information company, then Google is one of the brainiest there is. Though always evolving, it is, fundamentally, in the business of knowing everything. Here are some of the ways it stays sharp.

For tackling massive amounts of data, the main weapon in Google’s arsenal is MapReduce, a system developed by the company itself. Whereas other frameworks require a thoroughly tagged and rigorously organized database, MapReduce breaks the process down into simple steps, allowing it to deal with any type of data, which it distributes across a legion of machines.

Looking at MapReduce in 2008, Wired imagined the task of determining word frequency in Google Books. As its name would suggest, the MapReduce magic comes from two main steps: mapping and reducing.

The first of these, the mapping, is where MapReduce is unique. A master computer evaluates the request and then divvies it up into smaller, more manageable “sub-problems,” which are assigned to other computers. These sub-problems, in turn, may be divided up even further, depending on the complexity of the data set. In our example, the entirety of Google Books would be split, say, by author (but more likely by the order in which they were scanned, or something like that) and distributed to the worker computers.
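A toy, single-machine sketch of that map step for the word-frequency example (real MapReduce farms this out to many workers in parallel; the sample “book chunks” here are made up):

```python
from collections import Counter

def map_words(book_chunk: str) -> Counter:
    """Map step: one worker tallies the words in the slice of books it was given."""
    return Counter(book_chunk.lower().split())

# Pretend each string is the slice of the library handed to one worker.
chunk_counts = [map_words(chunk) for chunk in [
    "call me ishmael",
    "it was the best of times it was the worst of times",
]]
```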

Then the data is saved. To maximize efficiency, it remains on the worker computers’ local hard drives, as opposed to being sent, the whole petabyte-scale mess of it, back to some central location. Then comes the second key step: reduction. Other worker machines are assigned specifically to the task of grabbing the data from the computers that crunched it and paring it down to a format suitable for solving the problem at hand. In the Google Books example, this second set of machines would reduce and compile the processed data into lists of individual words and the frequency with which they appeared across Google’s digital library.
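And a matching toy version of the reduce step, merging each worker’s partial tallies into one master word-frequency table (the sample counts are invented):

```python
from collections import Counter

def reduce_counts(partial_counts) -> Counter:
    """Reduce step: fold every worker's partial tally into one combined count."""
    total = Counter()
    for counts in partial_counts:
        total += counts
    return total

# Pretend these partial tallies came back from two map workers.
worker_a = Counter({"whale": 3, "sea": 1})
worker_b = Counter({"sea": 2, "ship": 4})
word_frequency = reduce_counts([worker_a, worker_b])
print(word_frequency["sea"])  # 3 appearances across both workers' chunks
```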

The finished product of the MapReduce system is, as Wired says, a “data set about your data,” one that has been crafted specifically to answer the initial question. In this case, the new data set would let you query any word and see how often it appeared in Google Books.

MapReduce is one way in which Google manipulates its massive amounts of data, sorting and resorting it into different sets that reveal new meanings and have unique uses. But another Herculean task Google faces is dealing with data that’s not already on its machines. It’s one of the most daunting data sets of all: the internet.

Last month, Wired got a rare look at the “algorithm that rules the web,” and the gist of it is that there is no single, set algorithm. Rather, Google rules the internet by constantly refining its search technologies, charting new territories like social media and refining the ones in which users tread most often with personalized searches.

But of course it’s not just about matching the terms people search for to the web sites that contain them. Amit Singhal, a Google Search guru, explains: “You are not matching words; you are actually trying to match meaning.”

Words are a finite data set. And you don’t need an entire data center to store them—a dictionary does just fine. But meaning is perhaps the most profound data set humanity has ever produced, and it’s one we’re charged with managing every day. Our own mental MapReduce probes for intent and scans for context, informing how we respond to the world around us.

In a sense, Google’s memory may be better than any one individual’s, and complex frameworks like MapReduce ensure that it will only continue to outpace us in that respect. But in terms of the capacity to process meaning, in all of its nuance, any one person could outperform all the machines in the Googleplex. For now, anyway. [Wired, Wikipedia, and Wired]


Memory [Forever] is our week-long consideration of what it really means when our memories, encoded in bits, flow in a million directions, and might truly live forever.


Wednesday, March 17th, 2010
