Last week, Verizon filed a patent for a set-top box that detects what you’re doing while you watch TV, and serves you advertising accordingly. Ew, weird, companies watching what I do while I consume content. Big brother! Chill, son.
“Methods and Systems for Presenting an Advertisement Associated with an Ambient Action of a User” describes a system by which a device captures information about what you’re doing while enjoying TV, movies, etc., and uses it to target advertising to you. Using “a depth sensor, an image sensor, an audio sensor, and a thermal sensor,” the system would be able to detect whether you’re fiddling with your phone or interacting with another person, as well as whether you’re doing any of the following:
eating, exercising, laughing, reading, sleeping, talking, singing, humming, cleaning, and playing a musical instrument.
Now, this might seem kind of creepy, but there are a few important points to remember before you freak out and sound the privacy alarm. First, companies like Facebook, Google, etc., are already capturing all sorts of information about what you’re consuming online and using it to serve you targeted advertising. Second, any system like this would almost certainly require you to opt in before peeking into your life. Besides, how many of these patents actually turn into products, anyway? [USPTO via Ars Technica via Betabeat]
This great graph, taken using a wearable sensor, shows a student’s emotional, physical, and mental arousal throughout each day of a week.
The device measures what’s called Electrodermal Activity, a gauge of the activity of the sympathetic nervous system, best known for controlling the fight-or-flight response. It is activated by emotional arousal, increased cognitive workload, or physical exertion.
Spikes pop up during lab work, exams, studying, and sleep, but what’s stunning is how low activity levels were during this student’s classes. They must have been super boring.
It makes sense that Nikon’s trotting out a Wi-Fi connected camera just like everybody else. For the people who replaced a real camera with a smartphone camera, taking pictures and posting them online are one and the same activity. But the Coolpix S800c runs Android 2.3 and has 4 gigs of storage for apps. That’s weird! Is Nikon genius for adopting an open OS standard? Or are we so desperate for Wi-Fi that we’ve resorted to Android to get our cameras online?
Without Android and Wi-Fi, the Coolpix S800c is about as boring a point-and-shoot as any: it has a 16-megapixel, backside-illuminated CMOS sensor, a 10x optical zoom, built-in GPS, touchscreen controls, and it shoots 1080p video, all for $350. Give or take a spec, a dimension, or a couple of bucks, and it could easily pass for one of its Wi-Fi brethren, like the Samsung MV900F or the Canon 530HS.
Except for one important difference: the connected features on those Wi-Fi cameras are so poorly designed that they’re virtually unusable. And while there are hints that things might be getting better, the problem hasn’t changed. As of right now, Sony, Panasonic, Samsung, and Canon all have their own Wi-Fi interfaces that connect to an assortment of proprietary smartphone apps and cloud storage systems. We’ve used the cheapest and the priciest, and so far we’ve yet to be impressed. If these cameras are meant to have Wi-Fi, why can’t it be easier? It’s enough to make you wish you’d just plugged your camera into your computer to get the photos off.
You can say whatever you want about the outmoded Android 2.3 OS, but at the very least it works. The interface is immediately understandable to anyone who has ever used a smartphone. Maybe more importantly, by putting Android on the camera, you can suddenly load the camera up with photo-specific Android apps. Finally, Instagram on your camera. Wait, is that cheating? And hey, maybe developers will get creative and develop something new with connected cameras in mind.
Still, Android on a camera doesn’t solve every problem, and in a way it’s more reflective of existing failures than anything. Android doesn’t suddenly make your camera a phone, and you still need an Internet connection to post photos online.
In the end, maybe what we really need is a seamless way to dump photos onto a phone—what you do from there is up to you. In fact, in testing Wi-Fi cameras across the board, that seems to be the only feature everyone can agree on. Now it’s just a question of nailing it down. We’ll reserve judgment on the latest crop of Wi-Fi cams—including this bizarre Android thing—until they’re available this fall. [Nikon USA]
Marketers already know way too much about us thanks to online tracking, and now they have yet another powerful way to understand consumers.
MIT startup Affectiva has created a webcam that codes facial expressions and a sensor that measures changes in body temperature. Both could be a huge way for brands to streamline the market research process.
Liz Gannes over at All Things Digital reports that MIT professor Rosalind Picard and research scientist Rana el Kaliouby initially created the technology to “help children with autism understand facial expressions,” but now marketing research companies like WPP’s Millward Brown and IPG Media Lab primarily use the products. Affectiva just raised $12 million in Series C funding from Kleiner Perkins and Horizon Ventures.
Kaliouby told Gannes that “we have the largest repository of facial responses ever collected in the world,” which is part of its webcam product, the Affdex dashboard.
According to the company site, the “dashboard provides overall emotion scores and real-time, scene-by-scene playback of facial data.” It can also compare differences in emotional and facial responses between men and women, and between people of different races.
Affectiva’s other product is the Q Sensor, which measures skin conductance — or, in other words, how sweat gland activity and skin temperature change over time.
One of the new iPad’s video features—along with 1080p recording and video stabilization—is temporal noise reduction. Apple claims it will improve the quality of footage in low-light conditions. OK, but what the hell is it?
It’s a clever technique…
There’s no getting around this: temporal noise reduction is tough to explain. That’s because it’s a complex process used to improve image and video rendering. This is very much a simplified explanation of what happens.
…that greatly reduces the noise of video…
When you record footage in low-light conditions, the resulting images are often noisy—speckled with pixelation that looks like a staticky TV screen. Why? Because there’s just not enough light hitting the sensor. In bright conditions, all the light provides a huge signal; noise—from electrical interference or imperfections in the detector—is still present, but it’s drowned out. In low light, the signals are much smaller which means that the noise is painfully apparent.
…by comparing what pixels actually move…
So, onto temporal noise reduction itself. Basically, it exploits the fact that with video there are two pools of data to use: each separate image, and the knowledge of how the frames change with time. Using that information, it’s possible to create an algorithm that can work out which pixels have changed between frames. But it’s also possible to work out which pixels are expected to change between frames. For instance, if a car’s moving from left to right in a frame, software can soon work out that pixels to the right should change dramatically.
…and guessing what is noise and what is actual detail…
By comparing what is expected to change between frames, and what actually does, it’s possible to make a very good educated guess as to which pixels are noisy and which aren’t. Then, the pixels that are deemed noisy can have a new value calculated for them based on their surrounding brothers.
…to make low-light video super-sharp.
So, the process manages to sneakily use data present in the video stream to attenuate the effects of noise and improve the image. It’s something that’s been used in 3D rendering for years, but it requires a fair amount of computational grunt. Clearly, the new iPad can handle that—and as a result, we’ll be fortunate enough to have better low-light video.
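The steps above (compare frames, flag which pixels probably moved, and average away the frame-to-frame jitter in the rest) can be sketched as a naive per-pixel running average. To be clear, this is only an illustration of the general idea, not Apple's actual pipeline; the `motion_threshold` and `blend` values, and the running-average approach itself, are assumptions chosen for clarity.

```python
import numpy as np

def temporal_denoise(frames, motion_threshold=12.0, blend=0.6):
    """Naive temporal noise reduction over a stack of grayscale frames.

    Pixels that change little from one frame to the next are assumed
    static, so their variation is treated as noise and blended toward a
    running average; pixels that change a lot are assumed to be real
    motion and are taken straight from the newest frame, preserving
    detail. Both tuning values are illustrative guesses.
    """
    frames = np.asarray(frames, dtype=np.float64)
    out = frames[0].copy()
    for frame in frames[1:]:
        diff = np.abs(frame - out)
        static = diff < motion_threshold  # small change: probably noise
        # Average static pixels (denoise), pass moving pixels through.
        out = np.where(static, blend * out + (1.0 - blend) * frame, frame)
    return out
```

Real implementations are far more sophisticated (motion estimation rather than a fixed threshold, and spatial filtering mixed in), but the principle is the same: the more frames a static pixel accumulates, the further its noise is averaged down.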
Dr. Augustine Fou is Digital Consigliere to marketing executives, advising them on digital strategy and Unified Marketing(tm). Dr. Fou has over 17 years of in-the-trenches, hands-on experience, which enables him to provide objective, in-depth assessments of their current marketing programs and recommendations for improving business impact and ROI using digital insights.