ECE516 Lab05: Meta-sensing (sensing of sensing)

In this lab you will learn the fundamentals of meta-sensing.

Meta-sensing (also known as metaveillance) is the sensing of sensing (veillance of veillance).

"Meta" is a Greek word that means "beyond". For example, a meta-conversation is a conversation about conversations. A meta-argument is an argument about arguments. Metadata is data about data. An example of meta-data is the GPS coordinates where a photograph was taken. This meta-data is appended into the header information of the picture.

An example of meta-sensing is metavision.

Metavision is the vision of vision, i.e. seeing sight, and visualizing vision.

Industrial and commercial importance of metaveillance

There exist well-defined standards for products such as lighting equipment, but there is a need for similarly well-defined standards and certification for sensors, e.g. to guarantee a minimum sensory capacity for insurance purposes, and to verify the efficacy of sensors in industries such as autonomous vehicles.

This year (2020), we have secured $200,000 in funding to hire students to work on meta-sensing for autonomous vehicles (funded by Ford Motor Company of Canada). We have positions in Toronto, Silicon Valley, Shenzhen, and Xiamen for people skilled in the art of metasensing.

Metavision makes it possible to photograph the sensory capacity of a self-driving car, and to provide photographic evidence (to a courtroom, judge, jury, etc.) that the car's sensors were in good working order when, for example, it left the assembly plant.

Metavision is the name of the AR (Augmented Reality) company that my PhD student Raymond Lo, I, and others founded in Silicon Valley, California (we raised $75,000,000 US) to manufacture computerized eyeglasses.

Raymond Lo, the "brains" behind the company, is now at Harvard University, and is visiting Toronto for the next two months.

Background reading:

Instructable: Phenomenological Augmented Reality

The kinds of things all of us really like are things that are extremely simple yet profoundly deep conceptually. Linking the previous labs, we connect the 1-pixel camera to the 1-pixel display, giving rise to a simple example of meta-sensory 3D AR (Augmented Reality): an overlay of virtual information in perfect and exact alignment with some form of physical reality. In a sense you now have a true and accurate scientific "outstrument™", i.e. a scientific instrument that has a "readout" that reads "out" into the real world.
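To make the idea concrete, here is a minimal sketch in Python of such a feedback loop (not the Instructable's exact circuit): a webcam stands in for the 1-pixel camera by reducing each frame to a single brightness value, and an LED driven over a serial link stands in for the 1-pixel display. The serial port, baud rate, threshold, and one-byte brightness protocol are all assumptions to be adapted to your own hardware.

    # Minimal 1-pixel-camera -> 1-pixel-display feedback sketch.
    # Assumes a webcam readable by OpenCV, and a microcontroller on
    # /dev/ttyUSB0 that interprets one byte (0-255) as LED brightness.
    import cv2
    import serial

    PORT = "/dev/ttyUSB0"   # assumed serial port of the LED driver
    THRESHOLD = 200         # assumed brightness that counts as "seen"

    cam = cv2.VideoCapture(0)
    led = serial.Serial(PORT, 9600)

    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Collapse the whole frame to one "pixel": its peak brightness.
            seen = gray.max() > THRESHOLD
            # Close the loop: the light glows when sensed, dims when not.
            led.write(bytes([255 if seen else 32]))
    finally:
        cam.release()
        led.close()

Waving the light source through the scene while this loop runs is exactly what makes the camera's capacity to see visible: the source glows wherever the camera can see it, and dims wherever it cannot.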

Grading: Here is the marking scheme:

• Complete the Instructable and get it working with feedback (e.g. you present a light source that glows distinctly more brightly when it is in view of the camera's sensor, and less brightly when it is outside the camera's field of view): 4 marks;
• Metaveillography (long-exposure photograph of the sensor in a camera, showing the metaveillogrammetric response of the camera; a sketch of a digital long-exposure equivalent follows this list): 4 marks;
• Answer a simple question or otherwise demonstrate knowledge of this work and its purpose: 2 marks.
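If a film-style long exposure is inconvenient, the metaveillograph can also be composited digitally. The sketch below, assuming an OpenCV-readable camera, accumulates the per-pixel maximum over video frames while the feedback-driven light source is waved through the scene; the camera index, window name, and output filename are placeholders.

    # Sketch: composite a "long exposure" metaveillograph digitally by
    # taking the per-pixel maximum over many frames while the
    # feedback-driven light wand is waved through the scene.
    import cv2
    import numpy as np

    cam = cv2.VideoCapture(0)   # assumed camera index
    exposure = None

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        # Keep the brightest value seen at each pixel, as film would.
        exposure = frame if exposure is None else np.maximum(exposure, frame)
        cv2.imshow("metaveillograph", exposure)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to finish
            break

    if exposure is not None:
        cv2.imwrite("metaveillograph.png", exposure)
    cam.release()
    cv2.destroyAllWindows()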

Examples from previous years

Here are four examples of metaveillographs that students took and posted in a previous year:

Bonus marks:

For additional bonus marks (and a possible mark greater than 10/10), try constructing an automated metavision system, such as a robotic mechanism, track, or rail, etc., as shown below (multiple-exposure photograph of a smart-city streetlight's ability to sense):
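One possible shape for such an automation, sketched in Python under the assumption of a two-axis positioner and some measurement channel for the subject sensor's response (move_light_to and read_sensor_response are hypothetical stand-ins, since the actual controller and readout depend on your build):

    # Sketch of an automated metavision scan: step a test light through
    # a grid of positions, record the subject sensor's response at each
    # position, and render the result as a 2D metaveillance map.
    import numpy as np
    import matplotlib.pyplot as plt

    def move_light_to(x, y):
        """Placeholder: command the rail/robot to position (x, y)."""

    def read_sensor_response():
        """Placeholder: return the subject sensor's response, 0.0 to 1.0."""
        return 0.0

    XS, YS = 40, 30   # grid resolution (assumed)
    veillance = np.zeros((YS, XS))

    for j in range(YS):
        for i in range(XS):
            move_light_to(i, j)
            veillance[j, i] = read_sensor_response()

    plt.imshow(veillance, origin="lower", cmap="inferno")
    plt.title("Metaveillance map (sensor response vs. light position)")
    plt.colorbar(label="response")
    plt.savefig("metaveillance_map.png")

Each grid cell then plays the role of one exposure in the multiple-exposure photograph: bright where the streetlight (or other subject sensor) senses the test light, dark where it does not.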

See also Augmented Reality robotics



References:
• Prof. Wang's reference document;
• Kineveillance: look at Figures 4, 5, and 6, and Equations 1 to 10;
• The concept of veillance flux (link);
• Optional reading: Minsky, Kurzweil, and Mann, 2013;
• Optional reading: Humanistic Intelligence (see around Figure 3 of this paper).