here's one possible suggestion: a "Member Evaluation" at the end of the term that allows each member to grade the other members on their performance.
possible ways to overcome the problem of distinguishing popularity from technical capability?
science, scientific literature, competition...
the nature of microprocessors is rapidly changing.
microprocessors were originally designed for calculations (A).
the world's first microprocessor, the intel 4004, was designed for
busicom, a manufacturer of electronic calculators.
in today's era, however, microprocessors are used primarily for
communication (B). for example, you're more likely to use a microprocessor
for preparing a document, placing a phone call, using multimedia services,
watching television, or making a purchase. these are all forms of
communication.
multimedia communication: more than just text, graphics, and sound.
personal imaging is a growing field of research that involves the use of image capture devices.
the simplest image capture device is probably the connectix QuickCam (TM), which is what we'll be learning about in the labs.
there is lots of info on the quickcam; just do a www search or the like.
there's even a www page on how to disassemble a quickcam.
instead of just talking like you do on the telephone, telepresence involves a sense of shared reality, or "being there".
videoconferencing tried to provide telepresence, but failed.
(nearly every commercial videoconferencing offering failed to obtain
widespread acceptance, yet standard voice telephones are ubiquitous.)
just seeing somebody doesn't provide a good sense of collaborative capability.
the goal of this lab is to illustrate the use of microprocessors in modern communications, in particular, telepresence.
you will implement a remote pointing device called "telepoint" (also known as a "telepointer").
basically, you shine a laser pointer at a screen in one location, and it "comes out" in another location that could be hundreds of miles away.
telepoint can be used to replace the mouse and keyboard of a traditional computer, and is much more intuitive to use than a mouse.
the aremac is something you can make from 2 servos and a laser pointer; it scans out the scene kind of like the cathode ray tube of a television does, except that it can point at 3d scenes instead of a flat screen.
in other words, an aremac is like a tv set that displays onto 3d objects instead of a flat screen.
This collaboration takes place between a person standing in the vicinity of aremac 140, and another person, perhaps thousands of miles away, standing in front of a video projector 120.
Objects 210 (e.g. any objects within the field of illumination of aremac 140) scatter light from aremac 140, so that the output of aremac 140 is visible to a person in the vicinity of objects 210.
Objects 210 are also visible at a remote site, by way of a portion of scene light deflected by beamsplitter 220 to camera 136, where an image is recorded and transmitted, typically by a radio transmitter 230, into transmitting antenna 232.
A person, hereafter referred to as ``the photographer'' (without loss of generality, e.g. whether or not the task in which said person is engaged is photography), in or near the scene where objects 210 are located interacts with a remote manager while viewing objects 210.
The signal from camera 136 is sent by way of a radio transmitter, by telephone lines, computer network, or the like, to a remote, possibly distant location, where it is routed to projector 120. Emanating from projector 120 there are rays of light 252 which reach beamsplitter 254 and are partially reflected as rays 256 which are considered wasted light. However, some of the light from projector 120 will pass through beamsplitter 254 and emerge as light rays 258. The projected image thus appears upon screen 260.
A second person, hereafter referred to as the photographer's manager or assistant, without intended loss of generality (e.g. regardless of whether the task to which assistance or guidance is being offered is the task of photography or some other task), can observe the scene 210 on screen 260, and can point to objects in the scene 210 by simply pointing to various parts of the screen 260. Camera 237 can also observe the screen 260, by way of beamsplitter 254, and this image of the photographer's manager or assistant pointing at objects in the scene is transmitted back to aremac 140.

In order to prevent video feedback, there is a polarizer 280 in front of camera 237, oriented to pass light from the manager. Insofar as beamsplitter 254 may or may not fall at exactly Brewster's angle --- the angle of maximum polarization --- a second polarizer 282 is provided in front of screen 260, whereby polarizers 280, 282, along with the angle of beamsplitter 254 (and correspondingly, keeping camera 237 properly oriented), are adjusted to minimize video feedback and maximize the quality of the image from the manager.

The light, 290, emanating from aremac 140, hits beamsplitter 220, and some is lost as waste light 292. The rest of the light, 294, that passes through beamsplitter 220, illuminates the scene 210. Thus photographer 240 sees the image of the manager cast upon objects in the scene 210. Although this image of the manager will appear disjoint in the photographer's direct view of objects 210, the photographer's view of objects 210 as seen by camera 136, projected into display 244, will appear as a coherent view of the manager and gestures such as pointing at particular objects in scene 210. This coherence and continuity of images as seen in display 244 is due to the same principle by which a spotlight operator always sees the circular shape of the spotlight even when projecting onto oblique or disjoint surfaces.

The shared view facilitates collaboration, which is especially effective when combined with a voice communications capability as might be afforded by the use of a wearable hands-free cellular telephone used together with the visual collaboration apparatus. Alternatively, the photographer's portion of the voice communications capability can be built into the head mounted display 244, and share a common data communications link, for example, having voice, video, and data communications routed through a body worn computer system attached to photographer 240 and linked to the system of the manager by way of a wireless data communications network.
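For reference, Brewster's angle theta_B for light passing from a medium of refractive index n1 into one of index n2 satisfies tan(theta_B) = n2/n1; for an air--glass interface (n1 = 1, n2 of about 1.5) this gives theta_B of about 56 degrees, so a beamsplitter at the usual 45 degrees polarizes only partially, and polarizers 280 and 282 make up the difference.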
The video signal output of screen camera 330 is connected to a vision processor (e.g. a 486-based "wearcomp") 340 which simply determines the coordinates of the brightest point in the image seen by camera 330, if there is a dominant brightest point. Camera 330 does not need to be a high quality camera, since it will only be used to see where the laser pointer is pointing. A cheap black and white QuickCam (TM) will suffice for this purpose.
Selection of the brightest pixel will tell us the coordinates, but a better estimate can be made by using vision processor 340 to determine the coordinates of a bright red blob 320 to sub-pixel accuracy. This would help reduce the resolution needed, so that smaller images could be used (which load more quickly over the QuickCam's parallel port).
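As a rough sketch of the sub-pixel estimate (not required code; the function name, the threshold value, and the row-major image layout are illustrative assumptions), an intensity-weighted centroid over the bright pixels might look like this:

    #include <stdio.h>

    /* sketch: intensity-weighted centroid of bright pixels, giving blob
     * coordinates to sub-pixel accuracy.  img holds width*height 8-bit
     * grey values, row major, as read from the qcam's pgm output. */
    static int blob_centroid(const unsigned char *img, int width, int height,
                             unsigned char thresh, double *cx, double *cy)
    {
        double sum = 0.0, sx = 0.0, sy = 0.0;
        int x, y;

        for (y = 0; y < height; y++)
            for (x = 0; x < width; x++) {
                unsigned char v = img[y * width + x];
                if (v >= thresh) {      /* count only bright (laser) pixels */
                    sum += v;
                    sx += (double)x * v;
                    sy += (double)y * v;
                }
            }
        if (sum <= 0.0)
            return 0;                   /* no dominant bright blob found */
        *cx = sx / sum;                 /* fractional column of blob centre */
        *cy = sy / sum;                 /* fractional row of blob centre    */
        return 1;
    }

    int main(void)
    {
        static unsigned char img[120 * 160];    /* 160x120, all black */
        double cx, cy;
        img[60 * 160 + 80] = 200;               /* fake laser dot ...    */
        img[60 * 160 + 81] = 100;               /* ... straddling pixels */
        if (blob_centroid(img, 160, 120, 50, &cx, &cy))
            printf("blob at (%.2f, %.2f)\n", cx, cy);   /* (80.33, 60.00) */
        return 0;
    }

Because each bright pixel's intensity weights its position, the centroid falls between pixel centres, which is what buys the sub-pixel accuracy even on a small (e.g. 160x120) image.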
These coordinates, as signals 350 and 351, are received at the photographer's studio 301 and are fed to a galvo drive mechanism (servo) which controls two galvos (e.g. Futaba (TM) hobby servos, or the like). Coordinate signal 350 drives azimuthal galvo 380 while coordinate signal 351 drives elevational galvo 381. These galvos are calibrated by the galvo drive unit 360 so that aremac laser 370 is directed to form a red dot 321 on the object in the photographer's studio 301 that the manager is pointing at from her office 300. Aremac laser 370, galvo drive 360, and galvos 380 and 381 together comprise a device called an aremac, which may be built into the photographer's camera so that they will be properly calibrated. The aremac may alternatively be housed on the same mounting tripod as the photographer's camera, where the two may be combined by way of a beamsplitter. If it is not practical or desirable to use a beamsplitter, or it is not practical to calibrate the entire apparatus, the manager may use an infrared laser pointer, so that she cannot see the dot formed by her own laser pointer directly. In this case, she will look at the image of the red dot that is captured by the photographer's camera, so that what she sees as dot 320 on screen 315 comes by way of her ability to look through the photographer's camera. Note that in all cases the laser beam in the photographer's studio will be in the visible portion of the spectrum (e.g. red and not infrared). In this way, her very act of pointing will cause her own mind and body to close the feedback loop around any reasonable degree of misalignment or parallax error in the entire system.
this mark is awarded for showing us that you can download (from the internet, or the like), install, and use any suitable quickcam program to take a picture with the quickcam.
your program should accept as input an image from the quickcam, and output two numbers corresponding to the coordinates of the brightest pixel. your output numbers should both be scaled from 0 to 255, so that regardless of the size of the image you select, the numbers you get can be used by one of your previously written device drivers that take an unsigned character input. (image size, brightness, etc., are adjusted by editing /etc/qcam.conf or the like.)
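one possible skeleton, offered only as a sketch: it assumes the raw PGM ("P5") output described in the pre-lab questions below (three ascii header fields, then binary pixel data, with no comment lines) and reads the image from standard input.

    /* sketch: find the brightest pixel in a qcam PGM (P5) image and
     * print its coordinates scaled to 0..255.  assumes the header is
     * "P5", width, height, maxval, with no comment lines. */
    #include <stdio.h>

    int main(void)
    {
        int width, height, maxval;
        int x, y, bx = 0, by = 0, best = -1;

        if (scanf("P5 %d %d %d", &width, &height, &maxval) != 3)
            return 1;
        getchar();                  /* the single whitespace after maxval */

        for (y = 0; y < height; y++)
            for (x = 0; x < width; x++) {
                int v = getchar();  /* one grey value per byte */
                if (v > best) { best = v; bx = x; by = y; }
            }

        /* scale to 0..255 so any image size suits the pwm drivers,
         * which take an unsigned character input */
        printf("%d %d\n", bx * 255 / (width - 1), by * 255 / (height - 1));
        return 0;
    }

you would run it as something like `./brightest < image.pgm'; how you wire it to the live qcam output is up to you.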
you should be able to adjust the brightness of the quickcam image so that it is all black, or nearly so, until a laser pointer is used to shine a point in its field of view, in which case your program should easily detect this bright spot. grading will be done by shining a laser pointer at a point in the camera's field of view and seeing if your program reports the coordinates of the bright red dot made by the laser pointer.
you're welcome to do something creative like put a red filter over the camera to make it more red sensitive, but this will likely not be necessary.
in this way, two servos, one mounted horizontally, the other vertically, with small mirrors attached to each, will serve to deflect a laser beam in horizontal and vertical directions.
you will need to change the duty cycle range of /dev/pwm1 to match that of /dev/pwm0 (e.g. change it from the range 0 to 1 to the range 0.1 to 0.2, since it will be driving a second servo instead of an LED).
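to make the arithmetic concrete, here is a sketch (the function name is made up, and it assumes the driver's unsigned character input maps linearly onto the pwm period; whether the scaling lives in the driver itself or in user space is up to you):

    #include <stdio.h>

    /* sketch of the range change: map an input byte 0..255 onto a servo
     * duty cycle of 0.1..0.2 instead of the LED's full 0..1 range. */
    static double servo_duty(unsigned char coord)
    {
        return 0.1 + 0.1 * (double)coord / 255.0;  /* 0 -> 0.1, 255 -> 0.2 */
    }

    int main(void)
    {
        unsigned char c;
        for (c = 0; ; c += 51) {        /* a few sample coordinate values */
            printf("coord %3d -> duty %.3f\n", c, servo_duty(c));
            if (c == 255)
                break;
        }
        return 0;
    }

with both channels in the 0.1 to 0.2 band, the two output numbers of your brightest-pixel program can be fed straight to /dev/pwm0 (azimuth) and /dev/pwm1 (elevation).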
A suggestion to create incentive for students to do the pre-lab:
================================================================
Create a take-home pre-lab questionnaire due at the beginning of the lab. This 5-question questionnaire is handed out to each student and can only be answered by reading and preparing for the pre-lab.

An Individual Pre-lab Questionnaire:
====================================
0) What is the URL where you can find the qcam files that we are using for lab8?
   # http://www.eyetap.org/ece385/lab8/programs/
1) What is the default screen size in pixels that the qcam outputs?
   # found in qcam.conf
   # default width 160 and height 120
2) What is the name of the file format that qcam outputs, and how is this indicated in the file?
   # qcam outputs a PGM file
   # indicated by P5
3) What are the two formats found in the output file of qcam?
   # the first three lines are ascii
   # the remainder of the file is binary
4) What is the name of the user-defined function that writes the scan buffer out to a file, and what is the C function call it uses?
   # function qc_writepgm()
   # calls: fputc(scan[i],f);
5) What is the name of the variable in the qcam structure that holds the value of the width of the output image, and how do you access it?
   # from file qcam.h: struct qcam { int width, height;
   # accessed with: q->width

(Albert Tam, class of 1999)