Invisibility has been on humanity's wish list at least since Egyptian times. With recent advances in optics and computing, however, this elusive goal is no longer purely imaginary.
Last spring, Susumu Tachi, an engineering professor at the University of Tokyo, demonstrated a crude invisibility cloak. Through the clever application of some dirt-cheap technology, the Japanese inventor has brought personal invisibility a step closer to reality.
Tachi's cloak - a shiny raincoat that serves as a movie screen, showing imagery from a video camera positioned behind the wearer - is more gimmick than practical prototype. Nonetheless, from the right angle and under controlled circumstances, it does make a sort of ghost of the wearer. And, unlike traditional camouflage, it's most effective when either the wearer or the background is moving (but not both). You don't need a university lab to check it out: Stick a webcam on your back and hold your laptop in front of you, screen facing out. Your friends will see right through you. It's a great party trick.
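For tinkerers who want to try it, here's a minimal sketch of that trick in Python with OpenCV; the camera index and the full-screen window setup are assumptions you may need to adjust for your own hardware:

```python
# Minimal sketch of the laptop party trick: replay the rear-facing webcam
# full-screen so the laptop appears transparent to anyone standing in front.
# Assumes OpenCV (pip install opencv-python) and a webcam at index 0.
import cv2

cap = cv2.VideoCapture(0)                      # the webcam on your back
cv2.namedWindow("see-through", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("see-through", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()                     # grab the view behind you
    if not ok:
        break
    cv2.imshow("see-through", frame)           # replay it on the screen facing out
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to become visible again
        break

cap.release()
cv2.destroyAllWindows()
```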
Such parlor tricks aren't going to fool anyone for more than a fraction of a second. Where is Harry Potter's cloak, wrapped around the student wizard as he wanders the halls of Hogwarts undetected? What about James Bond's disappearing Aston Martin in Die Another Day? The extraterrestrial camouflage suit in the 1987 movie Predator? Wonder Woman's see-through jet? It's not difficult to imagine a better system than Tachi's. In fact, invisibility that would satisfy any wizard - not to mention any spy, thief, or soldier - is closer than you might think.
US Defense Department press releases citing "adaptive," "advanced," and "active" camouflage suggest that the government is working on devices like this. If so, it's keeping them under wraps. However, NASA's Jet Propulsion Laboratory has published a preliminary design for an invisible vehicle, and battalions of armchair engineers have weighed in with gusto on newsgroups and blogs. As it happens, most of the schemes that have been advanced overlook the complexities of the problem. Invisibility isn't a simple matter of sensors that read the light beams on one side of an object and LEDs or LCDs that reproduce those beams on the other. In fact, such a system would work about as well as the laptop party trick with the webcam's lens removed: Objects right up against the sensors would produce blurry images on the display, but a few centimeters away they'd disintegrate into a featureless gray haze.
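A rough back-of-the-envelope model shows why. Without a lens, each bare sensor accepts light from a wide cone of directions, so a single point of light behind the cloak smears its contribution across a patch of sensors that grows with distance. The 60-degree acceptance angle below is an illustrative assumption, not a measured value:

```python
# Why a lensless sensor-to-display scheme blurs: a point of light at distance d
# illuminates every bare sensor within its cone of view, so its "image" - and
# the display's reproduction of it - is smeared over a patch roughly
# 2 * d * tan(theta) wide.
import math

half_angle = math.radians(60)                  # assumed acceptance half-angle of a bare sensor
for d_cm in (0.1, 1, 5, 20, 100):              # distance of the object behind the cloak, in cm
    blur_cm = 2 * d_cm * math.tan(half_angle)  # width of the smear on the display
    print(f"object {d_cm:6.1f} cm away -> smeared across ~{blur_cm:6.1f} cm of display")
```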
An invisibility cloak, if it's going to dupe anyone who might see it, needs to represent the scene behind its wearer accurately from any angle. Moreover, since any number of people might be looking through it at any given moment, it has to reproduce the background from all angles at once. That is, it has to project a separate image of its surroundings for every possible perspective.
Impossible? Not really, just difficult. Rather than one video camera, we'll need at least six stereoscopic pairs (facing forward, backward, right, left, upward, and downward) - enough to capture the surroundings in all directions. The cameras will transmit images to a dense array of display elements, each capable of aiming thousands of light beams, each beam on its own trajectory. And what imagery will these elements project? A virtual scene derived from the cameras' views, making it possible to synthesize any perspective. Of course, keeping this scene updated and projected realistically onto the cloak's display fabric will require fancy software and a serious wearable computer.
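In pseudocode terms, the heart of the system is a light-field renderer: each display element emits a different color along each outgoing direction, namely whatever the reconstructed background shows along the line of sight passing through that element. The sketch below is a rough outline under those assumptions, with a stubbed-out scene lookup standing in for the real reconstruction:

```python
# Sketch of the core rendering idea: a display element must emit, toward each
# possible observer, the color of the background point that lies on the same
# line of sight behind the wearer. The scene lookup is a stub; in a real
# system it would ray-cast into the 3-D scene built from the six camera pairs.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def scene_color(origin, direction):
    """Color of the reconstructed background along a ray (stubbed out)."""
    return (128, 128, 255)                     # placeholder: flat blue sky

def beam_color(element_pos, toward_observer):
    # The observer looks along -toward_observer; the background they should
    # see lies on the continuation of that line past the cloak, so we trace
    # onward from the element in the opposite direction.
    away_from_observer = Vec3(-toward_observer.x, -toward_observer.y, -toward_observer.z)
    return scene_color(element_pos, away_from_observer)

# One such call per display element, per outgoing beam direction, per frame.
```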
Many of the technical hurdles have already been overcome. Off-the-shelf miniature color cameras can serve as suitable light sensors. As for the display, to remain unseen at a Potteresque distance of, say, 2 meters, the resolution need not be much finer than the granularity of human vision at that distance (about 289 pixels per square centimeter). LEDs this size are readily available. Likewise, color isn't a problem - 16-bit displays are common and ought to suffice.
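That figure is easy to check. Assuming the textbook value of about 1 arcminute for the eye's resolving power, the arithmetic works out like this:

```python
# Back-of-the-envelope check of the display resolution needed at 2 meters,
# assuming the eye resolves details about 1 arcminute across.
import math

acuity_rad = math.radians(1 / 60)              # ~1 arcminute
distance_m = 2.0                               # the "Potteresque" viewing distance
pixel_pitch_m = distance_m * acuity_rad        # smallest detail resolvable at 2 m
pixels_per_cm = 0.01 / pixel_pitch_m           # linear pixel density

print(f"pixel pitch : {pixel_pitch_m * 1000:.2f} mm")
print(f"density     : {pixels_per_cm:.0f} px/cm, ~{pixels_per_cm ** 2:.0f} px/cm^2")
# -> pixels about 0.58 mm across, ~17 per cm, on the order of 290 per square cm
```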
But it will take more than off-the-shelf parts to make the cloak's image bright enough to blend in with the daytime sky. If the effect is to work in all lighting conditions, the display must be able to reproduce anything from the faintest flicker of color perceptible to the human eye (1 milliwatt per square meter) to the glow of the open sky (150 watts per square meter). Actually, the problem is worse than that: According to Rich Gossweiler at HP Labs, the sun is 230,000 times more intense than the sky surrounding it. If we want the cloak to be able to pass in front of the sun without looking hazy or casting shadows, we'll need to make it equally bright. Of course, this would put severe demands on the display technology - LEDs just ain't that brilliant - and it would increase battery size or shrink battery life accordingly. So let's ignore the sun and take our chances. An average TV screen looks blank in full daylight, so we'll need something brighter, more along the lines of a traffic light.
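Put in numbers, the dynamic range those figures imply looks like this (the bit counts are just log-base-2 of the contrast ratios):

```python
# Dynamic range implied by the brightness figures in the text.
import math

faintest = 1e-3          # W/m^2: dimmest perceptible flicker
open_sky = 150.0         # W/m^2: glow of the open sky
sun_factor = 230_000     # the sun vs. the surrounding sky (Gossweiler's figure)

sky_range = open_sky / faintest
full_range = sky_range * sun_factor            # if the cloak also had to mimic the sun

print(f"sky only : {sky_range:,.0f}:1  (~{math.log2(sky_range):.0f} bits per channel)")
print(f"with sun : {full_range:,.0f}:1  (~{math.log2(full_range):.0f} bits per channel)")
# -> 150,000:1 (~17 bits) without the sun, over 34 billion:1 (~35 bits) with it;
#    one more reason to let the sun slide and settle for traffic-light brightness.
```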
Another problem is response time. Like a TV screen, the cloak's display must be able to update faster than the eye's ability to perceive flickering. It has to register motion in real time without the blurring, ghosting, smearing, and judder that plague today's low-end monitors. A laptop LCD screen isn't going to cut it. A lattice of superbright LED microarrays probably will.
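To get a feel for the numbers, suppose the cloak's wearer is simply walking past you; the figures below are illustrative assumptions, not requirements taken from Tachi's design:

```python
# Rough feel for the update-rate problem: on a walking wearer, the background
# image must slide across the cloak's fabric at roughly walking speed. How many
# of the ~0.58 mm pixels does it jump between frames at various refresh rates?
slide_speed_mm_s = 1500.0      # ~1.5 m/s, an assumed walking pace
pixel_pitch_mm = 0.58          # from the resolution estimate above

for hz in (30, 60, 240, 1000):
    jump_px = slide_speed_mm_s / hz / pixel_pitch_mm
    print(f"{hz:5d} Hz -> image jumps ~{jump_px:5.1f} pixels per frame")
# Even fast refresh rates leave multi-pixel jumps between frames, so the pixels
# themselves must switch cleanly: slow LCD transitions smear, while LEDs can
# switch in microseconds.
```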
The real challenge, however, is turning the video images into a realistic picture. The view from a pair of cameras strapped to your body is different from the perspective of an observer even a short distance away. The observer can see things the cameras can't, thanks to parallax - the way viewing angles change with distance.
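A quick calculation shows how fast the two views diverge; the 1-meter offset between camera and observer is an illustrative assumption:

```python
# Parallax in numbers: how much do the camera's viewing angle and an observer's
# viewing angle toward the same background object differ, if the observer
# stands 1 meter to the side of the camera?
import math

baseline_m = 1.0                               # sideways offset between camera and observer
for background_m in (2, 5, 20, 100):           # distance to the background object
    shift_deg = math.degrees(math.atan2(baseline_m, background_m))
    print(f"background at {background_m:4d} m -> views differ by {shift_deg:5.1f} degrees")
# Nearby backgrounds look wildly different from the two positions; only very
# distant scenery lets one camera's view stand in for every observer's.
```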