31 July 2009

New Way of Capturing Images

New research in imaging may lead to advances for the Air Force in data encryption and high-resolution wide-area photography. Researchers at Princeton University used a special optical device called a nonlinear crystal, rather than an ordinary lens, to capture an image. Every image is made up of a collection of light waves, and a lens bends (refracts) the waves towards a detector. In a nonlinear material, by contrast, these waves interact with one another, generating new waves and distorting themselves in the process. The mixing is a form of physical (rather than numerical) encryption, but it would be useless if the process could not be reversed. The proposed algorithm provides a way of undoing the mixing and thus recovering the original signal. If the signal itself is encrypted from the beginning, this method provides an additional layer of protection.
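
The summary does not give the researchers' actual reconstruction algorithm, but the idea of physical encryption that can be numerically undone can be illustrated with a standard split-step model of wave mixing in a Kerr-type nonlinear medium. In this minimal sketch (all parameters illustrative), running the same propagation code with a negated step exactly reverses the scrambling:

```python
import numpy as np

def split_step(field, dz, steps, gamma, k0=1.0):
    """Symmetric split-step Fourier propagation of a 1D complex field
    through a Kerr-type nonlinear medium (nonlinear Schroedinger model).
    Passing a negative dz runs the propagation in reverse."""
    n = field.size
    kx = 2.0 * np.pi * np.fft.fftfreq(n)                      # transverse frequencies
    half_diff = np.exp(-1j * kx**2 / (2.0 * k0) * dz / 2.0)   # half diffraction step
    psi = field.astype(complex)
    for _ in range(steps):
        psi = np.fft.ifft(np.fft.fft(psi) * half_diff)        # diffraction (half)
        psi = psi * np.exp(1j * gamma * np.abs(psi)**2 * dz)  # nonlinear wave mixing
        psi = np.fft.ifft(np.fft.fft(psi) * half_diff)        # diffraction (half)
    return psi

x = np.linspace(-10.0, 10.0, 512)
signal = np.exp(-x**2)                                        # test "image" (one row)

scrambled = split_step(signal, dz=0.01, steps=500, gamma=1.0)     # physical encryption
recovered = split_step(scrambled, dz=-0.01, steps=500, gamma=1.0) # numerical reversal

print(np.max(np.abs(recovered - signal)))   # near zero: the mixing has been undone
```

The reversal works because each symmetric step (half diffraction, nonlinear phase, half diffraction) is unitary, so stepping with -dz inverts the forward steps one by one.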

The reversing algorithm also allows the researchers to capture information that is lost in other imaging systems. Experimentally, the method relies on recording both the intensity and the travel direction of the waves. This is done by taking a standard photograph of the object alone and then a second one with an added plane wave superimposed on the object. The result, called a hologram, is then fed into the numerical code. The researchers obtained photos of various objects using the image-capturing equipment, and in every instance the images combined a wide field of view with high resolution. They used an Air Force resolution chart, which is designed to check the quality of imaging systems. Imaging applications include optical systems that maintain their field of view as they zoom, sharper microscopes, improved lithography, and dynamic imaging of 3D objects.
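
Recovering direction as well as intensity from two photographs is the core trick of off-axis digital holography. The sketch below is a textbook reconstruction, not necessarily the Princeton group's actual pipeline, and it assumes a unit-amplitude tilted plane reference; it recovers the full complex field (amplitude and phase, hence direction) from the two intensity records described above:

```python
import numpy as np

def recover_field(i_obj, i_holo, carrier, width=20):
    """Recover the complex object field O(x) from two 1D intensity records:
    |O|^2 (object alone) and |O + R|^2 with a unit-amplitude tilted plane
    reference R = exp(2j*pi*carrier*x/N) (textbook off-axis holography)."""
    n = i_obj.size
    x = np.arange(n)
    cross = i_holo - i_obj - 1.0        # leaves O*conj(R) + conj(O)*R
    spectrum = np.fft.fft(cross)
    keep = np.zeros_like(spectrum)
    keep[carrier - width:carrier + width] = spectrum[carrier - width:carrier + width]
    sideband = np.fft.ifft(keep)        # isolates the conj(O)*R term
    return np.conj(sideband * np.exp(-2j * np.pi * carrier * x / n))

# Synthetic test: a smooth object field with nontrivial phase.
n, carrier = 1024, 128
x = np.arange(n)
obj = 0.5 * np.exp(-((x - n / 2) / 80.0)**2) * np.exp(1j * 0.01 * x)
ref = np.exp(2j * np.pi * carrier * x / n)
i_obj = np.abs(obj)**2
i_holo = np.abs(obj + ref)**2

print(np.max(np.abs(recover_field(i_obj, i_holo, carrier) - obj)))  # small residual
```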

More information:

http://www.sciencedaily.com/releases/2009/07/090714165100.htm

27 July 2009

Train Minds to Move Matter

Learning to move a computer cursor or robotic arm with nothing but thoughts may be no different from learning how to play tennis or ride a bicycle, according to a new study of how brains and machines interact. In experiments, signals from the brain's motor cortex were translated by a 'decoder' into deliberate movements of a computer cursor. The research, which was carried out in monkeys but is expected to apply to humans, involves a fundamental redesign of brain-machine experiments. In previous studies, the computer interfaces that translate thoughts into movements were given a new set of instructions each day, akin to waking up each morning with a new arm that you have to figure out how to use all over again. In the new experiments, monkeys learned how to move a computer cursor with their thoughts using just one fixed set of instructions and an unusually small group of brain cells that delivered the movement commands the same way each day. Electrodes are implanted directly into the brain to record activity from a population of 75 to 100 cells that help guide movement. As an animal moves a hand or arm, the activity pattern of those cells is recorded. Later the limb is immobilized, and researchers can predict what the animal wants to do with it by looking at the cells' activity; that pattern is then fed to the decoder. But because of the variability caused by motions of the electrodes and changes in brain cells, researchers had assumed that a different population of cells would control the movements each day. They recalibrated the decoder daily, and the subject had to relearn the task (move a cursor, reach with a robot arm) every time.
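
The article does not specify the decoder's mathematics, but a common minimal model maps the firing rates of the recorded cells linearly to cursor velocity. The sketch below (all data synthetic, names ours) fits such a decoder once from joystick calibration data and then holds it fixed across sessions, which is the stable-decoder design described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration session: firing rates of 15 cells recorded while
# the joystick produces known 2D cursor velocities.
n_cells, n_samples = 15, 2000
tuning = rng.normal(size=(n_cells, 2))              # hidden velocity tuning per cell
rates = rng.poisson(5.0, size=(n_samples, n_cells)).astype(float)
velocity = rates @ tuning + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder once by least squares, then keep it fixed from day to day.
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

def decode(firing_rates):
    """Map one vector of firing rates to a 2D cursor velocity."""
    return firing_rates @ weights

print(decode(rates[0]))   # decoded velocity for the first sample
print(velocity[0])        # the actual velocity it should approximate
```

With the decoder frozen, any improvement in cursor control has to come from the brain adjusting its firing patterns, which is exactly the learning effect the study reports.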

Researchers at the University of California, Berkeley, wondered what would happen if they kept the decoder constant while it measured the activity of just a few neurons observed to fire reliably during a given task. Could an initially random group of 10 to 15 neurons, with practice, be coaxed into forming a stable motor memory? Could the brain, not the decoder, do the learning? The team trained two monkeys to use a joystick to move a computer cursor to blue targets on a circle and extracted a decoder for the movements. The animals then practiced moving the cursor with their thoughts for 19 days. In the beginning, the cursor trajectories were hit or miss, but over time the pattern of cell firing stabilized and the monkeys developed a stable mental model for cursor control. This is exactly how you learn to ride a bike or play tennis: at first your movements are uncoordinated, but with time a motor memory is engraved in your brain. The researchers then decided to test this memory and changed the decoder; instead of moving the cursor to blue targets, for example, the targets changed to yellow. Within a couple of days the monkeys learned the new task using the same small group of cells, and they could switch back and forth between the tasks with ease, maintaining two mental maps that did not interfere with one another. This is like learning to play tennis on a clay court and then switching to a grass court, or like switching between a mountain bike and a road bike. The brain can acquire multiple skills using the same set of neurons to carry out different movements.

More information:

http://www.nytimes.com/2009/07/21/health/21brai.html?_r=1

23 July 2009

Slimmest Watch Phone

Samsung Electronics has introduced the world's slimmest watch phone. The gadget measures a mere 11.98 millimeters (0.47 inches) in thickness. The new watch phone retains its ultra-slim profile because its circuit board is composed of 42 individual components that were re-sized to reduce the device's thickness. Samsung's slimmest watch phone will be available in the first week of July, launching first in France with a price tag of 459 euros (about $639).

Samsung plans to launch this product in other European nations as well. The slimmest watch phone sports a 1.76-inch touch screen that lets users send and receive e-mails, listen to MP3s, and make and receive phone calls. The tiny device also comes with built-in Bluetooth and voice-recognition commands. One of Samsung's fiercest rivals, LG Electronics, has developed its own slim watch phone, which will hit commercial markets in August 2009.

More information:

http://trendsupdates.com/samsung-unveils-an-all-new-slimmest-watch-phone/

20 July 2009

DIY In Second Life

Anyone who wants to can now produce their own vehicle in a factory on the 'Second Life' Internet platform. They can program the industrial robots, and transport and assemble the individual parts themselves, while learning platforms provide relevant background information. In the 'transparent factory', car enthusiasts can watch vehicles being assembled part by part, and a new system set up by researchers of the Fraunhofer Institute for Manufacturing Engineering and Automation IPA even enables users to try their own hand at producing a quad bike, a four-wheeled motorbike. They can switch on conveyor belts, program industrial robots, and paint the frame themselves. At the end, they can zoom out of the factory hall with their finished product without paying a single cent, because the factory does not exist in the real world but on the Internet platform of Second Life, a virtual world through which users move in the form of a virtual figure known as an 'avatar'.

Second Life has grown steadily: while in 2007 between 20,000 and 40,000 people were online at any given time, this number has now risen to between 50,000 and 80,000. In the factory, users first indicate which quad model they would like to produce. Powerful or fuel-saving? Black, silver or red? What type of wheel rims? They can choose from a variety of models as they please. Once their avatar has made a choice, production can begin: the parts list is sent out, and all components are manufactured, assembled and subjected to a quality inspection. The avatar can watch the production process and interact at certain stages. Learning platforms located at various points in the factory hall provide users with relevant background information: How is the production process controlled? How does a press work?

More information:

http://www.sciencedaily.com/releases/2009/07/090707094704.htm

19 July 2009

Tracking Home Water Use

When a cell phone or credit-card bill arrives, each call or purchase is itemized, making it possible to track trends in calling or spending, which is especially helpful if you use a phone plan with limited minutes or are trying to stick to a budget. Within the next few years, household utilities could be itemized as well, allowing residents to track their usage and see which devices consume the most electricity, water, or gas. New sensor technology, consisting of a single device per utility that builds a picture of household activity by tracing electrical wiring, plumbing, and gas lines back to specific devices or fixtures, could make this far simpler to implement. Researchers at the University of Washington, in Seattle, developed the sensors, which plug directly into a building's existing infrastructure, eliminating the need for an elaborate network of sensors throughout the structure. For example, an electrical sensor plugs into a single outlet and monitors characteristic noise on the electrical lines that can be linked to specific devices, such as cell-phone chargers, refrigerators, DVD players, and light switches.

A gas sensor attaches to a gas line and monitors pressure changes that can be correlated with turning on a stove or furnace, for instance. Now the researchers have developed a pressure sensor that fits around a water pipe. The technology, called Hydrosense, can detect leaks and trace them back to their source, and can recognize characteristic pressure changes that indicate that a specific fixture or appliance is in use. The researchers hope to incorporate electrical, gas, and water sensors into a unified technology, and one of them, Shwetak Patel, has cofounded a soon-to-be-named startup that he hopes will start offering combined smart meters to utility companies within the next year or so. Smart sensors have become increasingly popular over the past few years as more people have become interested in cutting their utility bills and minimizing the resources they consume. A number of startups offer to connect utility providers and consumers so that resource use can be tracked over the Internet. So far, however, no company or utility has been able to provide the sort of fine-grained resource-usage data that Patel hopes to offer with his startup.
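
How a single pressure sensor could tell fixtures apart is not detailed in the summary, but one plausible scheme (our illustration, not necessarily how Hydrosense works) is template matching: store a characteristic pressure transient for each fixture and attribute each new event to the nearest stored template. The signatures below are made up for the example:

```python
import numpy as np

# Illustrative fixture signatures: short pressure transients in arbitrary units.
SIGNATURES = {
    "toilet":     np.array([0.0, -1.8, -1.2, -0.4,  0.0]),
    "faucet":     np.array([0.0, -0.6, -0.5, -0.5, -0.4]),
    "dishwasher": np.array([0.0, -1.0,  0.2, -1.0,  0.1]),
}

def classify_event(transient):
    """Attribute a pressure transient to the fixture whose stored
    signature is closest in Euclidean distance."""
    return min(SIGNATURES, key=lambda name: np.linalg.norm(SIGNATURES[name] - transient))

event = np.array([0.0, -1.7, -1.1, -0.5, 0.1])   # pressure drop seen at the sensor
print(classify_event(event))                      # -> 'toilet'
```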

More information:

http://www.technologyreview.com/computing/22947/

14 July 2009

Computers May Read Thoughts

It sounds like something from a science fiction movie: sensors surgically inserted in the brain to understand what you're thinking, and machines that speak, move or process information based on the fleeting thoughts in a person's imagination. But it is not completely fictional; the technology is out there. A researcher in Wisconsin recently announced the ability to 'think' updates onto the Twitter website. Locally, researchers at Washington University have developed even deeper ways of tying humans and computers together. The main idea is to connect people with devices and machines directly through their thoughts. The research is a component of brain-computer interface (BCI) technology, which decodes brainwaves in a certain part of the brain. Computers are then programmed to understand those signals and perform an action accordingly. So far, only signals for imagined actions have been decoded; moving on to decoding speech will make communication from the mind to computers easier. Ultimately, the technology will better connect humans and machines, and for those with disabilities it will connect them more closely to the world. In the past, researchers used the technology to develop video games that can be played with the mind, where players control the game by imagining an action.

For example, imagining the movement of the left hand may mean moving left, whereas imagining the movement of the tongue may mean moving up. The Space Invaders video game has been tested on only 15 to 20 people so far, because the sensors that read those brain signals must be placed directly on or in the brain through surgery. Since patient testing requires surgery, children with epilepsy are given the chance to participate, as they already have similar equipment placed in their brains to locate electrical signals. Since the introduction of BCI technology in the late 1980s, researchers have been thinking of practical applications, though they are still in the process of making them available. Research that will give people better control of prosthetic limbs is being conducted at Washington University's Computer Engineering Department. Smart said this application of BCI technology would be available in about 10 years and would allow people with prosthetic arms to better grasp items. Researchers are still attacking some obstacles: for example, the implant in the brain needs to be made much smaller. They also have to figure out how to teach caretakers to use the system and bring the cost down; the equipment used in labs currently costs tens of thousands of dollars.
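
The mapping described above (imagined left-hand movement means 'left', imagined tongue movement means 'up') can be sketched as a two-stage pipeline: a classifier labels a window of neural signal as one imagined action, and a lookup table turns that label into a game command. The toy rule below, which keys on mu-band power, is our own illustration rather than the Washington University classifier:

```python
import numpy as np

# Game commands keyed by the imagined action, as described in the article.
ACTION_TO_COMMAND = {
    "imagine_left_hand": "MOVE_LEFT",
    "imagine_tongue": "MOVE_UP",
}

def band_power(window, low, high, fs=1000):
    """Average spectral power of a signal window in a frequency band (Hz)."""
    spectrum = np.abs(np.fft.rfft(window))**2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

def classify(window):
    """Toy rule: stronger mu-band (8-12 Hz) than beta-band (18-26 Hz) power
    is read as hand imagery. A real system would train a classifier on
    many channels rather than use a fixed threshold."""
    return ("imagine_left_hand"
            if band_power(window, 8, 12) > band_power(window, 18, 26)
            else "imagine_tongue")

window = np.sin(2 * np.pi * 10 * np.arange(1000) / 1000)  # synthetic 10 Hz activity
print(ACTION_TO_COMMAND[classify(window)])                 # -> MOVE_LEFT
```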

More information:

http://www.stltoday.com/stltoday/news/stories.nsf/sciencemedicine/story/D8AE00548427D1F4862575EB0003BBD6?OpenDocument

07 July 2009

3D Nano Measurements

From motion sensors to computer chips, many products of daily life contain components whose functioning depends on minuscule structures measuring thousandths, or even millionths, of a millimetre. These micro and nano structures must be manufactured and assembled with the highest precision so that the overall system functions smoothly; the details matter. Scientists at the Physikalisch-Technische Bundesanstalt (PTB) have now developed a metrological scanning probe microscope into a micro and nano coordinate measuring instrument. It allows dimensional quantities to be measured with nanometre resolution, even on three-dimensional objects, over an extraordinarily large measurement range of 25 mm x 25 mm x 5 mm. The new device is already in extensive use at PTB, largely for calibration orders from industry and research. Such small dimensions can often be grasped only when transferred to everyday life. If, for example, someone lost a cube of sugar within an area of 25 square kilometres, the new micro and nano coordinate measuring instrument would not only be able to find it, but could also determine its exact position and shape. This applies not only to plane surfaces but also to three-dimensional landscapes, for example if the cube of sugar were stuck to a steep wall.
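
To put rough numbers on that analogy (our own back-of-the-envelope check, assuming a sugar cube of about 16 mm and reading the 25 square kilometres as a 5 km x 5 km area):

\[
\frac{5\ \mathrm{km}}{25\ \mathrm{mm}} = 2 \times 10^{5}, \qquad \frac{16\ \mathrm{mm}}{2 \times 10^{5}} = 80\ \mathrm{nm}
\]

Finding the cube in that area thus corresponds to pinpointing a feature of roughly 80 nm anywhere in the 25 mm x 25 mm range, comfortably above the instrument's nanometre resolution.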

As components with structures in the micro- and nanometre range are used ever more widely in industry, dimensional metrology on such structures is becoming increasingly important. To meet the growing requirements for 3D measurements of micro and nano structures, 3D measuring probes newly developed at PTB were incorporated into a metrological scanning probe microscope based on a commercial nano-positioning system with integrated laser displacement sensors from the company SIOS Messtechnik GmbH. The new functionality provided by the measuring probe and the software extends the scanning probe microscope into a metrological micro/nano coordinate measuring machine (CMM), which also allows standard-conforming 3D measurements to be performed on micro and nano structures. International intercomparisons on step-height standards and lattice structures have shown that the measuring system is one of the most precise of its kind worldwide. The new instrument is available for dimensional precision measurements with nanometre resolution on 3D micro and nano structures such as micro gears, micro balls, hardness indenters and nano lattice standards, as well as for comparisons of material measures; moreover, it serves as a platform for research and development tasks. It is an important link between nano, micro and macro coordinate metrology.

More information:

http://www.sciencedaily.com/releases/2009/07/090706090557.htm

03 July 2009

Robot Navigates Like A Human

European researchers have developed a robot capable of moving autonomously using humanlike visual processing. The robot is helping the researchers explore how the brain responds to its environment while the body is in motion, and what they discover could lead to machines that are better able to navigate cluttered environments. The robot consists of a wheeled platform with a robotic 'head' that uses two cameras for stereoscopic vision. It can turn its head and shift its gaze up and down or sideways to gauge its surroundings, and can quickly measure its own speed relative to its environment. The machine is controlled by algorithms designed to mimic different parts of the human visual system. Rather than capturing and mapping its surroundings over and over in order to plan its route, the way most robots do, the European machine uses a simulated neural network to update its position relative to the environment, continually adjusting to each new input. This mimics human visual processing and movement planning.

The robot mimics several different functions of the human brain (object recognition, motion estimation, and decision making) to navigate around a room, heading for specific targets while avoiding obstacles and walls. Ten European research groups, with expertise spanning neuroscience, computer science, and robotics, designed and built the robot through a project called Decisions in Motion. The group's challenge was to pull together traditionally disparate fields of neuroscience and integrate them into a 'coherent model architecture'. To develop a real, humanlike computer model for navigation, the researchers needed to incorporate all these aspects into one system. Once the robot had been given the software, the researchers found that it did indeed move like a human. When moving slowly, it passed close to obstacles, because it knew it could recalculate its path without changing course too much. When moving more quickly toward the target, the robot gave obstacles a wider berth, since it had less time to calculate a new trajectory.
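
The speed-dependent behaviour in the last two sentences can be captured with a very simple rule: demand a clearance that grows with the distance covered during one replanning cycle. The sketch below is our own toy illustration of that idea, with made-up names and constants, not code from the Decisions in Motion project:

```python
import numpy as np

def required_clearance(speed, replan_time=0.5, margin=0.2):
    """Clearance (m) grows with speed: the distance covered during one
    replanning cycle plus a fixed safety margin."""
    return speed * replan_time + margin

def steer(heading_to_target, obstacle_bearing, obstacle_distance, speed):
    """Head toward the target, but veer away from an obstacle whose
    distance falls below the speed-dependent clearance."""
    clearance = required_clearance(speed)
    if obstacle_distance < clearance:
        # Turn away from the obstacle, in proportion to how far it intrudes.
        intrusion = 1.0 - obstacle_distance / clearance
        return heading_to_target - np.sign(obstacle_bearing) * intrusion * np.pi / 4
    return heading_to_target

print(steer(0.0, 0.3, 0.4, speed=0.5))  # slow: passes close, small correction
print(steer(0.0, 0.3, 0.4, speed=2.0))  # fast: same obstacle forces a wider berth
```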

More information:

http://www.technologyreview.com/computing/22946/