27 November 2013

3D Imaging Using Nash's Theorem

UT Dallas computer scientists have developed a technique to make 3D images that finds practical applications of a theory created by a famous mathematician. This technique uses anisotropic triangles – triangles with sides that vary in length depending on their direction – to create 3D mesh computer graphics that more accurately approximate the shapes of the original objects, and in a shorter amount of time than current techniques. These types of images are used in movies, video games and computer modeling of various phenomena, such as the flow of water or air across the Earth, the deformation and wrinkles of clothes on the human body, or mechanical and other engineering designs. Researchers hope this technique will also lead to greater accuracy in models of human organs to more effectively treat human diseases, such as cancer. The technique finds a practical application of the Nash embedding theorem, which was named after mathematician John Forbes Nash Jr. The computer graphics field represents shapes in the virtual world through triangle meshes.

Traditionally, it is believed that isotropic triangles – where each side of the triangle has the same length regardless of direction – are the best representation of shapes. However, the aggregate of these uniform triangles can create edges or bumps that are not on the original objects. Because triangle sides can differ in anisotropic images, the technique gives the user flexibility to represent object edges or folds more accurately. Researchers found that replacing isotropic triangles with anisotropic triangles in the particle-based method of creating images resulted in smoother representations of objects. Depending on the curvature of the objects, the technique can generate the image up to 125 times faster than common approaches. Objects rendered with anisotropic triangles are more accurate, a difference most noticeable to the human eye in the wrinkles and movement of clothes on human figures. The next step of this research is moving from representing the surface of 3D objects to representing 3D volume.
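The curvature intuition can be made concrete with a small sizing rule: on a surface that bends sharply in one direction but not the other (a cylinder, say), triangle edges can be long along the flat direction and short across the curved one. The sketch below uses a standard chord-deviation bound; it is an illustration of the general idea only, not the UT Dallas algorithm, and the tolerance values are invented.

```python
import math

def anisotropic_edge_lengths(k1, k2, eps=0.01, h_max=10.0):
    """Target edge lengths along the two principal curvature directions.

    A straight chord of length h across curvature k deviates from the
    surface by roughly k * h^2 / 8, so keeping that deviation under
    `eps` gives h = sqrt(8 * eps / |k|), capped at h_max where the
    surface is flat.  (Illustrative sizing rule, not the paper's.)
    """
    def h(k):
        return min(h_max, math.sqrt(8.0 * eps / abs(k))) if k != 0 else h_max
    return h(k1), h(k2)

# A cylinder of radius 1: curved around (k = 1), flat along the axis (k = 0).
h_around, h_along = anisotropic_edge_lengths(1.0, 0.0)
```

The resulting triangles are stretched along the cylinder's axis, which is exactly the anisotropy the article describes: far fewer triangles for the same geometric accuracy than a uniform, isotropic sizing would need.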

More information:

23 November 2013

New Algorithms Improve Animations

A team led by Disney Research, Zürich has developed a method to more efficiently render animated scenes that involve fog, smoke or other substances that affect the travel of light, significantly reducing the time necessary to produce high-quality images or animations without grain or noise. The method, called joint importance sampling, helps identify potential paths that light can take through a foggy or underwater scene that are most likely to contribute to what the camera – and the viewer – ultimately sees. In this way, less time is wasted computing paths that aren't necessary to the final look of an animated sequence. Light rays are deflected or scattered not only when they bounce off a solid object, but also as they pass through aerosols and liquids. The effect of clear air is negligible for rendering algorithms used to produce animated films, but realistically producing scenes including fog, smoke, smog, rain, underwater scenes, or even a glass of milk requires computational methods that account for these participating media. So-called Monte Carlo algorithms are increasingly being used to render such phenomena in animated films and special effects. These methods operate by analyzing a random sampling of possible paths that light might take through a scene and then averaging the results to create the overall effect. 

But researchers explained that not all paths are created equal. Some paths end up being blocked by an object or surface in the scene; in other cases, a light source may simply be too far from the camera to have much chance of being seen. Calculating those paths can be a waste of computing time or, worse, averaging them may introduce error, or noise, that creates unwanted effects in the animation. Computer graphics researchers have tried various ‘importance sampling’ techniques to increase the probability that the random light paths calculated will ultimately contribute to the final scene and keep noise to a minimum. Some techniques trace the light from its source to the camera; others from the camera back to the source. Some are bidirectional – tracing the light from both the camera and the source before connecting the two. Unfortunately, even such sophisticated bidirectional techniques compute the light and camera portions of the paths independently, without knowledge of each other, so they are unlikely to construct full light paths that make a strong contribution to the final image. By contrast, the joint importance sampling method developed by the Disney Research team chooses the locations along the random paths with mutual knowledge of the camera and light source locations.
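The variance argument can be demonstrated with a toy Monte Carlo estimator. The sketch below importance-samples scattering distances along a single ray in proportion to the medium's transmittance, a standard participating-media technique (far simpler than Disney's joint importance sampling; the medium and the light-falloff integrand are invented for illustration):

```python
import math, random

random.seed(0)
SIGMA, D = 2.0, 5.0               # extinction coefficient, ray length
NORM = 1.0 - math.exp(-SIGMA * D) # normaliser for the truncated exponential

def contribution(t):
    """Toy integrand: transmittance times inverse-square light falloff."""
    return math.exp(-SIGMA * t) / (1.0 + t) ** 2

def estimate(n, use_importance):
    """Monte Carlo estimate of the integral of `contribution` over [0, D]."""
    total = 0.0
    for _ in range(n):
        if use_importance:
            # Sample t with pdf proportional to transmittance exp(-SIGMA*t).
            t = -math.log(1.0 - random.random() * NORM) / SIGMA
            pdf = SIGMA * math.exp(-SIGMA * t) / NORM
        else:
            t = random.random() * D   # uniform along the ray
            pdf = 1.0 / D
        total += contribution(t) / pdf
    return total / n

def variance(n_runs, use_importance):
    """Empirical variance of single-sample estimates."""
    xs = [estimate(1, use_importance) for _ in range(n_runs)]
    m = sum(xs) / n_runs
    return sum((x - m) ** 2 for x in xs) / n_runs
```

Both strategies converge to the same mean, but the importance-sampled estimator does so with a small fraction of the noise, because it rarely wastes samples deep in the medium where almost no light survives; joint importance sampling extends this idea by also accounting for the camera side of the path.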

More information:

21 November 2013

Brain's Crowdsourcing Software

Over the past decade, popular science has been suffering from neuromania. The enthusiasm came from studies showing that particular areas of the brain ‘light up’ when you have certain thoughts and experiences. It's mystifying why so many people thought this explained the mind. What have you learned when you say that someone's visual areas light up when they see things? People still seem to be astonished at the very idea that the brain is responsible for the mind—a bunch of gray goo makes us see! It is astonishing. But scientists knew that a century ago; the really interesting question now is how the gray goo lets us see, think and act intelligently. New techniques are letting scientists understand the brain as a complex, dynamic, computational system, not just a collection of individual bits of meat associated with individual experiences. These new studies come much closer to answering the ‘how’ question. Fifty years ago researchers made a great Nobel Prize-winning discovery. They recorded the signals from particular neurons in cats' brains as the animals looked at different patterns. The neurons responded selectively to some images rather than others. One neuron might only respond to lines that slanted right, another only to those slanting left. But many neurons don't respond in this neatly selective way. This is especially true for the neurons in the parts of the brain that are associated with complex cognition and problem-solving, like the prefrontal cortex. Instead, these cells are a mysterious mess—they respond idiosyncratically to different complex collections of features. What were these neurons doing?

In a new study researchers at Columbia University and the Massachusetts Institute of Technology taught monkeys to remember and respond to one shape rather than another while recording the animals' brain activity. But instead of just looking at one neuron at a time, they recorded the activity of many prefrontal neurons at once. A number of them showed weird, messy ‘mixed selectivity’ patterns. One neuron might respond when the monkey remembered just one shape or only when it recognized the shape but not when it recalled it, while a neighboring cell showed a different pattern. To analyze how the whole group of cells worked the researchers turned to the techniques of computer scientists who are trying to design machines that can learn. Computers aren't made of carbon, of course, let alone neurons. But they have to solve some of the same problems, like identifying and remembering patterns. The techniques that work best for computers turn out to be remarkably similar to the techniques that brains use. Essentially, they found the brain was using the same general sort of technique that Google uses for its search algorithm. You might think that the best way to rank search results would be to pick out a few features of each Web page like ‘relevance’ or ‘trustworthiness’. With neurons that detect just a few features, you can capture those features and combinations of features, but not much more. To capture more complex patterns, the brain does better by amalgamating and integrating information from many different neurons with very different response patterns.
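The advantage of mixed selectivity can be shown in a few lines: a population of purely selective neurons cannot support a linear readout of a conjunction of two task variables (an exclusive-or), while adding one nonlinearly mixed neuron makes the same task linearly decodable. A sketch with synthetic responses, not the study's recordings:

```python
import numpy as np

# Two binary task variables (say, shape identity and task phase); the
# required answer is their XOR, a conjunction no single variable predicts.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def pure_population(X):
    """Each neuron responds selectively to exactly one task variable."""
    return X

def mixed_population(X):
    """Add one neuron with nonlinear mixed selectivity: it fires only
    for the conjunction of both variables."""
    return np.column_stack([X, X[:, 0] * X[:, 1]])

def readout_accuracy(F, y):
    """Fit a linear readout by least squares, threshold it, and score it."""
    A = np.column_stack([F, np.ones(len(F))])   # append a bias unit
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ w > 0.5).astype(int) == y))

acc_pure = readout_accuracy(pure_population(X), y)    # stuck below 100%
acc_mixed = readout_accuracy(mixed_population(X), y)  # perfect readout
```

No linear readout of the pure population can ever reach 100% on this task (summing the constraints for the four input patterns yields a contradiction), whereas the messy mixed-selectivity population is read out perfectly, which is the computational point the study makes.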

More information:

18 November 2013

Computational Creativity

IBM has built a computational creativity machine that creates entirely new and useful stuff from its knowledge of existing stuff. But can computers be creative? That’s a question likely to generate controversial answers. It also raises some important issues, like how to define creativity. Seemingly unafraid of the controversy, IBM has darted into the fray by answering this poser with a resounding ‘yes’. Computers can be creative, they say, and to prove it they have built a computational creativity machine that produces results that a knowledgeable human would consider novel, useful and even valuable—the hallmarks of genuine creativity. IBM’s chosen field for this endeavour is cooking. The company’s creativity machine produces recipes based on chosen ingredients or cooking styles. And they’ve asked professional chefs to evaluate the results and say the feedback is promising. Computational machines have evolved a great deal since they were first used in war for code-cracking and gun-aiming and in business for storing, tabulating and processing data. But it has taken some time for these machines to match human capabilities. In 1997, for instance, IBM’s Deep Blue machine used deductive reasoning to beat the world chess champion for the first time. Its successor, a computer called Watson, went a step further in 2011 by applying inductive reasoning to huge datasets to beat human experts on the TV game show, Jeopardy!.

Their first problem, of course, is to define creativity. The choice of problem, to create new recipes, is clearly a human decision. The team then gathers information by downloading a large corpus of recipes that include dishes from all over the world and use a wide variety of ingredients, combinations of flavours, serving suggestions and so on. They also download related information, such as descriptions of regional cuisines from Wikipedia, the concentration of flavour ingredients in different foodstuffs from the ‘Volatile Compounds in Food’ database, and Fenaroli’s Handbook of Flavor Ingredients. So big data lies at the heart of this approach. They then develop a method for combining ingredients in ways that have never been attempted, using a ‘novelty algorithm’ that determines how surprising the resulting recipe will appear to an expert observer. This relies on factors such as ‘flavour pleasantness’. The computer assesses this using a training set of flavours that people find pleasant, as well as the molecular properties of the food that produce these flavours, such as its surface area, heavy atom count, complexity, rotatable bond count, hydrogen bond acceptor count and so on. The last stage is an interface that allows a human expert to enter some starting ingredients, such as pork belly or salmon fillet, and perhaps a choice of cuisine. The computer generates a number of novel dishes, explaining its reasoning for each. Of these, the expert chooses one and then makes it.
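A toy version of such a novelty-plus-pleasantness score might look like the following. Every recipe, pair score and constant here is an invented stand-in for the corpus and flavour-compound model described above, not IBM's actual data or algorithm:

```python
import itertools, math

# Hypothetical miniature corpus of known recipes (sets of ingredients).
CORPUS = [
    {"pork", "apple", "sage"},
    {"pork", "apple", "cider"},
    {"salmon", "dill", "lemon"},
    {"salmon", "lemon", "caper"},
    {"apple", "cinnamon", "sugar"},
]

# Hypothetical pleasantness of ingredient pairs (a stand-in for the
# flavour-compound model above); unlisted pairs default to 0.5.
PAIR_PLEASANTNESS = {
    frozenset({"pork", "apple"}): 0.9,
    frozenset({"salmon", "lemon"}): 0.9,
    frozenset({"pork", "lemon"}): 0.6,
    frozenset({"salmon", "cinnamon"}): 0.2,
}

def pair_frequency(a, b):
    """Fraction of corpus recipes that already combine the two ingredients."""
    return sum(1 for r in CORPUS if a in r and b in r) / len(CORPUS)

def score(recipe):
    """Surprise = mean negative log-frequency of each ingredient pair
    (rare pairings score high); pleasantness = mean pair pleasantness."""
    pairs = list(itertools.combinations(sorted(recipe), 2))
    surprise = sum(-math.log(pair_frequency(a, b) + 1e-3)
                   for a, b in pairs) / len(pairs)
    pleasant = sum(PAIR_PLEASANTNESS.get(frozenset(p), 0.5)
                   for p in pairs) / len(pairs)
    return surprise, pleasant
```

A candidate dish is then worth proposing when both numbers are high: pork with apple scores as familiar but pleasant, while pork with lemon is more surprising because the toy corpus never pairs them.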

More information:

16 November 2013

Human Touch Makes Robots Defter

Cornell engineers are helping humans and robots work together to find the best way to do a job, an approach called ‘coactive learning’. Modern industrial robots, like those on automobile assembly lines, have no brains, just memory. An operator programs the robot to move through the desired action; the robot can then repeat the exact same action every time a car goes by. But off the assembly line, things get complicated: A personal robot working in a home has to handle tomatoes more gently than canned goods. If it needs to pick up and use a sharp kitchen knife, it should be smart enough to keep the blade away from humans. Researchers set out to teach a robot to work on a supermarket checkout line, modifying a Baxter robot from Rethink Robotics in Boston, designed for assembly line work. It can be programmed by moving its arms through an action, but also offers a mode where a human can make adjustments while an action is in progress. The Baxter’s arms have two elbows and a rotating wrist, so it’s not always obvious to a human operator how best to move the arms to accomplish a particular task. So the researchers, drawing on previous work, added programming that lets the robot plan its own motions. It displays three possible trajectories on a touch screen where the operator can select the one that looks best.

Then humans can give corrective feedback. As the robot executes its movements, the operator can intervene, guiding the arms to fine-tune the trajectory. The robot has what the researchers call a ‘zero-G’ mode, where the robot's arms hold their position against gravity but allow the operator to move them. The first correction may not be the best one, but it may be slightly better. The learning algorithm the researchers provided allows the robot to learn incrementally, refining its trajectory a little more each time the human operator makes adjustments. Even with weak but incrementally correct feedback from the user, the robot arrives at an optimal movement. The robot learns to associate a particular trajectory with each type of object. A quick flip over might be the fastest way to move a cereal box, but that wouldn’t work with a carton of eggs. Also, since eggs are fragile, the robot is taught that they shouldn’t be lifted far above the counter. Likewise, the robot learns that sharp objects shouldn’t be moved in a wide swing; they are held in close, away from people. In tests with users who were not part of the research team, most users were able to train the robot successfully on a particular task with just five rounds of corrective feedback. The robots also were able to generalize what they learned, adjusting when the object, the environment or both were changed.
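The incremental-update idea maps onto what the coactive-learning literature calls a preference perceptron: score candidate trajectories with a weight vector, let the user supply a slightly better trajectory, and move the weights toward the difference. A sketch with invented two-feature trajectories, not the Baxter system's actual features or update rule:

```python
import numpy as np

def coactive_learning(candidates, user_improves, w0, rounds=10, lr=0.5):
    """Preference-perceptron sketch of coactive learning: the robot
    proposes its highest-scoring trajectory, the user nudges it to a
    slightly better one, and the weights move toward the difference."""
    w = np.array(w0, dtype=float)
    for _ in range(rounds):
        proposed = max(candidates, key=lambda f: float(w @ f))
        improved = user_improves(proposed)       # weak, incremental feedback
        w += lr * (improved - proposed)
    return w

# Toy trajectories described by two features (e.g. clearance, smoothness).
candidates = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1), (1, 1)]]
u_true = np.array([1.0, 1.0])                    # user's hidden preference

def user_improves(proposed):
    """Return the *smallest* strict improvement: the feedback is only
    slightly better, never the optimum itself."""
    better = [f for f in candidates if u_true @ f > u_true @ proposed]
    return min(better, key=lambda f: float(u_true @ f)) if better else proposed

w = coactive_learning(candidates, user_improves, w0=[0.0, 0.0])
final_choice = max(candidates, key=lambda f: float(w @ f))
```

Even though every piece of feedback is only marginally better than the robot's proposal, the weights converge to prefer the trajectory the user actually wants, mirroring the article's point that weak but incrementally correct corrections suffice.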

More information:

15 November 2013

Holograms Set for Greatness

A new technique that combines optical plates to manipulate laser light improves the quality of holograms. Holography makes use of the peculiar properties of laser light to record and later recreate three-dimensional images, adding depth to conventionally flat pictures. Researchers at the A*STAR Data Storage Institute, Singapore, have now developed a method for increasing the number of pixels that constitute a hologram, thus enabling larger and more realistic three-dimensional (3D) images. Holographic imaging works by passing a laser beam through a plate on which an encoded pattern, known as a hologram, is stored or recorded. The laser light scatters from features on the plate in a way that gives the impression of a real three-dimensional object. With the help of a scanning mirror, the system built by the researchers combines 24 of these plates to generate a hologram consisting of 377.5 million pixels. A previous approach by a different team only managed to achieve approximately 100 million pixels.

The researchers patterned the plates, made of a liquid-crystal material on a silicon substrate, with a computer-generated hologram. Each plate, also called a spatial light modulator (SLM), consisted of an array of 1,280 by 1,024 pixels. Simply stacking the plates to increase the total number of pixels, however, created ‘optical gaps’ between them. As a workaround, the researchers tiled 24 SLMs into an 8 by 3 array on two perpendicular mounting plates separated by an optical beam splitter. They then utilized a scanning mirror to direct the laser light from the combined SLM array to several predetermined positions. The team demonstrated that by shining green laser light onto this composite holographic plate, they could create 3D objects that replayed at a rate of 60 FPS in a 10 by 3-inch display window. This simple approach for increasing the pixel count of holograms should help researchers develop 3D holographic displays that are much more realistic than those commercially available.
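The pixel arithmetic is worth checking. The number of mirror positions is not stated above; twelve is an inference that reconciles the per-SLM resolution with the reported total, so treat it as an assumption:

```python
# Pixel budget of the tiled-SLM hologram described above.  The article
# gives 24 SLMs of 1280 x 1024 pixels each; the scanning mirror then
# redirects the array to several positions.  Twelve positions (our
# inference, not stated in the text) reproduces the reported total.
SLM_PIXELS = 1280 * 1024           # one spatial light modulator
ARRAY_PIXELS = 24 * SLM_PIXELS     # the 8-by-3 tiled array
SCAN_POSITIONS = 12                # assumed number of mirror positions
TOTAL = ARRAY_PIXELS * SCAN_POSITIONS

print(TOTAL)   # 377487360, about the 377.5 million pixels reported
```

The same arithmetic shows why the mirror matters: the 24 plates alone supply only about 31.5 million pixels, less than a third of the previous team's 100 million, so the time-multiplexed scan positions do most of the work.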

More information:

12 November 2013

Gestural Interface for Smart Watches

If just thinking about using a tiny touch screen on a smart watch has your fingers cramping up, researchers at the University of California at Berkeley and Davis may soon offer some relief: they’re developing a tiny chip that uses ultrasound waves to detect a slew of gestures in three dimensions. The chip could be implanted in wearable gadgets.
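At its core, ultrasonic gesture sensing is pulse-echo ranging: emit a short ping and convert the echo's round-trip time into a distance, with several receivers then triangulating a position in three dimensions. A minimal sketch of the ranging step; Chirp's actual signal processing is not described above:

```python
SPEED_OF_SOUND = 343.0   # metres per second in air at about 20 C

def echo_distance_m(round_trip_s):
    """Distance to the reflecting hand: the ping travels out and back,
    so the one-way distance is half the round-trip path."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# An echo returning after 1 millisecond puts the hand about 17 cm away.
d = echo_distance_m(1e-3)
```

Because sound is so much slower than light, even microsecond-scale timing gives sub-millimetre range resolution, which is what lets a tiny, low-power chip resolve hand gestures near a watch.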

The technology, called Chirp, is slated to be spun out into its own company, Chirp Microsystems, to produce the chips and sell them to hardware manufacturers. They hope that Chirp will eventually be used in everything from helmet cams to smart watches—basically any electronic device you want to control but don’t have a convenient way to do so.

More information:

11 November 2013

Monkeys Use Minds to Control Avatar Arms

Most of us don’t think twice when we extend our arms to hug a friend or push a shopping cart—our limbs work together seamlessly to follow our mental commands. For researchers designing brain-controlled prosthetic limbs for people, however, this coordinated arm movement is a daunting technical challenge. A new study showing that monkeys can move two virtual limbs with only their brain activity is a major step toward achieving that goal, scientists say.

The brain controls movement by sending electrical signals to our muscles through nerve cells. When limb-connecting nerve cells are damaged or a limb is amputated, the brain is still able to produce those motion-inducing signals, but the limb can't receive them or simply doesn’t exist. In recent years, scientists have worked to create devices called brain-machine interfaces (BMIs) that can pick up these interrupted electrical signals and control the movements of a computer cursor or a real or virtual prosthetic.
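The decoding step inside a BMI is often a linear map from neural firing rates to intended movement, fit by regression. A sketch on synthetic data, with an invented tuning model rather than the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: each neuron's firing rate is a noisy linear
# function of the intended 2-D limb velocity (an invented tuning model).
n_neurons, n_samples = 30, 500
W_true = rng.normal(size=(n_neurons, 2))       # each neuron's tuning to (vx, vy)
velocity = rng.normal(size=(n_samples, 2))     # intended 2-D velocities
rates = velocity @ W_true.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Calibration: fit a linear decoder D by least squares so that
# velocity is approximately rates @ D, then read movement back out.
D, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ D
```

In a real interface the fitted decoder runs continuously, turning each new vector of firing rates into a velocity command for the cursor or prosthetic; controlling two virtual arms at once, as in the monkey study, means decoding two such velocity streams from the same population simultaneously.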

More information: