17 July 2013

Robot Mom vs Robot Butler

If you tickle a robot, it may not laugh, but you may still consider it humanlike -- depending on its role in your life, reports an international group of researchers. Designers and engineers assign robots specific roles, such as servant, caregiver, assistant or playmate. Researchers found that people expressed more positive feelings toward a robot that would take care of them than toward a robot that needed care. To determine how human perception of a robot changed based on its role, researchers observed 60 interactions between college students and Nao, a social robot developed by Aldebaran Robotics, a French company specializing in humanoid robots. 


Each interaction could go one of two ways. The human could help Nao calibrate its eyes, or Nao could examine the human's eyes like a concerned eye doctor and make suggestions to improve vision. Participants then filled out a questionnaire about their feelings toward Nao. Researchers used these answers to calculate the robot's perceived benefit and social presence in both scenarios. The research team found that when participants perceived a strong social presence, they considered the caregiving robot smarter than the robot in the alternate scenario. Participants were also more likely to attribute human qualities to the caregiving robot.
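To give a feel for the analysis step, here is a minimal Python sketch comparing questionnaire ratings between the two role conditions. Everything in it is invented for illustration (the ratings, the scales, and the condition names); the study's actual instrument and statistics are not described above.

# Hypothetical sketch: comparing questionnaire ratings across the two
# robot roles (robot cares for human vs. human cares for robot).
# All data values are invented for illustration.
from statistics import mean

# Each entry: (condition, social_presence, humanlike) on a 1-7 scale.
responses = [
    ("caregiver", 6, 6), ("caregiver", 5, 5), ("caregiver", 7, 6),
    ("care_receiver", 4, 3), ("care_receiver", 3, 4), ("care_receiver", 5, 4),
]

def condition_means(condition):
    rows = [r for r in responses if r[0] == condition]
    return mean(r[1] for r in rows), mean(r[2] for r in rows)

for cond in ("caregiver", "care_receiver"):
    presence, humanlike = condition_means(cond)
    print(f"{cond}: social presence {presence:.2f}, humanlike {humanlike:.2f}")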

More information:

14 July 2013

CGI Lighting and Scanning

Gaming and movie studios have in the past put up with CGI faces that had a wax-museum look, reminding viewers that the faces were anything but real, but this is a new day, with advanced technologies that can make faces look strikingly real. Computer-generated imagery (CGI) expertise can now perform facial-imaging wonders. A team of collaborators, with expertise that includes computational illumination and photography for graphics, has developed a technique to produce CGI faces that look true to life, down to the skin-cell level. Call it ultra-realistic skin simulation.

 
Researchers from Imperial College London and the University of Southern California are able to make a virtual face so realistic that the renderings capture it all: pores, blemishes, wrinkles, bumps, and shadows. They do this with a special lighting system and camera, simulating how light reflects off human skin. Each simulated light source is split into four rays: one bounces off the epidermis, and three penetrate the skin to different depths before being scattered. Using a special scanner, they took high-resolution images of skin from volunteers' cheeks, chins, and foreheads.
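The four-ray split can be pictured as a tiny lighting computation per surface point: one specular bounce off the epidermis plus three subsurface rays that attenuate with depth before scattering back out. The Python sketch below is a loose illustration of that idea only; the weights, depths, and absorption model are invented, not the researchers' actual simulation.

# Loose illustration of splitting each light sample into four rays:
# one surface (epidermal) reflection plus three subsurface rays that
# penetrate to different depths before scattering back out.
# All constants are invented for illustration.
import math

SURFACE_WEIGHT = 0.3               # fraction reflected at the epidermis
LAYER_DEPTHS_MM = (0.1, 0.5, 1.5)  # hypothetical scattering depths
ABSORPTION_PER_MM = 0.8            # hypothetical absorption coefficient

def skin_response(incoming_intensity):
    """Total light returned by one surface point."""
    # Ray 1: direct bounce off the epidermis.
    total = SURFACE_WEIGHT * incoming_intensity
    # Rays 2-4: penetrate, attenuate with depth, scatter back out.
    per_ray = (1.0 - SURFACE_WEIGHT) * incoming_intensity / len(LAYER_DEPTHS_MM)
    for depth in LAYER_DEPTHS_MM:
        attenuation = math.exp(-ABSORPTION_PER_MM * 2 * depth)  # down and back up
        total += per_ray * attenuation
    return total

print(f"returned light: {skin_response(1.0):.3f}")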

More information:

11 July 2013

Cheap, Color, Holographic Video

Researchers at MIT’s Media Lab have devised a new approach to generating holograms that could lead to color holographic-video displays far cheaper to manufacture than today’s experimental, monochromatic displays. The same technique could also increase the resolution of conventional 2D displays. Using the new technique, the researchers are building a prototype color holographic-video display whose resolution is roughly that of a standard-definition TV and which can update video images 30 times a second, fast enough to produce the illusion of motion. The heart of the display is an optical chip, resembling a microscope slide, that costs about $10. When light strikes an object with an irregular surface, it bounces off at a huge variety of angles, so that different aspects of the object are revealed when it’s viewed from different perspectives. In a hologram, a beam of light passes through a so-called diffraction fringe, which bends the light so that it, too, emerges at a host of different angles.
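The bending a diffraction fringe performs follows the standard grating equation, d sin(theta) = m * lambda: the finer the fringe pitch d, the larger the angle theta at which light of wavelength lambda emerges in order m. A quick Python sketch of that relationship, with illustrative values only:

# Grating equation d * sin(theta) = m * lambda: finer fringe pitch
# bends light through larger angles. Values are illustrative.
import math

def diffraction_angle_deg(pitch_nm, wavelength_nm, order=1):
    s = order * wavelength_nm / pitch_nm
    if abs(s) > 1:
        return None  # no propagating diffracted order at this pitch
    return math.degrees(math.asin(s))

for pitch in (2000, 1000, 600):  # nm; pitch must approach the wavelength
    angle = diffraction_angle_deg(pitch, wavelength_nm=532)  # green light
    print(f"pitch {pitch} nm -> first-order angle {angle:.1f} deg")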

 
One way to produce holographic video is to create the diffraction fringes from patterns displayed on an otherwise transparent screen. The problem with that approach is that the pixels of the diffraction pattern have to be as small as the wavelength of the light they’re bending, and most display technologies don’t happily shrink down that far. The team instead used a small crystal of a material called lithium niobate. Just beneath the crystal’s surface, microscopic channels known as waveguides are created, which confine the light traveling through them. Each waveguide corresponds to one row of pixels in the final image. Beams of red, green and blue light are sent down each waveguide, and the frequencies of the acoustic wave passing through the crystal determine which colors pass through and which are filtered out. Combining, say, red and blue to produce purple doesn’t require a separate waveguide for each color; it just requires a different acoustic-wave pattern.
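One way to picture the color mixing is as one acoustic tone per color the waveguide should pass: purple is simply the red tone and the blue tone superimposed in the same waveguide. The toy sketch below rests on that picture; the drive frequencies are invented placeholders, since the device's real values aren't given above.

# Toy sketch: one acoustic tone per color a waveguide should pass;
# superimposing tones mixes colors in a single waveguide.
# Frequencies are invented placeholders, not the device's real values.
import math

TONE_MHZ = {"red": 200.0, "green": 220.0, "blue": 240.0}  # hypothetical

def acoustic_pattern(colors, t_us):
    """Drive signal at time t (in microseconds) passing the given colors."""
    return sum(math.sin(2 * math.pi * TONE_MHZ[c] * t_us) for c in colors)

# Purple = red + blue in one waveguide: a different wave pattern,
# not a separate channel.
purple_wave = [acoustic_pattern(["red", "blue"], t / 1000.0) for t in range(5)]
print(purple_wave)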

More information:

09 July 2013

Robot that Sees and Maps

Researchers from the University of Bristol have developed the computer vision algorithms that enable Samsung’s latest humanoid robot, Roboray, to build real-time 3D visual maps and move around more efficiently. Using its cameras, the robot builds a map of reference points relative to its surroundings and is able to remember where it has been before. The ability to build visual maps quickly and anywhere is essential for autonomous robot navigation, particularly when the robot gets into places with no global positioning system (GPS) signal or other references. Roboray is one of the most advanced humanoid robots in the world, standing 140 cm tall and weighing 50 kg.


It has a stereo camera on its head and 53 actuators, including six for each leg and twelve for each hand. The robot walks in a more human-like manner using what is known as dynamic walking: the robot is effectively falling at every step, using gravity to carry it forward without much energy use. This is the way humans walk, and it contrasts with most other humanoid robots, which bend their knees to keep the centre of mass low and stable. This way of walking is also more challenging for the computer vision algorithms, because objects in the images move more quickly. The Bristol team, which has been collaborating with Samsung Electronics, was in charge of the computer vision aspects of 3D SLAM (simultaneous localization and mapping).
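The stereo camera is what makes the 3D mapping possible: depth comes from disparity via Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the pixel disparity of a matched feature. A minimal sketch of that calculation, with made-up camera parameters (Roboray's real calibration isn't given above):

# Minimal stereo-depth sketch: Z = f * B / d. Matched features with a
# disparity become 3D landmarks for the SLAM map. Camera parameters
# below are made up for illustration.
FOCAL_PX = 700.0    # hypothetical focal length in pixels
BASELINE_M = 0.12   # hypothetical distance between the two cameras

def depth_from_disparity(disparity_px):
    if disparity_px <= 0:
        return None  # feature at infinity or a bad match
    return FOCAL_PX * BASELINE_M / disparity_px

for disparity in (40.0, 10.0, 2.0):
    depth = depth_from_disparity(disparity)
    print(f"disparity {disparity:5.1f} px -> depth {depth:.2f} m")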

More information:

06 July 2013

Robots Hallucinate Humans

Recently, Cornell scientists have been teaching robots to use their imaginations to picture how a human would want a room organized. The research has been successful: algorithms that used hallucinated humans (which are the best sort of humans) to influence the placement of objects performed significantly better than other methods. The next step is labeling 3D point clouds obtained from RGB-D sensors by leveraging hallucinated people as context. A significant amount of research has investigated the relationships between objects and other objects. It's called semantic mapping, and it's very valuable in giving robots something like ‘intuition’ or ‘common sense’.

 
However, we tend to live human-centered lives, which means that the majority of our stuff tends to be human-centered too, and keeping this in mind helps to put objects in context. The other concept to deal with is that of object affordances. An affordance is a characteristic of an object that allows a human to do something with it. For example, a doorknob is an affordance that lets us open a door, and a handle on a coffee cup is an affordance that lets us pick it up and drink from it. There's plenty to be learned about the function of an object from how a human uses it, but if you don't have a human handy to interact with the object for you, hallucinating one up out of nowhere can serve a similar purpose.
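One way to picture the approach: sample hypothetical human positions in the room, then score each candidate object placement by how usable it would be from those positions, e.g. a mug scores well within arm's reach of a likely sitting spot. The sketch below invents all of the geometry, thresholds, and weights; it illustrates the idea only, not the actual Cornell algorithm.

# Illustrative-only sketch of scoring object placements against
# hallucinated human positions. All geometry and weights are invented.
import math

# Hypothetical sampled human locations (say, at the couch and the desk).
hallucinated_humans = [(1.0, 2.0), (4.0, 0.5)]
ARM_REACH_M = 0.8

def placement_score(spot):
    """Higher when the object sits near (but not on top of) a likely human."""
    best = 0.0
    for human in hallucinated_humans:
        d = math.hypot(spot[0] - human[0], spot[1] - human[1])
        if 0.2 < d <= ARM_REACH_M:  # comfortably reachable
            best = max(best, 1.0 - d / ARM_REACH_M)
    return best

candidates = [(1.5, 2.0), (3.0, 3.0), (4.4, 0.9)]
print(max(candidates, key=placement_score))  # best spot for, say, a mug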

More information:

05 July 2013

High Quality VR

A new wave of VR peripheral equipment is receiving press attention and thousands of pre-orders. Among the developers and proponents of VR we have spoken to, the consensus seems to be twofold. For one thing, the cost of the hardware has, in the past, been astronomical. Some of the latest gadgets are still priced well above a consumer-friendly level, such as the IGS Glove, a peripheral developed by Synertial that allows your virtual hand to flex and move exactly like your real hand, right down to intricate finger movements. Secondly, and perhaps most obviously, seamless, high-quality VR has been hard to create. Presence, although momentarily intense, is thought to be very easy to disrupt, hence the VR term ‘break in presence’, or BIP for short.

 
The application must keep you in this other world. But the greatest challenge for virtual reality has always been movement. The act of simply walking around a virtual space is still restricted by the awkward correlation of that virtual room to the real room in which a VR user is standing. This precise problem, though, is one that a diverse array of new peripherals is hoping to tackle. There's the Omni, a kind of stationary grooved dish and harness that you can walk and run on (your feet always return to the same spot); the newly launched WizDish, a similar but more affordable concept that uses anti-friction studded shoes; and finally the elaborate VirtuSphere, a 10-foot-high hollow ball that encases the VR player within its spherical design. All of them, however, have limitations.

More information:

04 July 2013

RoboCup

With the score tied 1-1, it's gone to a penalty shootout in a tense soccer match between teams from Israel and Australia. As the Australian goalkeeper in his red jersey braces for the shot, the Israeli striker pauses. Then he breaks into a dance instead of kicking the ball. Perhaps he can be forgiven: He's a robot, after all. Welcome to the RoboCup, where more than a thousand soccer-playing robots from forty countries have descended on the Dutch technology Mecca of Eindhoven this week with one goal in mind: beat the humans.


The tournament's mission is to defeat the human World Cup winners by 2050, creating technology along the way that will have applications far beyond the realm of sport. To achieve this, organizers have created multiple competition classes, including small robots, large robots, humanoid robots and even virtual robots, with plans to merge their techniques into a single squad of all-star androids capable of one day winning a man vs. machine matchup. For now, humanoid robots have difficulty keeping their balance, and the largest (human height) move more like, well, robots than world-class athletes.

More information:

03 July 2013

AI in Mental Healthcare

AI technology can be designed to accomplish specialized intelligent tasks, such as speech or facial recognition, or to emulate complex human-like intelligent behavior such as reasoning and language processing. AI systems that are capable of interacting with their environment and taking autonomous actions within it are called artificial intelligent agents. An emerging application of AI technology in the mental healthcare field is the use of such agents to provide training, consultation, and treatment services. Researchers at USC’s Institute for Creative Technologies, for example, are currently developing virtual mental health patients that converse with human trainees.


The continual advances in AI technologies and their application in mental healthcare have led to a concept called the ‘Super Clinician’. The ‘Super Clinician’ is an AI agent system that could take the form of either a virtual reality simulation or a humanoid robot. The system design entails the integration of several advanced technologies and capabilities, including natural language processing, computer vision, facial recognition, olfactory sensors, and even thermal imaging to detect temperature changes in patients. In the context of mental healthcare, one question that comes to my mind is whether caring and empathetic connections between humans and artificially intelligent care providers are possible.
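As a purely hypothetical sketch of what such an integration layer might look like, one could imagine an agent that fuses readings from its language, vision, and thermal modules into a single coarse flag. Every module, value, and threshold below is invented; nothing here reflects the actual USC systems.

# Purely hypothetical sketch of a multimodal loop in the spirit of the
# 'Super Clinician' concept. Every name and value here is invented.
from dataclasses import dataclass

@dataclass
class Observation:
    transcript: str           # from speech / language processing
    facial_affect: str        # from computer vision / facial recognition
    skin_temp_delta_c: float  # from thermal imaging

def assess(obs: Observation) -> str:
    """Combine modality readings into a coarse, illustrative flag."""
    signs = 0
    if "anxious" in obs.transcript.lower():
        signs += 1
    if obs.facial_affect in ("fearful", "distressed"):
        signs += 1
    if obs.skin_temp_delta_c > 0.5:  # invented threshold
        signs += 1
    return "flag for human clinician" if signs >= 2 else "continue session"

print(assess(Observation("I feel anxious today", "distressed", 0.7)))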

More information: