31 March 2013

Ministry of Education Invited Talk

On the 15th of March 2013, I gave an invited presentation (in Greek) to the Hellenic Ministry of Education, Religious Affairs, Culture and Sports. The General Secretariat for Lifelong Learning organized the first of a series of scientific workshops on IT in Adult Education, entitled “Introduction of innovation and the effective use of technology in Adult Education”.


I presented how new technologies such as online virtual environments and augmented reality can be used to assist learning in adult education. Two case studies were discussed: one on an online virtual environment supporting lecturing at Coventry University, and another on using online augmented reality for similar purposes.

More information:

29 March 2013

VS-GAMES '13 Conference

The 5th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games '13) will be hosted at Bournemouth University, UK, between the 11th and the 13th of September 2013. Following previous editions in Coventry (UK), Braga (Portugal), Athens (Greece) and Genoa (Italy), the 2013 conference will take place in the state-of-the-art Kimmeridge House building on Bournemouth University's main Talbot campus. The development and deployment of games with a purpose beyond entertainment is an exciting area with immense academic as well as commercial potential. This potential presents both immediate opportunities and numerous significant challenges to the parties involved, as a result of the relatively recent emergence and popularity of the medium. VS-Games 2013 aims to address the contemporary challenges that the increasingly cross-disciplinary communities involved in serious games are currently facing. This will be achieved, among other ways, through the comprehensive dissemination of successful case studies and development practices, the sharing of theories, conceptual frameworks and methodologies, and the discussion of evaluation approaches and their resulting studies.


The VS-Games 2013 organisers are seeking contributions from researchers, industry developers, practitioners and decision-makers that aim to advance the state of the art in all technologies related to serious games. The following topics are particularly encouraged, though they are not the only ones of interest to VS-Games 2013 and the list is by no means exhaustive: Game design; Virtual environments; Game-based learning methodologies; Mixed and augmented reality; Computer graphics; Gamification; Case studies/user studies for serious games and virtual worlds; Mobile gaming; Interactive storytelling; Application areas; AI for serious games; Educational/learning theories and their application; Visualization; Pervasive gaming; Human-computer interaction; User modeling; Alternate reality; Simulation; Platforms and tools. The authors of the best papers will be invited to write extended versions for inclusion in the Elsevier Entertainment Computing journal (subject to additional review) and IGI Global's International Journal of Game-Based Learning. Authors of selected technical articles with a focus on computer graphics will be invited to submit extended versions of their work to be considered for publication in Elsevier's Computers & Graphics journal. The paper submission deadline is the 8th of April 2013.


More information:

26 March 2013

Robot Meets World

When a robot is moving one of its limbs through free space, its behavior is well-described by a few simple equations. But as soon as it strikes something solid — when a walking robot’s foot hits the ground, or a grasping robot’s hand touches an object — those equations break down. Roboticists typically use ad hoc control strategies to negotiate collisions and then revert to their rigorous mathematical models when the robot begins to move again. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory are hoping to change that, with a new mathematical framework that unifies the analysis of both collisions and movement through free space. The work could lead to more efficient controllers for a wide range of robotic tasks, but it could also help guarantee the stability of control algorithms developed through trial and error — or of untried, but promising, new algorithms.
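To see why the equations break down at contact, it helps to look at a toy hybrid model: smooth differential equations govern free motion, while an instantaneous "reset map" replaces the velocity at the moment of impact. The sketch below is purely illustrative (a 1-D point mass under gravity, with an assumed restitution coefficient) and is not the MIT framework itself, but it shows the two regimes that such a framework must stitch together.

```python
# Toy hybrid system: a 1-D point mass falling under gravity. Free flight is
# smooth dynamics; contact with the ground is an instantaneous reset that
# replaces the velocity. All parameters are illustrative assumptions.

G = 9.81           # gravity (m/s^2)
RESTITUTION = 0.5  # fraction of speed kept after impact (assumed value)

def simulate(height, steps=2000, dt=0.001):
    """Integrate the smooth phase; apply the discrete impact map on contact."""
    y, v = height, 0.0
    impacts = 0
    for _ in range(steps):
        # Smooth phase: ordinary differential equations apply.
        v -= G * dt
        y += v * dt
        # Guard condition: crossing y = 0 while falling triggers the reset.
        if y <= 0.0 and v < 0.0:
            y = 0.0
            v = -RESTITUTION * v  # velocity jumps discontinuously
            impacts += 1
    return y, impacts

final_y, impacts = simulate(1.0)  # drop from 1 m, simulate 2 seconds
```

The stability analysis the paragraph above alludes to is hard precisely because the impact line makes the trajectory non-smooth: a controller proof must hold across every possible sequence of such resets, not just along one smooth solution.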


In a pair of recent papers, the researchers demonstrate both applications. At last year’s International Workshop on the Algorithmic Foundations of Robotics, they showed how their technique can improve trajectory planning in complex robots like the experimental FastRunner, an ostrich-like bipedal robot being built at the Florida Institute for Human and Machine Cognition. According to the researchers, FastRunner offers a good illustration of the problems posed by collision. Ordinarily, a roboticist trying to develop a controller for a bipedal robot would assume that the robot’s foot makes contact with the ground in some prescribed way: say, the heel strikes first; then the forefoot strikes; then the heel lifts. To prove the stability of a control system for a robot that’s colliding with the world, then, it’s necessary to evaluate every possible solution of the resulting equations.

More information:

19 March 2013

Robots in the Workplace

At MIT, a management robot is learning to run a factory and give orders to artificial co-workers, and a BakeBot robot is reading recipes, whipping together butter, sugar and flour, and putting the cookie mix in the oven. At the University of California at Berkeley, a robot can do laundry and then neatly fold T-shirts and towels. A wave of new robots, affordable and capable of accomplishing advanced human tasks, is being aimed at jobs higher in the workforce hierarchy. The consequences of this leap in technology loom large for American workers — and perhaps their managers, too. Back in the 1980s, when automated spray-painting and welding machines took hold in factories, some on the assembly line quickly discovered they had become obsolete. Today’s robots can do far more than their primitive, single-task ancestors. And there is a broad debate among economists, labor experts and companies over whether the trend will add good-paying jobs to the economy by helping firms run more efficiently or simply leave human workers out in the cold.


U.S. firms have already begun deploying some of these newer robots. General Electric has developed spiderlike robots to climb and maintain tall wind turbines. Kiva Systems, a company bought by Amazon.com, has orange ottoman-shaped robots that sweep across warehouse floors, pull products off shelves and deliver them for packaging. Some hospitals have begun employing robots that can move from room to room to dispense medicines to patients or deliver the advice of a doctor who is not on site. Many companies see such automation as the key to cutting costs and staying competitive. Sales of industrial robots rose 38 percent between 2010 and 2012 and are poised to bring in record revenue this year. Already on the market is Baxter, a robot developed by a former director of MIT’s lab. With red plastic arms and a cartoon face, it can do the job of two or more workers, simultaneously unpacking pipe fittings from a conveyor belt while it weighs and places mirrors into boxes. When a human blocks its path, Baxter stops, its eyes widen, and then it courteously gets out of the way.

More information:

09 March 2013

Blueprint for an Artificial Brain

Scientists have long dreamed of building a computer that would work like a brain, because a brain is far more energy-efficient than a computer, can learn by itself, and doesn’t need any programming. Researchers from Bielefeld University’s Faculty of Physics are experimenting with memristors – electronic microcomponents that imitate natural nerves. A year ago, they demonstrated this was feasible by constructing a memristor capable of learning. Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered the electronic equivalent of the synapse. Synapses are the bridges across which nerve cells (neurons) contact each other; their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.

 
Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it. Memristors are particularly suitable for building an artificial brain – a new generation of computers. They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves. Researchers take the classic psychological experiment with Pavlov’s dog as an example. The experiment shows how you can link the natural reaction to a stimulus that elicits a reflex response with what is initially a neutral stimulus – this is how learning takes place.
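As a toy illustration of this use-dependent strengthening, the sketch below models a memristive synapse whose conductance grows with the current that has flowed through it, and replays the Pavlov-style pairing described above: a "bell" input is initially too weak to trigger a response, but repeated conditioning pulses strengthen its synapse until the bell alone crosses the response threshold. The class, rates and threshold are illustrative assumptions, not properties of the Bielefeld device.

```python
# Toy model of a memristive synapse: its conductance ("weight") grows
# the more current has flowed through it, mimicking synaptic strengthening.
# All parameters are illustrative assumptions.

class Memristor:
    def __init__(self, conductance=0.1, rate=0.2, max_g=1.0):
        self.g = conductance  # current conductance (synaptic strength)
        self.rate = rate      # how quickly use strengthens the device
        self.max_g = max_g    # physical saturation limit

    def pulse(self, voltage):
        """Pass a voltage pulse; return the current, then strengthen."""
        current = self.g * voltage  # Ohm-like response
        self.g = min(self.max_g, self.g + self.rate * abs(voltage))
        return current

THRESHOLD = 0.5  # assumed response threshold

def responds(synapse, voltage=1.0):
    return synapse.g * voltage >= THRESHOLD

# Pavlov-style conditioning of an initially neutral "bell" input.
bell = Memristor(conductance=0.1)
before = responds(bell)   # bell alone elicits no response at first
for _ in range(5):        # conditioning trials: bell paired with "food"
    bell.pulse(1.0)
after = responds(bell)    # bell alone now triggers the response
```

The point of the sketch is the state variable: like the physical device, the model's behaviour depends on the history of current through it, which is exactly what a learning synapse needs.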

More information:

05 March 2013

SpaceTop - 3D Desktop

The history of computer revolutions will show a logical progression from the Mac to the iPad to something like this SpaceTop 3D desktop. Last year, researchers at the Massachusetts Institute of Technology developed ZeroN, a levitating 3D ball that can record and replay how it is moved around by a user. Their latest environment is a 3D computer interface that allows a user to ‘reach inside’ a computer screen and grab web pages, documents, and videos like real-world objects. More advanced tasks can be triggered with hand gestures. The system is powered by a transparent LED display and a system of two cameras, one tracking the user’s gestures and the other watching the user’s eyes to assess gaze and adjust the perspective on the projection.


SpaceTop weaves these two threads together, joining a 3D interface with 3D gesture controls, a smart convergence that will likely become more common. SpaceTop and ZeroN are part of a broader shift toward interfaces we can grab with our hands. Humans seem to prefer collaborating via physical interfaces; think of a scale model, map, or whiteboard. People also like interacting in multiple modalities; think of reading a book, underlining words and scribbling in the margins in pencil, and taking separate notes on a pad.

More information:

03 March 2013

Teaching Robots Lateral Thinking

Many commercial robotic arms perform what roboticists call ‘pick and place’ tasks: The arm picks up an object in one location and places it in another. Usually, the objects — say, automobile components along an assembly line — are positioned so that the arm can easily grasp them; the appendage that does the grasping may even be tailored to the objects’ shape. General-purpose household robots, however, would have to be able to manipulate objects of any shape, left in any location. And today, commercially available robots don’t have anything like the dexterity of the human hand.


Most experimental general-purpose robots use a motion-planning algorithm called the rapidly exploring random tree, which maps out a limited number of collision-free trajectories through the robot’s environment — rather like a subway map overlaid on the map of a city. A sophisticated-enough robot might have arms with seven different joints; if the robot is also mounted on a mobile base — as was the Willow Garage PR2 that the MIT researchers (Computer Science and Artificial Intelligence Laboratory) used — then checking for collisions could mean searching a 10-dimensional space.
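As a rough illustration of the idea, the sketch below implements a minimal 2D rapidly exploring random tree: sample a random point, extend the nearest tree node a fixed step toward it, reject steps that collide, and stop once the tree reaches the goal. The workspace bounds, step size and goal bias are illustrative assumptions; a real planner like the PR2's would search the robot's full configuration space rather than the plane.

```python
import math, random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
    """Minimal 2-D rapidly exploring random tree (illustrative sketch).

    start, goal: (x, y) tuples; is_free(p) returns True if point p is
    collision-free. Returns a list of waypoints from start to the goal
    region, or None if no path was found within the iteration budget.
    """
    random.seed(0)  # deterministic for demonstration
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        # Sample a random point (occasionally biased toward the goal).
        sample = goal if random.random() < 0.05 else (
            random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest tree node a fixed step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # reject nodes that land inside an obstacle
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, p = [new], near
            while p is not None:
                path.append(p)
                p = parent[p]
            return path[::-1]
    return None

# A disc obstacle at (5, 5); plan from one corner to the other around it.
free = lambda p: math.dist(p, (5, 5)) > 1.5
path = rrt((1, 1), (9, 9), free)
```

The "subway map" analogy in the paragraph above corresponds to the `nodes`/`parent` tree: the planner never models the whole space, only a sparse set of collision-free branches through it.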

More information:


02 March 2013

Brain-to-Brain Interface

A brain-to-brain interface (BTBI) enabled a real-time transfer of behaviorally meaningful sensorimotor information between the brains of two rats. In this BTBI, an ‘encoder’ rat performed sensorimotor tasks that required it to select from two choices of tactile or visual stimuli. While the encoder rat performed the task, samples of its cortical activity were transmitted to matching cortical areas of a ‘decoder’ rat using intracortical microstimulation (ICMS). 


The decoder rat learned to make similar behavioral selections, guided solely by the information provided by the encoder rat's brain. These results demonstrated that a complex system was formed by coupling the animals' brains, suggesting that BTBIs can enable dyads or networks of animals' brains to exchange, process, and store information and, hence, serve as the basis for studies of novel types of social interaction and for biological computing devices.

More information: