
Robots aren’t coming to life quite yet

Stanford announced a new generation of artificial intelligence technology that continues to inch robotics closer toward self-awareness. Don’t get overly excited yet, though. The robot is still a tad on the simplistic side compared to the sentient robots of popular culture, but it demonstrates the success of a paradigm shift in artificial intelligence that will surely steer engineers in the right direction.

The main difference between the new robot, called the Stanford Artificial Intelligence Robot, or STAIR for short, and previous efforts in artificial intelligence is that STAIR adapts and learns as it bumbles around. Machine learning has been implemented before, but usually as an application or as a feature of a larger application. Programming mobile robots capable of performing tasks and manipulating objects is a relatively new trend. STAIR responds to voice commands by traversing a room and using its extendable arms to interact with objects. It comes complete with cameras and other sensors, giving it the ability to avoid objects that might or might not have been present the last time it crossed an area.

The true power of machine learning is in its scalability. The author of the ‘Computerworld’ article that introduced STAIR uses the chess computers of the 1990s as an example. IBM’s Deep Blue was intelligent enough to beat human chess masters, but its smarts had to be manually encoded. STAIR and robots like it begin with a very simple algorithm that is, initially, extremely dumb. The algorithm takes in data from its surroundings, notices outcomes and corrects its future behavior. Not only does it correct its behavior, but it also remembers that it got into trouble and takes steps in future decisions to avoid making the same mistakes.
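The trial-and-error loop described above can be sketched in a few lines of code. This is a hypothetical illustration of the general idea, not STAIR’s actual software: the robot tries an action, observes the outcome, and remembers its mistakes so it avoids repeating them.

```python
# Illustrative sketch only -- shows the act/observe/correct loop
# described above, not the real STAIR system.
import random

class SimpleLearner:
    def __init__(self, actions):
        self.actions = actions
        self.bad = set()  # (state, action) pairs that led to trouble

    def choose(self, state):
        # Prefer actions this state has not already punished.
        options = [a for a in self.actions if (state, a) not in self.bad]
        return random.choice(options or self.actions)

    def record(self, state, action, success):
        # Remember mistakes so future decisions avoid them.
        if not success:
            self.bad.add((state, action))

learner = SimpleLearner(["forward", "left", "right"])
learner.record("doorway", "forward", success=False)  # bumped into a door
print(learner.choose("doorway"))  # no longer picks "forward" at the doorway
```

Deep Blue’s approach would instead require a hand-written rule for every doorway in advance; the learner above starts with no rules at all and accumulates them from experience.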
Machine learning is a significant advance in the paradigm of artificial intelligence programming, as Deep Blue’s method of programming quickly becomes too complex to be feasible. Imagine pre-programming a robot to make all of the decisions you make and react to all of the stimuli you encounter during your brief walk to class every day.

So if it seems we’re on to something, how much longer will I have to wait until robots can approximate human behavior? There is still a long way to go to get to Robby from ‘I, Robot.’ Machine-learning algorithms depend on constant and accurate data. Whereas shopping Web sites take in their data from your decisions and purchases, robots must rely on their sensory technology. Active research is always required to create new sensors that give robotic brains new insights into the world around them. Existing sensors can also be made more accurate and more discerning. More intelligent robots must, after all, be able to distinguish objects. Navigating a room requires the ability to accurately estimate an object’s location and the robot’s own speed. Manipulating objects requires accurately estimating the depth of the environment in front of a camera sensor, as well as estimating object size and weight.

STAIR displays the trend in robotics research toward more accurate and more sensitive sensors. To successfully manipulate its environment, STAIR takes advantage of a camera sensor, which gives it the ability not only to navigate around a room but also to locate and evaluate the dimensions of an object sitting on a lab table. STAIR also comes complete with audio equipment sensitive enough to reliably respond to voice commands. STAIR and research efforts like it will continue to develop machine-learning algorithms. Robots will continue to tighten their grip on reality, and their abilities to manipulate the world around them will continue to increase.
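The point about constant, accurate sensor data can be made concrete with a toy example. Real robots use far more sophisticated filtering, but as a hypothetical sketch, a simple exponential moving average shows how repeated noisy range readings get turned into a usable distance estimate:

```python
# Hypothetical illustration: smoothing noisy range-sensor readings
# with an exponential moving average. Not STAIR's actual pipeline.
def smooth(readings, alpha=0.3):
    """Blend each new reading into the running estimate."""
    estimate = readings[0]
    for r in readings[1:]:
        estimate = alpha * r + (1 - alpha) * estimate
    return estimate

# Simulated noisy readings of a wall roughly 2.0 meters away.
print(round(smooth([2.1, 1.9, 2.2, 1.8, 2.0]), 2))
```

The smoothed estimate stays near the true distance even though no individual reading is exactly right, which is the property navigation and manipulation both depend on.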
It may be a long way off until a robot becomes your friend and is able to approximate the behavior and intellect of a human, but machine-learning algorithms have already been applied to produce robots capable of acting like pets. Now Stanford just needs to develop a workable furry robotics frame into which to cram its electronics.

Pitt News Staff
