Monday, May 4, 2020

Are You Watching? 3 Game Changers in Robotics



#1: Putting Humans in the Robot Loop – Game Changer

In these unique times, all things ‘robot’ have begun to move very fast. The robot adoption that business has resisted for the last decade is fast becoming a priority. Robots once dismissed as job killers suddenly look like a brilliant solution to tasks that are dull, dangerous, dirty, and toxic. A crisis will have that effect.
The point is that we now need more robots working as quickly, efficiently, and effectively as possible. The truth, however, is that the world remains a complicated place for a robot. There are problematic moments when even a robot needs a helpful human hand. We now realize that injecting human intelligence by positioning a ‘human in the robot loop’ makes a big difference. A ‘live’ human-robot interface link is proving to be a positive, landscape-changing factor in successful robot deployments.
Being able to inject human intelligence into and through a robot via a human-robot interface link, especially at the right or critical moment, has proven highly beneficial. It may be as simple as helping a robot get back onto its ‘map’, steering it around an unforeseen impediment or obstacle, or taking over a conversation when an AI-powered retail robot has run out of pre-programmed knowledge and expertise.
Millions of robots are already deployed, and the number will continue to grow. There are those who predict that robots will at some point outnumber cell phones as their ubiquity increases in the newly emerging economic and social fabric. At present, however, we constantly hear that robots are not ready for prime time. And, to a great extent, that is true: reality does not yet meet expectations. We expect a lot of our robots, and for robot developers and users alike, the stakes are high. Artificial intelligence, machine learning, and deep learning remain essential elements of future robot-based solutions. Adopting a ‘human-robot interface link’, a ‘human in the loop’ with the robot via cloud-based software, offers an immediate and powerfully functional solution to the demand for rapidly expanding the use and deployment of robots.
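To make the pattern concrete, here is a minimal Python sketch of a human-in-the-loop control handoff: the robot acts autonomously while its confidence is high and escalates to a remote operator when it is not. The function names and the confidence threshold are illustrative assumptions, not any vendor's actual API.

import random

CONFIDENCE_THRESHOLD = 0.75  # below this, the robot asks a human for help

def plan_next_action(observation):
    """Stand-in for the robot's autonomy stack: returns (action, confidence)."""
    return "move_forward", random.random()

def request_human_help(observation):
    """Stand-in for a cloud-based operator console where a human picks the action."""
    print(f"Escalating to operator: {observation}")
    return "teleop_override"

def control_loop(observations):
    for obs in observations:
        action, confidence = plan_next_action(obs)
        if confidence < CONFIDENCE_THRESHOLD:
            # The robot is off its map or blocked: inject human intelligence.
            action = request_human_help(obs)
        print(f"{obs}: executing {action} (confidence={confidence:.2f})")

control_loop(["hallway", "unmapped obstacle", "lobby"])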

#2: Cloud-Based Robotics Software: Setting the Stage for Robot Ubiquity – Game Changer

Robot developers are fast coming to understand that fully comprehensive, fully autonomous, multi-purpose, multi-functional robots are not within their grasp if they rely on the on-board computing power of their hardware platforms alone. The increasing sophistication of current robots is primarily the result of access to powerful cloud-based computing and software. As a result, a whole new class of software and services providers has evolved, focused on creating ‘cloud-based robotics’ software platforms designed to meet the growing needs for rapid application development and for monitoring, controlling, and collecting data from robots deployed in fleets and at scale.
Cloud-based robotics software will mature in at least these three ways:

A.   Software and services that allow robot developers to focus their engineering talent and resources on their platform's unique attributes while relying on off-the-shelf software for the non-unique ones. Using this class of software lowers development costs, speeds time to market, and increases the reliability, and thus the ROI, of robotic platforms.

B.   Introduction of Robot Access Interface Layer (RAIL) software that allows ‘the common person’ to use and control robots and create their own personal applications. Controlling the attributes of robot behavior has until recently remained beyond the reach of the population at large. But that is changing with software that can be placed on a robot and interface with its primary functionalities, such as speaking, moving, connecting to other applications, and connecting to the IoT devices that control homes and monitor health (see the sketch after this list).
C.   Software that ushers in an entirely new concept of robots: robots that do not need to be embodied as hardware devices on wheels in order to be of service and value. There are now soft robots. Imagine an animated, avatar-style robot creature that is itself AI-smart and every bit as capable of interaction as a hardware-based robot. Here we have animated creatures on a screen that are sensitive to touch, can recognize faces, recognize emotions, and hold an engaging dialogue. These robot creatures appear and act as if they are alive, and you sense that they recognize you actually exist. More importantly, delivered via information-style kiosks (‘Mirror, mirror on the wall…’) or vibrantly animated on reactive, flexible robot arms, these robots are scalable in a future world that might otherwise be awash in mechanical robots running all over the place and into each other.
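As a rough illustration of the RAIL idea in item B, here is a minimal Python sketch of what such a layer might expose. The class and method names are hypothetical placeholders, not any actual product's API.

class RobotAccessInterface:
    """A thin, hypothetical layer over a robot's primary functionalities."""

    def speak(self, text):
        # Would call the robot's text-to-speech engine.
        print(f"[TTS] {text}")

    def move_to(self, waypoint):
        # Would hand the waypoint to the robot's navigation stack.
        print(f"[NAV] heading to {waypoint}")

    def trigger_iot(self, device, command):
        # Would relay a command to a home or health IoT device.
        print(f"[IOT] {device} <- {command}")

# With such a layer, an end user's 'personal application' can be a few lines:
robot = RobotAccessInterface()
robot.speak("Good morning. Checking the thermostat.")
robot.trigger_iot("thermostat", "set to 21 C")
robot.move_to("kitchen")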

#3: Coming Sub-20ms Network Latency – A Game Changer

On April 23, 2020, the U.S. FCC voted to open the 6 GHz band for unlicensed use, clearing the way for Wi-Fi 6E. Expanding the Wi-Fi spectrum in this way means up to four times more capacity, roughly 40% higher data throughput, and greater multi-streaming capacity. Add 5G telecommunications and the network-slicing capabilities of software-defined networks, and we are pressing toward network latencies that challenge human neural networks in speed, and thus toward ‘seamless’ human-robot communications.
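As a back-of-the-envelope illustration of why a 20ms budget matters for a live human-robot link, the following Python sketch times a TCP connection as a crude proxy for round-trip latency and gates a hypothetical teleoperation session on that budget. The endpoint and the threshold are illustrative assumptions.

import socket
import time

TELEOP_BUDGET_S = 0.020  # 20 ms round-trip budget for 'seamless' control

def round_trip_time(host, port):
    # Time a TCP connect as a rough proxy for network round-trip latency.
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=1.0):
        pass
    return time.perf_counter() - start

rtt = round_trip_time("example.com", 80)
verdict = "teleop OK" if rtt <= TELEOP_BUDGET_S else "fall back to autonomy"
print(f"RTT: {rtt * 1000:.1f} ms -> {verdict}")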
For those of us who have been working in robotics, these times are proving to be the most energizing yet, given the role robots are playing today in fighting the COVID-19 virus and the role they are expected to play in our post-crisis world.

Michael D. Radice is Chairman of the Technology Advisory Board for ChartaCloud ROBOTTECA (www.robotteca.com and www.chartacloudrobotics.com). Mike can be reached by e-mail at mike@chartacloud.com.




Thursday, March 5, 2020

Artificial Intelligence Gets A Face



SPooN.ai: Artificial Creatures Deliver Immersive User Engagement in a Voice/AI-Powered World

Author: Michael D. Radice, Managing Director, ChartaCloud Robotics LLC.

Do you feel connected to the technology you use? Wouldn't it be nice to know that the technology you are using recognizes you as a real and unique person, and not just another technology system or perhaps a robot? This is the first rule of engagement that inspired and drove the development of a new digital interface technology in the age of Artificial Intelligence (AI) products and voice-powered services.
To frame this discussion more fully, we need to take a moment to reflect upon our experience with robots. The emergence of robots, especially ‘humanoid style’ robots, has taught us a great many lessons. Interaction and engagement expectations for humanoid robots (i.e., the human-robot interface, or HRI) were and remain high, and today's robots struggle to meet them. Robots are, however, amazingly powerful in at least two respects. One, they excel in the power of attraction: they can attract and gather an audience. Two, they can be seductive in their anthropomorphic attributes: people like to think, and want to believe, that they are alive. The point is that robots are hardware technologies, and the current state of the robot interface leaves us wanting more. The best I have experienced thus far is the seductive power of the NAO humanoid-style robot; its design and animated engagement, using what is called autonomous life, does proffer powerful engagement. These robot engagement experiences provided the stimulus for a new style of interface to digital technology, one that meets real-life personal engagement expectations. AI products are robots of a different sort.
We have moved fast past the point where a breakthrough in the interface to digital technology became necessary. AI, voice-powered interaction, machine learning, facial recognition, and emotional discernment are the technologies driving the demand for a new unified interface. For product developers, the challenge is even greater: how do you create an application interface that embraces so many disparate interaction elements? The forces pushing and pulling the need for a new model of interface to AI-powered digital technologies have, in my opinion, become irresistible. With 150 million users speaking to a growing share of their daily AI-driven technology, the stakes for creating a breakthrough keep rising; the creation of a new unified interface is becoming a winner-take-all proposition. The ‘mouse’ won't get us there. The stylus was never the end-all, be-all. Touch-screen interfaces work well, but many times they too can be problematic. Chatbots are, well, just that: chatbots. Infobots are very much solo info-point devices giving square answers to round questions. Technology is now capable of seeing you, knowing who you are, discerning a lot about your emotional state, and knowing your experiential preferences. For example, what will be the defining attributes for delivering AI-driven services in collective spaces like transportation centers, hospitals, office buildings, and shopping malls? We know for sure that they will be heavily formulated as knowledge-based, experience-driven AI services that learn.
So, here come the ‘artificial creatures’ and the Oxytocin Element

For further insight, we can look around and note that many of mankind's most powerful inventions were inspired by and derived from the biological world. Outside of person-to-person bonding, is there an example of stronger bonding than that between people and their pets? What is the bonding attribute that generates such an instant, warm, and comfortable reaction in our brains? When we experience such a warm encounter with a pet, or yes, a person, our brains release a chemical called oxytocin. While oxytocin helps cement bonds between people, it also, simply stated, makes us feel good. Hence another clue to defining the future AI interface: its use must result in a positive sense of personal interaction. An understanding of all this brain functionality, what I call ‘brain tech’, together with the power of biological design and what I now refer to as zoomorphic attributes, has become a central and powerful element used by the creators of the new universal AI interface. I have seen it and used it, and it is called SPooN.
This is where the ‘artificial creatures’ created by SPooN enter the scene. They are called SPooNys. Think of SPooNys as AI soft robots, or smart avatars, that actually possess the capacity to be your interface to all of your technology. A SPooNy is smart, being driven by AI and empowered with facial and emotional recognition to help guide the interaction. A SPooNy takes on the persona of an artificial creature in the form of a soft robot creature. One of them looks like this.



It has eyes that follow you. It has facial responses that engage you with its zoomorphic character. It can sense and respond to the user.
A SPooNy can live on any digital device, whether a personal device or an information kiosk.
Integrated into the creature's face are 11 embedded dynamic attributes that create the level of personal engagement that makes SPooNy so powerful.
And yes, SPooNy speaks multiple languages: Chinese, English, French, Japanese, and Spanish are already available, with more to come.
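For readers curious how an ‘eyes that follow you’ behavior can work in principle, here is a minimal Python sketch that uses OpenCV's stock face detector to steer an avatar's gaze toward a detected face. This illustrates the general technique only; it is in no way SPooN's actual implementation.

import cv2

# Haar cascade shipped with OpenCV for frontal-face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Gaze target: normalized horizontal position of the face center,
        # which an avatar renderer could use to turn its eyes.
        gaze = (x + w / 2) / frame.shape[1]
        print(f"avatar gaze -> {gaze:.2f}")  # 0.0 = far left, 1.0 = far right
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()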

Here is an implementation of a SPooNy, ‘living, moving, following, and reaching out’, deployed on a robotic armature: a powerful engagement mode for hospitality, retail, and targeted use points such as health care. Like a robot, it attracts a crowd, and tests show it is even more powerful at engaging a person than a robot is.

Here is a SPooNy deployed in a six-foot-high information kiosk.

This info ‘totem’ makes sense in what I refer to as the ‘collectives’ environment, as the following discussion describes. Think of places like office buildings, transportation centers, hospitals, and hotels as large, complex collectives. These collectives are made up of active, internally changing elements, such as individuals, trains, buses, and taxis, and of passive elements, such as office spaces, lobbies, mechanical centers, stores, and restaurants. Together they create the entire collective entity. SPooNy is a universal digital interface that can embrace a person's AI- and voice-driven interaction(1) with any or all of the complex elements that comprise the ‘collective’, creating an AI-driven kiosk with depth, a face, and a voice that can sustain a relationship-based, immersive engagement with a person.
With an AI-powered SPooNy, collectives can take on a reflective engagement persona, sensing the needs and desires of the person with whom SPooNy is engaging. SPooNy can be the unique face of the collective. SPooNy embraces and conveys a collective's entire persona so that people can interact with the collective either as a single entity or on a ‘person to person’ basis.
Having experienced SPooNy firsthand, I know that AI now has a face: SPooNy.
SPooNy is a product of SPooN.ai, Paris, France. More information about SPooNy can be found at www.robotteca.com.
(1)   Consider the power of this voice/conversational interface in providing ADA-sanctioned service assistance.
Michael Radice is Chairman of the Technology Advisory Board for ChartaCloud’s ROBOTTECA.COM and can be reached at info@chartacloud.com | ph: 603-379-9148