Friday, November 13, 2020

8 Things That Successful Social Robot Developers Do.

1. Focus deeply on specific use case target solutions.

 

Too many potential adopters are led to believe that a social robot can come out of the box and do most anything and everything. Talking to markets in broad terms, serving retail, hotels, schools and healthcare, is so overused that the market is numb to the pitch. "Tell me specifically how and why I should use this or that robot, and the benefits and contributions it will make." Too many robot developers rely on the serendipity of the market to identify, on its own, the potential and value of adopting a robot. Post-sale, the inevitability of unmet customer expectations leads to mismatch and dissatisfaction.

 

2. Invest in and provide proven business and consumer ROI models.

 

Business decisions, and most personal ones, are based on benefits; read 'ROI'. Potential customers need at least a starting point from which to assess, to model out, the financial impact and contribution the robot can provide. Even the 'soft' metrics for measuring the impact social robots deliver can and should be quantified; for example, the quality-of-life improvements for elders, or the reduction of pain and anxiety in children, can be established and valued.
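To make this concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical assumption for illustration, not vendor data; the point is that a developer should hand the customer a starting model like this, with credible, proven numbers filled in.

```python
# Minimal, illustrative ROI sketch for a social robot deployment.
# Every figure below is a hypothetical assumption for demonstration only.

robot_cost = 25_000            # purchase price in USD (assumed)
annual_support = 3_000         # service/support contract per year (assumed)
years = 3                      # evaluation horizon

labor_rate = 18.0              # fully loaded hourly rate offset by the robot (assumed)
hours_offset_per_week = 20     # greeter/FAQ hours the robot absorbs (assumed)

# 'Soft' benefits can and should be quantified too, e.g. a survey-measured
# lift in satisfaction translated into retained revenue (assumed value).
soft_benefit_per_year = 4_000

hard_benefit_per_year = labor_rate * hours_offset_per_week * 52
total_cost = robot_cost + annual_support * years
total_benefit = (hard_benefit_per_year + soft_benefit_per_year) * years

roi = (total_benefit - total_cost) / total_cost
payback_years = total_cost / (hard_benefit_per_year + soft_benefit_per_year)

print(f"3-year ROI: {roi:.0%}, payback: {payback_years:.1f} years")
```

Swap in real service hours, labor rates and survey-derived 'soft' values, and a starting model like this becomes a customer-ready ROI conversation.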

 

3. Face head-on the regulations, liability and ethics issues that stifle robot adoption and deployment, and provide solutions and guidelines.

 

Recognize that introducing a robot, for any use, brings with it real concerns about potential mishaps, security or privacy breaches and ethics. In some industries the exposure is general liability, for example if a robot spills hot coffee on a customer. In other industries the regulations are significantly more rigid, such as HIPAA privacy rules in healthcare. These issues cannot be ignored, overlooked or left for the customer to navigate alone. Clear guidance and policies need to be prepared and ready for customer presentation and evaluation.

 

4. Recognize that most businesses and organizations are not staffed with the skills to evaluate and adopt robots.

 

We are still very early in the robot adoption game when it comes to having 'knowledgeable buyers' to work with. This means buyers fear they lack the 'skill capacity' to handle the many selection and support issues that surround robot adoption. Being prepared to document the skills required, the training resources available, the mutual obligations and commitments expected, and the daily robot-management guidelines are just a few of the meaningful value-added components needed to ensure successful sales and ongoing satisfaction. Experience teaches that too many robots are purchased without a clear understanding of what is ultimately required.

 

5. Do not push robots into the public domain that simply do not yet 'fully' work.

 

Sadly, this issue speaks for itself.

 

6. Put in place a customer service/support model.

 

We are still in the age of 'robots in the wild'. The litany of issues is manifest: breakdowns, failures, abuse, a robot's confusion with the surrounding electronic and physical environment, and quality of connectivity, to name but a few. What is the plan for repair? What is the method and cost of repair? How long do repairs take? Where do repairs happen? Who handles the repair shipping preparations? Who pays for the insurance coverage of repair shipments? These are but a few of the mutual obligations robot adopters need to address and make clear.

 

 

7. Focus on partner success rather than driving to build up channel-partner inventory.

 

It is hard to find a robot developer organization that understands the value of channel partners and seeks to establish a true partner relationship. The first question from too many robot manufacturers is all too often: "How many of our robots will you inventory?" The second question is: "What are your forecasted sales?" Good gosh, most re-seller partners have not even been given the opportunity to test drive a robot they may have interest in before these questions are asked. Re-seller partners can be a valuable contributing resource for all of the issues noted above. If you are not willing to invest in a partnership, do not enter one.

 

 

8. Understand that, overall, the number of robots forecast to be roaming the streets and used in homes is untenable.

 

Reflect on the true nature of the market: it is a marketplace over-hyped with adoption-volume forecasts. Think about it. If the number of robots forecast for deployment were realized, there would be no room on the streets or in homes for people.

 

If you wish to successfully compete and survive, re-read items 1-7 above.

 

Mike Radice is Chairman of the Technology Advisory for Robotteca.com and ChartaCloudRobotics.com. Mike can be reached at info@chartacloud.com

Monday, October 12, 2020

Here We Go – The Leap from Facial Recognition to Personality Traits Derived from Facial Image Diagnostics

It is happening, and it is making robots smarter, more powerful and, yes, more human-like. The ability of software to recognize faces has fast become a staple across a wide variety of software, security and robotic systems and technologies. Now, with the advances and power of AI (artificial neural networks, actually), we can apply diagnostic extraction technology designed to suggest personality traits from captured facial images: essentially, profiling the personality makeup of observed humans. This is what I call 'derived facial diagnostics'.

Long considered a pseudo-science, the technique is being validated by AI: the very construct of one's face does present a roadmap into personality. I reference a just-published research report on Nature.com: "...results demonstrate that real-life photographs taken in uncontrolled conditions can be used to predict personality traits using complex computer vision algorithms." (1)

My purpose here is not to discuss the surrounding ethical, moral and social implications of this technology, only to observe that, when properly used, it can significantly advance the potential utility of robot-human interactions. No matter your position on these matters, history teaches that the derived commercial benefits will surely ensure that it happens.

So how might such a technology be used in and with robots?

This technology has already advanced to the point where a facial image capture can return key personality traits in under two seconds, quite sufficient for the robot to gauge a path for continuing the interaction or dialogue. So, whether it is making recommendations on products such as food choices, clothing, hotels or vacation destinations, a more informed robot can provide a more informed suggestion.
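As a sketch of how a robot might act on such a result, assume a hypothetical service call, `predict_big_five()`, that returns Big Five trait scores in the zero-to-one range within that two-second window. The function, the scores and the routing thresholds below are my own illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: routing a robot's dialogue based on derived
# Big Five traits. predict_big_five() stands in for an assumed
# facial-diagnostics service; it is not a real library call.

def predict_big_five(image):
    # Placeholder for the assumed service: returns trait scores in [0, 1].
    return {"openness": 0.8, "conscientiousness": 0.5,
            "extraversion": 0.3, "agreeableness": 0.6, "neuroticism": 0.4}

def choose_dialogue_path(traits):
    """Pick a conversational style from the trait profile (illustrative rules)."""
    if traits["extraversion"] > 0.6:
        return "energetic_smalltalk"       # lead with banter, then offers
    if traits["openness"] > 0.7:
        return "novel_recommendations"     # suggest new or unusual products
    if traits["neuroticism"] > 0.7:
        return "reassuring_guidance"       # slower pace, more confirmation
    return "concise_factual"               # default: direct answers

camera_frame = None  # would come from the robot's camera in practice
traits = predict_big_five(camera_frame)
print(choose_dialogue_path(traits))        # -> "novel_recommendations"
```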

Analyzed over a series of interactions, marketers can refine their offers by augmenting robot-captured queries with the key personality traits derived from those accumulated queries. Communications via messages, advertisements and product offers powered by personality-based diagnostics would give savvy marketers an advantage. No matter how small the competitive advantage, it is a knowledge-based advantage that can tip the competitive scale.

Extending robot interaction is another use. Humans get bored quickly, and this is a serious issue with robot engagement. If the human feels disconnected from the interaction by way of generalized robot responses, they simply walk away. However, if the human feels more deeply connected to a dialogue that can now be personality-driven, the likelihood of continued engagement increases. More time in the interaction means more time to sell, suggest or convey a promotional theme or message. More success.

My testing of this technology has served to substantiate its power and advancing validity. If you are a robot developer, feel free to reach out to me for a discussion about early preview access to this technology.

(1) Alexander Kachur, Evgeny Osin, Denis Davydov, Konstantin Shutilov & Alexey Novokshonov, "Assessing the Big Five personality traits using real-life static facial images," Scientific Reports (2020).

Mike Radice is Chairman of the Technology Advisory at ChartaCloudRobotics.com and Robotteca.com. You can contact Mike at info@chartacloud.com

Friday, July 10, 2020

3 Consternations Developing at the Front Lines of Robotics


Every business modeler understands that defining strengths, weaknesses, opportunities and threats is central to clear and comprehensive strategic planning. Unless the three long-game strategic concerns outlined below find their rightful place in that strategic planning model, physical robots are headed for a not-so-pretty inflection point; at a minimum, they will face major constraints on long-term deployment, viability and use.

Imagine the “robot jam”

Let's be practical. There is only so much space inside buildings, hospitals, nursing homes, transportation centers, on sidewalks, in retail establishments and yes, particularly in restaurants and homes. If even half the forecasts of 'future robots to be deployed' are realized, robots will be crowding out people and running into one another. Simply said, the current model of physical/mobile robot utilization is not scalable. Worse yet, such large-scale deployments will create social-space chaos, bringing in the regulators, licensors and taxation, to say nothing of unleashing the liability lawyers seeking compensation for robots obstructing and crashing into people and things, or standing still to avoid collisions.

Who Is Going to Service These Bots, Effectively?

Let's be even more practical. No one can expect that every deployed robot will function over time without failure or damaging incident. Such failures will assuredly arise, and they may be as simple as a needed battery replacement or as dramatic as retrieving a robot from the bottom of a swimming pool, collecting one at the bottom of a set of stairs, or rescuing one stuck, stranded and immobile on a sidewalk or in a doorway. How about when some malicious person douses a robot in public with foam or glue spray? Or when someone just picks it up and steals away with it? No matter; the point is that I have yet to learn of any robot manufacturer's nationwide model for on-site monitoring, pickup and repair service. The current model espoused by robot developers and manufacturers places the onus on the customer to monitor, retrieve, diagnose, package and ship for repair. Who is going to take care of all the physical and mechanical issues generated by these thousands of forecasted robot deployments and their inherent failure rates, and how?

“Amusing at best”.

The third issue is the human-interface expectations established by the physical style of mechanical robots. Most robot designs thus far seek to convey a human-like motif, a set of attributes seemingly designed to convey comfort and familiarity to the interfacing human. They usually have heads, blinking eyes and arms; some have legs. The problem is that in following this path, the engagement and response expectations of the robot are set beyond what is prototypically possible today. Most humans interfacing with a mechanical robot soon drift away somewhat amused, maybe, but typically underwhelmed. Truthfully, there are two factors at work. First, most deployed robots are 'one-trick pony' demonstrators. I've watched people (i.e. customers) walk right past a robot in a public environment, and when asked about doing so they state, "Oh yeah, I spoke to that robot the other day. It has nothing new to say." This is not the robot's fault so much as the content's, which is usually woefully weak if not silly. Secondly, there is hardly ever a sense that you are actually connecting individually with the robot and being engaged as a unique person in a useful or rewarding conversation. Left as is, robots will remain seen as not much more than a gimmick that does dances and takes selfie photos.

This is why smart, AI-powered robots that can engage individuals, detect emotional conditions and conduct a 'pathway' of logical, in-depth conversation are needed. In summary, my belief is that we need to move away from the fixed, structural/mechanical robot models so popular today and move to, or at least create, a new class of what I foresee as 'soft robots'. Having seen the emerging screen-based 'animated, AI-powered kiosk creatures' that can convey engagement and be much more 'alive' without the scalability constraints of physical, mechanical platforms, I am heartened that it is possible. These soft robots, these 'artificial creatures', I predict will be the new interface.
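To picture what a conversation 'pathway' might look like in practice, here is a minimal sketch in which each step of the dialogue is gated on the emotion the robot detects. The emotion labels, topics and transitions are illustrative assumptions only, not any vendor's engine.

```python
# Minimal sketch of an emotion-gated conversation pathway.
# Emotion labels, topics and transitions are illustrative assumptions.

PATHWAYS = {
    # (current_topic, detected_emotion) -> next_topic
    ("greeting", "happy"):        "product_tour",
    ("greeting", "neutral"):      "needs_questions",
    ("greeting", "frustrated"):   "handoff_to_staff",
    ("product_tour", "bored"):    "offer_demo",
    ("product_tour", "happy"):    "closing_offer",
}

def next_topic(topic, emotion):
    # Fall back to open-ended questions rather than a canned monologue,
    # so the human is engaged as an individual instead of walking away.
    return PATHWAYS.get((topic, emotion), "needs_questions")

print(next_topic("greeting", "happy"))       # -> product_tour
print(next_topic("product_tour", "bored"))   # -> offer_demo
```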

Smart robot developers would be wise to move to these ‘artificial creature’ style interfaces.

These three industry-impacting considerations need to be crafted and integrated into a new-era solution that creates a future robot world that is much more scalable, manageable, resilient and, yes, more satisfying to humans.

Mike Radice is Chairman of the Technology Advisory for ChartaCloud Robotics, https://CHARTACLOUDROBOTICS.com and https://www.ROBOTTECA.com. Contact: info@chartacloud.com


Monday, May 4, 2020

Are You Watching? 3 Game Changers in Robotics



#1: Putting Humans in the Robot Loop – Game Changer

In these unique times, all things 'robot' have begun to move very fast. What business has been resisting about robots for the last decade is fast becoming a priority. Robots previously considered job killers all of a sudden look like a brilliant solution to tasks that are dull, dangerous, dirty and toxic. A crisis will have that effect.
The point is that we now need more robots working as fast, efficiently and effectively as possible. However, the truth is that the world remains a complicated place for a robot. There are problematic times when even a robot needs a helpful human hand. We now realize that injecting human intelligence by positioning a 'human in the robot loop' makes a big difference. A 'live' human-robot interface link is proving to be landscape-changing, in a positive way, for successful robot deployments.
Being able to inject human intelligence into and through a robot via a human-robot interface link, especially at the right or a critical moment, has been found to be highly beneficial. It may be as simple as helping a robot get back on its 'map', getting it around an unforeseen impediment or obstacle, or taking over a conversation when an AI-powered retail robot has run out of pre-programmed knowledge and expertise.
Millions of robots are already deployed, and the number will continue to grow. There are those who predict that robots will at some point outnumber cell phones as the ubiquity of robots increases in the newly emerging economic and social fabric. At present, however, we constantly hear that the robots are not ready for prime time. And, to a great extent, that is true. Reality still does not meet expectations. We expect a lot of our robots, and for robot developers and users alike, the stakes are high. Artificial intelligence, machine learning and deep learning remain essential elements in future robot-based solutions. Adopting a 'human-robot interface link', a 'human in the loop' with the robot via cloud-based software, offers an immediate and powerfully functional solution in support of the demand for rapidly expanding the use and deployment of robots.
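A minimal sketch of the 'human in the robot loop' pattern: when the robot's own AI falls below a confidence floor, the session escalates to a remote human operator over a cloud link. The names and the threshold below are illustrative assumptions, not a specific product's API.

```python
# Sketch of a human-in-the-loop escalation: when the robot's own AI is
# not confident, the session is handed to a remote human operator.
# escalate_to_operator() stands in for an assumed cloud teleop service.

CONFIDENCE_FLOOR = 0.55   # below this, the robot asks for human help (assumed)

def escalate_to_operator(session_id, context):
    # Placeholder: a real system would open a live audio/video bridge
    # or a control link to a staffed operations console.
    print(f"[cloud] operator joins session {session_id}: {context}")
    return "operator_response"

def handle_turn(session_id, utterance, ai_answer, ai_confidence):
    if ai_confidence >= CONFIDENCE_FLOOR:
        return ai_answer                       # robot handles it autonomously
    # Critical moment: inject human intelligence through the robot.
    return escalate_to_operator(session_id, {"heard": utterance})

print(handle_turn("s-42", "Where is cardiology?", "Third floor, east wing.", 0.91))
print(handle_turn("s-42", "My order arrived broken and...", "(unsure)", 0.30))
```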

#2: Cloud-based Robotics Software:  Setting the Stage for Robot Ubiquity – Game Changer

Robot developers are fast coming to understand that fully comprehensive, fully autonomous, multi-purpose, multi-functional robots are not within their current grasp if they rely on the on-board computing power of their hardware platforms alone. The increasing sophistication of current robots is primarily the result of access to powerful cloud-based computing and software. As a result, a whole new class of software and services providers has evolved, focused on creating 'cloud-based robotics' software platforms designed to meet the increasing need for rapid application development, monitoring, controlling and collecting data to analyze the use of robots in fleets and at scale.
Cloud-based robotics software will mature in at least these three ways.

A. Software and services that allow robot developers to focus their engineering talent and resources on their unique platform attributes, while looking to off-the-shelf software to supply the non-unique ones. Using this class of software will lower development costs, speed time to market, and increase reliability and thus the ROI of the robotic platforms.

B. Robot Access Interface Layer (RAIL) software that allows 'the common person' to use and control robots and create their own personal applications. Controlling the attributes of robot behavior has until recently remained beyond the reach of the population at large. But that is changing with software that can be placed on a robot and interface with its primary functionalities: speaking, moving, connecting to other applications, and connecting to the IoT devices that control homes and monitor health. (A minimal sketch of what such a layer might look like follows this list.)
C. Software that ushers in an entirely new concept of robots: robots that do not need to be embodied as hardware devices on wheels in order to be of service and value. There are now soft robots. Imagine an animated, avatar-style robot creature that is itself AI-smart and very much as capable of interaction as a hardware-based robot. In this instance we have animated creatures on a screen that are sensitive to touch, can recognize faces, recognize emotions, and dialogue in an engaging fashion. These robot creatures appear and act as if they are alive, and you sense that they seem to recognize that you actually exist. More importantly, delivered via information-style kiosks ('Mirror, mirror, on the wall...') and vibrantly animated on reactive, flexible robot arms, these robots are scalable in a future environment that otherwise, if all the forecasts come to pass, would be awash in mechanical robots running all over the place and into each other.
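Here is the promised sketch of what a RAIL-style layer might expose. All class and method names are my own illustrative assumptions about such a layer, not the actual RAIL specification.

```python
# Illustrative sketch of a Robot Access Interface Layer (RAIL)-style API:
# a thin wrapper that lets a non-specialist script a robot's primary
# functions. All names here are assumptions for illustration.

class RobotInterface:
    """Uniform access to speech, motion and IoT hooks on one robot."""

    def __init__(self, robot_address):
        self.robot_address = robot_address    # e.g. IP of the robot on site

    def say(self, text, language="en"):
        self._send({"cmd": "speak", "text": text, "lang": language})

    def move_to(self, waypoint):
        self._send({"cmd": "navigate", "waypoint": waypoint})

    def trigger_iot(self, device, action):
        # e.g. dim the lights or read a health monitor in a smart home
        self._send({"cmd": "iot", "device": device, "action": action})

    def _send(self, message):
        # Placeholder transport; a real layer would use the robot's SDK
        # or a cloud message broker.
        print(f"-> {self.robot_address}: {message}")

# A 'common person' application: an evening routine, three lines long.
robot = RobotInterface("192.168.1.40")
robot.say("Time for your evening walk reminder.")
robot.trigger_iot("hallway_light", "on")
```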

#3: Coming Sub-20ms Network Latency – A Game Changer

On April 23, 2020, the U.S. FCC voted to open the 6 GHz band for unlicensed use, clearing the way for Wi-Fi 6E. Increasing the Wi-Fi spectrum means up to 4 times more capacity, a 40% increase in data throughput, and increased multi-streaming capacity. Add 5G telecommunications and the network-slicing capabilities of software-defined networks, and we are pressing toward network latencies that challenge human neural networks in speed, and thus on to 'seamless' human-robot communications.
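Why sub-20ms matters becomes clear with a rough interaction budget. The figures below are illustrative assumptions, including the commonly cited ballpark that a response must land within roughly 200ms to feel immediate.

```python
# Rough, illustrative latency budget for 'seamless' human-robot dialogue.
# The 200 ms perception threshold and component costs are assumptions.

feels_instant_ms = 200        # rough threshold for a response to feel immediate
network_round_trip_ms = 20    # sub-20 ms target from WiFi 6 / 5G slicing
speech_to_text_ms = 80        # assumed cloud speech-recognition cost
dialogue_ai_ms = 60           # assumed language/AI inference cost

budget_left = feels_instant_ms - (network_round_trip_ms
                                  + speech_to_text_ms + dialogue_ai_ms)
print(f"Margin for everything else: {budget_left} ms")   # -> 40 ms
```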
For those of us who have been working in robotics, these times are proving to be the most energizing yet, given the role robots are playing today in fighting the COVID-19 virus and the anticipated role they will play in our post-crisis world.

Michael D. Radice is Chairman of the Technology Advisory Board for ChartaCloud ROBOTTECA, www.robotteca.com and www.chartacloudrobotics.com. Mike can be reached by e-mail at mike@chartacloud.com.




Thursday, March 5, 2020

Artificial Intelligence Gets A Face



SPooN.ai: Artificial Creatures Deliver Immersive User Engagement in a Voice/AI-Powered World

Author: Michael D. Radice, Managing Director, ChartaCloud Robotics LLC.

Do you feel connected to the technology you use? Wouldn't it be nice to know that the technology you are using knows you exist as a real and unique person, not just another technology system or, perhaps, a robot? This is the first rule of engagement that inspired and drove the development of a new digital interface technology in the age of Artificial Intelligence (AI) products and voice-powered services.
To more fully frame this discussion, we need to take a moment to reflect upon our experience with robots. The emergence of robots, especially 'humanoid style' robots, has taught us a great many lessons. Interaction and engagement expectations (i.e. the human-robot interface, HRI) for humanoid robots were and remain high, and today's robots struggle to meet them. Robots are, however, amazingly powerful in at least two respects. One, they excel in the power of attraction: they can attract and gather an audience. Two, they can be seductive in their anthropomorphic attributes: people want to believe they are alive. The point is that as hardware technologies, which is what robots are, the current state of the robot interface leaves us wanting more. The best I have experienced thus far is the seductive power of the NAO humanoid-style robot; its design and its animated engagement, using what is called autonomous life, does proffer powerful engagement. These robot engagement experiences provided the stimulus for a new style of interface to digital technology, one that meets real-life personal engagement expectations. AI products are robots of a different sort.
We have moved fast past the point where a breakthrough in creating a new interface to digital technology was needed. AI, voice-powered interaction, machine learning, facial recognition and emotional discernment are the technologies driving the demand for a new, unified interface to digital technology. For product developers, the challenge is even greater: how do you create an application interface that embraces so many disparate interaction elements? The forces pulling and pushing the need for a new-model interface to AI-powered digital technologies have, in my opinion, become irresistible. With 150 million users employing voice to interface with a growing share of their daily AI-driven technology, the stakes for creating a breakthrough keep getting higher. The creation of a new unified interface is becoming a winner-take-all proposition. The 'mouse' won't get us there. The stylus was never the end-all, be-all. Touch-screen interfaces work well, but many times they too can be problematic. Chatbots are, well, just that: chatbots. Infobots are very much solo info-point devices giving square answers to round questions. Technology is now capable of seeing you and knowing who you are, discerning a lot about your emotional state, and knowing your experiential preferences. For example, what will be the defining attributes for delivering AI-driven services in collective spaces like transportation centers, hospitals, office buildings and shopping malls? We know for sure that they will be heavily formulated as knowledge-based, experience-driven AI services that learn.
So, here come the ‘artificial creatures’ and the Oxytocin Element

For further insight, we can look around and note that many of mankind's most powerful inventions and creations were inspired by and derived from the biological world. Outside of person-to-person bonding, is there an example of stronger bonding than that between people and their pets? What is the bonding-interface attribute that generates such an instant, warm and comfortable reaction in our brains? When we experience such a warm encounter with a pet or, yes, a person, we generate a brain chemical called oxytocin. While oxytocin helps cement bonds between people, it also, simply stated, makes us feel good. Hence another clue to defining the future AI interface: its use must result in a positive sense of personal interaction. An understanding of all this brain functionality, what I call 'brain tech', plus the power of biological design and what I now refer to as zoomorphic attributes, are central and powerful elements being used by the creators of the new universal AI interface. I have seen it, used it, and it is called SPooN.
This is where the 'artificial creatures', a creation of SPooN, enter the scene. They are called SPooNys. Think of SPooNys as AI soft robots or smart avatars that actually possess the capacity to be your interface to all of your technology. A SPooNy is smart, driven by AI and empowered with facial and emotional recognition to help guide the interaction. A SPooNy takes on the persona of an artificial creature in the form of a soft robot creature. One of them looks like this.



It has eyes that follow you. It has facial responses that engage you with its zoomorphic character. It can sense the user.
A SPooNy can be on any digital device: a personal device or an information kiosk.
Integrated into the creature’s face are 11 embedded dynamic attributes that create the personal engagement levels that make SPooNy so powerful.
And yes, SPooNy speaks multiple languages: Chinese, English, French, Japanese and Spanish are already available, with more to come.

Here is an implementation of a SPooNy, 'living, moving, following and reaching out', deployed on a robotic armature: a powerful engagement mode for hospitality, retail and targeted use points such as healthcare. Like robots, it attracts a crowd. Tests show that it is more powerful at engaging a person than a robot.

Here is a SpooNy deployed in a six-foot-high information kiosk. 

This info 'totem' makes sense in what I refer to as the 'collectives' environment, as the following discussion describes. Think of places like office buildings, transportation centers, hospitals and hotels as large, complex collectives. These collectives are made up of active and internally changing elements, such as individuals, trains, buses and taxis, and of passive elements, such as office spaces, lobbies, mechanical centers, stores and restaurants. They combine to create the entire collective entity. SPooNy is a universal digital interface that can embrace a person's AI- and voice-driven interaction(1) with all, or each, of the complex elements that comprise the 'collective', creating an AI-driven kiosk with depth and a face and/or a voice that can have a relationship-based, immersive engagement with a person.
With AI-powered SPooNy, collectives can take on a reflective engagement persona, sensing the needs and desires of the person with whom SPooNy is engaging. SPooNy can be the unique face of the collective. SPooNy embraces and provides a collective's entire persona so that people can interact with the collective either as an entity or on a 'person to person' basis.
Having experienced SPooNy firsthand, I know that AI now has a face: SPooNy.
SPooNy is a product of SPooN.ai, Paris, France. More information about SPooNy can be found at www.robotteca.com.
(1) Consider the power of this voice/conversational interface in providing ADA-sanctioned service assistance.
Michael Radice is Chairman of the Technology Advisory Board for ChartaCloud’s ROBOTTECA.COM and can be reached at info@chartacloud.com | ph: 603-379-9148