Monday, October 12, 2020

Here We Go – The Leap from Facial Recognition to Derived Personality Traits from Facial Image Diagnostics

It is happening, and it is making robots smarter, more powerful and, yes, more human-like. The ability of software to recognize faces has fast become a staple across a wide variety of software, security and robotic systems and technologies. Now, with the advances and power of AI (artificial neural networks, actually), we can apply diagnostic extraction technology designed to suggest personality traits from captured facial images. Essentially, profiling the personality make-up of observed humans. This is what I call 'derived facial diagnostics'.

Long dismissed as a pseudo-science, the technique is now being validated by AI: the very construct of one's face presents a roadmap view into personality. I reference a recently published research report on Nature.com: "…results demonstrate that real-life photographs taken in uncontrolled conditions can be used to predict personality traits using complex computer vision algorithms." (1)

My purpose here is not to discuss the surrounding ethical, moral and social implications of this technology, only to observe that, when properly used, it can provide a significant advance in the potential utility of robot-human interactions. No matter your position on these matters, history teaches that the derived commercial benefits will surely ensure that it happens.

So how might such a technology be used in and with robots?

This technology has already advanced to the point where a captured facial image can return key personality traits in under two seconds. Quite sufficient for the robot to gauge a path for continuing the interaction or dialogue. So, whether it is making recommendations on potential products such as food choices, clothing, hotels or vacation destinations, a more informed robot can provide a more informed suggestion.
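For the technically curious, here is a minimal sketch of what such a pipeline might look like, assuming a convolutional network fine-tuned to regress Big Five trait scores from a face image (the general approach described in the referenced study). The model, weights and score scale below are hypothetical stand-ins, not the published system:

import torch
import torchvision.transforms as T
from torchvision.models import resnet18
from PIL import Image

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# A real system would load fine-tuned weights here; untrained weights
# keep the sketch runnable but make the scores meaningless.
model = resnet18(num_classes=len(TRAITS))
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),                      # network input size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics
                std=[0.229, 0.224, 0.225]),
])

def predict_traits(image_path):
    """Return Big Five scores (arbitrary scale) for one face image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(x).squeeze(0)
    return dict(zip(TRAITS, scores.tolist()))

A single forward pass like this runs in well under a second on commodity hardware, which is how the sub-two-second figure above becomes plausible.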

Analyzed over a series of interactions, marketers can refine their offers by augmenting robot-captured queries with the key personality traits derived from those accumulated queries. Communications via messages, advertisements and product offers powered by personality-based diagnostics would supply savvy marketers with an advantage. No matter how small the competitive advantage, it is a knowledge-based advantage that can tip the competitive scale.

Extending robot interaction is another benefit. Humans get bored quickly. This is a serious issue with robot engagement. If the human feels disconnected from the interaction by way of generalized robot responses, they simply walk away. However, if the human feels more deeply connected to the dialogue, which can now be made personality-driven, the likelihood of longer engagement increases. More time in the interaction means more time to sell, suggest or convey a promotional theme or message. More success.

My testing of this technology has served to substantiate its power and advancing validity. If you are a robot developer, feel free to reach out to me for a discussion about early preview access to this technology.

(1) Kachur, A., Osin, E., Davydov, D., Shutilov, K. & Novokshonov, A., "Assessing the Big Five personality traits using real-life static facial images," Scientific Reports (2020).

Mike Radice is Chairman of the Technology Advisory at ChartaCloudRobotics.com and Robotteca.com. You can contact Mike at info@chartacloud.com

Friday, July 10, 2020

3 Consternations Developing at the Front Lines of Robotics


Every business modeler understands that defining strengths, weaknesses, opportunities and threats is central to clear and comprehensive strategic planning. Unless the three long-game strategic concerns outlined below find their rightful place in that strategic planning model, physical robots are headed for a not-so-pretty inflection point; at a minimum, they will face major constraints on long-term deployment, viability and use.

Imagine the “robot jam”

Let's be practical. There is only so much space inside buildings, hospitals, nursing homes, transportation centers, on sidewalks, in retail establishments and yes, particularly in restaurants and homes. If even half the forecasts of 'future robots to be deployed' are realized, robots will be crowding out people and running into one another. Simply said, the current model of physical/mobile robot utilization is not scalable. Worse yet, such large-scale deployments will create social-space chaos, bringing in the regulators, licensors and taxation, to say nothing of unleashing the liability lawyers seeking compensation for robots obstructing and crashing into people and things, or standing still to avoid collisions.

Who Is Going to Service These Bots, Effectively?

Let's be even more practical. No one can expect that every deployed robot will function over time without failure or damaging incident. Such failures, as they will assuredly arise, may be as simple as needing a battery replacement, or more dramatic: retrieving a robot from the bottom of a swimming pool, collecting one at the bottom of a set of stairs, or rescuing one stranded and immobile on a sidewalk or in a doorway. How about when some malicious person douses a robot in public with foam or glue spray? Or when someone just picks it up and steals away with it? No matter; the point is that I have yet to learn of any robot manufacturer's nationwide model for on-site monitoring, pickup and repair service. The current model espoused by robot developers and manufacturers places the onus on the customer to monitor, retrieve, diagnose, package and ship for repair. Who is going to take care of all the physical/mechanical issues generated by these thousands of forecast robot deployments and their inherent failure rates, and how?

"Amusing at Best"

The third issue is the human interface expectations established by the physical style of mechanical robots. Most robot design implementations thus far seek to convey a human-like motif, a set of attributes seemingly designed to convey comfort and familiarity to the interfacing human. They usually have heads, blinking eyes and arms; some have legs. The problem is that in following this path, the engagement and response expectations for the robot are set beyond what is prototypically possible today. Most humans interfacing with a mechanical robot soon drift away somewhat amused, maybe, but typically underwhelmed. Truthfully, there are two factors at work. First, most robots when deployed are 'one-trick pony' demonstrators. I've watched people (i.e. customers) walk right past a robot in a public environment, and when asked about doing so they state, "Oh yeah, I spoke to that robot the other day. It has nothing new to say." This is not the robot's fault so much as the content's, which is usually woefully weak if not silly. Secondly, there is hardly ever a sense that you are actually connecting individually with the robot and being engaged as a unique person in a useful or rewarding conversation. Left as is, robots will remain to be seen as not much more than a gimmick that does dances and takes selfies.

This is why smart, AI-powered robots that can engage individuals, detect emotional conditions and conduct a 'pathway' of logical, in-depth conversation are needed. In summary, my belief is that we need to move away from the fixed, structural/mechanical robot models so popular today and move to, or at least create, a new class of what I foresee as 'soft robots'. Having seen the emerging screen-based 'animated, AI-powered kiosk creatures' that can convey engagement and be much more 'alive' without the scalability constraints of physical, mechanical platforms, I am heartened that it is possible. These soft robots, these 'artificial creatures', I predict will be the new interface.

Smart robot developers would be wise to move to these ‘artificial creature’ style interfaces.

These three industry-impacting considerations need to be crafted and integrated into a new-era solution that creates a future robot world that is much more scalable, manageable, resilient and, yes, more satisfying to humans.

Mike Radice is Chairman of the Technology Advisory for ChartaCloud Robotics, https://CHARTACLOUDROBOTICS.com and https://www.ROBOTTECA.com. Contact: info@chartacloud.com


Monday, May 4, 2020

Are You Watching? 3 Game Changers in Robotics



#1: Putting Humans in the Robot Loop – Game Changer

In these unique times, all things 'robot' have begun to move very fast. What business has been resisting about robots for the last decade is fast becoming a priority. Robots, previously considered job killers, all of a sudden look like a brilliant solution to tasks that are dull, dangerous, dirty, and toxic. A crisis will have that effect.
The point is that we now need more robots working as fast, efficiently and effectively as possible. However, the truth is that the world remains a complicated place for a robot. There are problematic times when even a robot needs a helpful human hand. We now realize that injecting human intelligence by positioning a 'human in the robot loop' makes a big difference. The requirement for a 'live' human-robot interface link is proving to be landscape-changing, in a positive way, for successful robot deployments.
Being able to inject human intelligence into and through a robot via a human-robot interface link, especially at the right moment or a critical moment, has been found to be critical and highly beneficial. It may be as simple as helping a robot get back onto its 'map', getting around an unforeseen impediment or obstacle, or taking over a conversation when an AI-powered retail robot has run out of pre-programmed knowledge and expertise.
Millions of robots are already deployed, and the number will continue to grow. There are those who predict that robots will at some point outnumber cell phones as the ubiquity of robots increases in the newly emerging economic and social fabric. At present, however, we constantly hear that the robots are not ready for prime time. And, to a great extent, that is true. Reality still does not meet expectations. We expect a lot of our robots. For robot developers and users alike, the stakes are high. Artificial intelligence, machine learning, and deep learning remain essential elements in future robot-based solutions. Adopting the benefits of a 'human-robot interface link', a 'human in the loop' with the robot via cloud-based software, offers an immediate and powerfully functional solution in support of the demand for rapidly expanding the use and deployment of robots.
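As a thought experiment, the handoff logic can be as simple as a confidence gate: the robot answers on its own while it is confident, and bridges in a live operator when it is not. Every class and method name below is a hypothetical placeholder, not any vendor's actual API:

import random

CONFIDENCE_FLOOR = 0.6    # below this, the robot asks for human help

class Robot:
    """Stand-in for a cloud-connected robot session (hypothetical API)."""

    def dialogue(self, utterance):
        # Placeholder for the on-board/cloud AI; returns (reply, confidence).
        return "I can help with that.", random.random()

    def escalate_to_operator(self, utterance):
        # Placeholder: bridge a live operator into the session so a human
        # can answer through the robot, then hand control back.
        return f"[operator takes over for: {utterance!r}]"

def handle_utterance(robot, utterance):
    reply, confidence = robot.dialogue(utterance)
    if confidence >= CONFIDENCE_FLOOR:
        return reply                      # robot answers autonomously
    return robot.escalate_to_operator(utterance)

print(handle_utterance(Robot(), "Where is radiology?"))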

#2: Cloud-based Robotics Software: Setting the Stage for Robot Ubiquity – Game Changer

Robot developers are fast coming to understand that fully comprehensive, fully autonomous, multi-purpose, multi-functional robots built on the on-board computing power of their hardware platforms alone are not within their current grasp. The increasing sophistication of current robots is primarily the result of access to powerful cloud-based computing and software. As a result, a whole new class of software and services providers has evolved, focused on the creation of 'cloud-based robotics' software platforms designed to meet the increasing needs for rapid application development and for monitoring, controlling and collecting data that analyzes the use of robots in fleets and at scale.
Cloud-based robotics software will mature in at least these three ways.

A.   Software and services that will allow robot developers to focus their engineering talent and resources on their unique platform attributes while looking to off-the-shelf software to augment their platforms with the non-unique attributes. Using this class of software will lower developmental costs, advance speed to market and increase the reliability and thus the ROI of the robotic platforms.

B.   Introduction of Robot Access Interface Layer (RAIL) software that allows 'the common person' to use and control robots and create their own personal applications. Controlling the attributes of robot behavior has until recently remained beyond the reach of the population at large. But that is changing with software that can be placed on a robot and interface with its primary functionalities, such as speaking, moving, interfacing with other applications, and interfacing with the IoT devices that control homes and monitor health (a minimal sketch of such a layer follows this list).
C.   Software that ushers in an entirely new concept of robots: robots that do not need to be embodied as hardware devices on wheels in order to be of service and value. There are now soft robots. Imagine an animated, avatar-style robot creature that is itself AI-smart and every bit as capable of interaction as a hardware-based robot. In this instance we have animated creatures on a screen that are sensitive to touch, can recognize faces, recognize emotions, and dialogue in an engaging fashion. These robot creatures appear and act as if they are themselves alive, and you sense that they recognize that you actually exist. More importantly, delivered via information-style kiosks ('Mirror, mirror on the wall…') or vibrantly animated on reactive, flexible robot arms, these robots are scalable in a future world that would otherwise, if all comes to pass, be awash in mechanical robots running all over the place and into each other.
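To make the RAIL idea from point B concrete, here is a minimal sketch of what such a layer might look like: a thin facade that hides the platform-specific backend behind a few plain verbs. All names are illustrative assumptions; no vendor defines this exact interface:

from dataclasses import dataclass

@dataclass
class Move:
    x_m: float              # forward distance in metres
    theta_rad: float = 0.0  # turn angle in radians

class RobotAccessLayer:
    """Hypothetical RAIL facade over a platform-specific driver."""

    def __init__(self, driver):
        self.driver = driver

    def say(self, text):
        self.driver.tts(text)

    def move(self, cmd):
        self.driver.locomote(cmd.x_m, cmd.theta_rad)

class ConsoleDriver:
    """Stub backend so the sketch runs without hardware."""

    def tts(self, text):
        print(f"robot says: {text}")

    def locomote(self, x, theta):
        print(f"robot moves {x} m, turns {theta} rad")

rail = RobotAccessLayer(ConsoleDriver())
rail.say("Hello!")
rail.move(Move(x_m=0.5))

The point of the facade is that 'the common person' (or their no-code tooling) only ever sees say() and move(), no matter which robot sits underneath.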

#3: Coming Sub-20ms Network Latency – A Game Changer

On April 23rd, 2020 the U.S. FCC approved opening new unlicensed spectrum for WiFi 6. Increasing the WiFi spectrum means up to 4 times more capacity, a 40% increase in data throughput, and increased multi-streaming capacity. Add 5G telecommunications and the network-slicing capabilities of software-defined networks, and we are pressing toward network latencies that rival human neural pathways in speed, and thus toward 'seamless' human-robot communications.
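A rough latency budget shows why the network leg matters. Every number below is an illustrative assumption, not a measurement:

# Hypothetical end-to-end budget for a cloud-assisted robot reply.
network_rtt_ms = 20         # target sub-20ms round trip (WiFi 6 / 5G slice)
speech_to_text_ms = 80      # assumed cloud speech recognition time
inference_ms = 50           # assumed dialogue-model inference time
text_to_speech_ms = 60      # assumed speech synthesis time

total_ms = (network_rtt_ms + speech_to_text_ms
            + inference_ms + text_to_speech_ms)
threshold_ms = 300          # rough point where a reply starts to feel delayed

verdict = "feels seamless" if total_ms < threshold_ms else "feels laggy"
print(f"end-to-end: {total_ms} ms ({verdict})")

Once the round trip drops under 20ms, the network stops being the bottleneck; the remaining budget belongs to recognition and synthesis.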
For those of us who have been working in robotics, these times are proving to be the most energizing yet, combining the role robots are playing today in fighting the COVID-19 virus with the anticipated role that robots will play in our post-crisis world.

Michael D. Radice is Chairman of the Technology Advisory Board for ChartaCloud ROBOTTECA,  www.robotteca.com  and www.chartacloudrobotics.com  Mike can be reached by e-mail at mike@chartacloud.com .




Thursday, March 5, 2020

Artificial Intelligence Gets A Face



SPooN.ai: Artificial Creatures Deliver Immersive User Engagement in A Voice/AI Powered World

Author: Michael D. Radice, Managing Director, ChartaCloud Robotics LLC.

Do you feel connected to the technology you use? Wouldn't it be nice to know that the technology you are using knows you exist as a real and unique person, and not just as another technology system or perhaps a robot? This is the first rule of engagement that inspired and drove the development of a new digital interface technology in the age of Artificial Intelligence (AI) products and voice-powered services.
To more fully frame this discussion, we need to take a moment to reflect upon our experience with robots. The emergence of robots, especially 'humanoid style' robots, has taught us a great many lessons. Interaction and engagement expectations (i.e. human-robot interface, HRI) with humanoid robots were and remain high. Today's robots struggle to meet that expectation. Robots are, however, amazingly powerful in at least two aspects. One, they excel in the power of attraction; they can attract and gather an audience. Two, they can be seductive in their anthropomorphic attributes; people want to believe they are alive. The point is that, as hardware technologies, which is what robots are, the current state of the robot interface leaves us wanting more. The best I have experienced thus far is the seductive power of the NAO humanoid-style robot. Its design and its animated engagement, using what is called autonomous life, do proffer powerful engagement. These robot engagement experiences provided the stimulus: a new style of interface to digital technology was needed, one that meets real-life personal engagement expectations. AI products are robots of a different sort.
We have moved fast past the point where a breakthrough in creating a new interface to digital technology was needed. AI, voice-powered interaction, machine learning, facial recognition and emotional discernment are the technologies driving the demand and the need for a new unified interface to digital technology. For product developers, the challenge is even greater: how do you create an application interface that embraces so many disparate interaction elements? The forces pulling and pushing the need for a new model interface for AI-powered digital technologies have, in my opinion, become irresistible. With 150 million users using voice to interface with a growing share of their daily AI-driven technology, the stakes for creating a breakthrough were getting higher. The creation of a new unified interface is becoming a winner-take-all proposition. The 'mouse' won't get us there. The stylus was never the end-all be-all. Touch-screen interfaces work well, but many times they too can be problematic. Chatbots are, well, just that: chatbots. Infobots are very much solo info-point devices giving square answers to round questions. Technology is now capable of seeing you and knowing who you are, discerning a lot about your emotional state, and knowing your experiential preferences. For example, what will be the defining attributes for delivering AI-driven services in collective spaces like transportation centers, hospitals, office buildings, and shopping malls? We know for sure that they will be heavily formulated as knowledge-based, experience-driven AI services that learn.
So, here come the ‘artificial creatures’ and the Oxytocin Element

For further insight we can look around and take note that many of mankind's most powerful inventions and creations were inspired and derived from the biological world. Outside of person-to-person bonding, is there an example of stronger bonding than that between people and their pets? What is the bonding interface attribute that generates such an instant, warm and comfortable reaction in our brains? When we experience such a warm encounter with a pet or, yes, a person, we generate a brain chemical called oxytocin. While oxytocin helps cement bonds between people, it also, simply stated, makes us feel good. Hence another clue to defining the future AI interface: its use must result in a positive sense of personal interaction. An understanding of all this brain functionality (what I call 'brain tech'), the power of biological design, and what I now refer to as zoomorphic attributes have become central and powerful elements used by the creators of the new universal AI interface. I have seen it, used it, and it is called SPooN.
This is where the 'artificial creatures', which are a creation of SPooN, enter the scene. They are called SPooNys. Think of SPooNys as AI soft robots or smart avatars that actually possess the capacity to be your interface to all of your technology. A SPooNy is smart, being driven by AI and empowered with facial and emotional recognition to help guide the interaction. A SPooNy takes on the persona of an artificial creature in the form of a soft robot creature. One of them looks like this.



It has eyes that follow you. It has facial responses that engage you with its zoomorphic character. It can sense the user.
A SPooNy can be on any digital device: a personal device or an information kiosk.
Integrated into the creature’s face are 11 embedded dynamic attributes that create the personal engagement levels that make SPooNy so powerful.
And yes, SPooNy speaks multiple languages; currently Chinese, English, French, Japanese, and Spanish are available, with more to come.
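The 'eyes that follow you' behavior, at least, is within reach of ordinary computer vision. Here is a sketch using OpenCV and a webcam; the avatar is reduced to a printed gaze offset, and SPooN's actual rendering engine is, of course, not shown:

import cv2

# Classic Haar-cascade face detector that ships with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)        # default webcam
for _ in range(300):             # a few seconds of frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Map the face centre to a gaze offset in [-1, 1]; an avatar
        # renderer would turn this into pupil position.
        gaze = ((x + w / 2) / frame.shape[1]) * 2 - 1
        print(f"gaze offset: {gaze:+.2f}")
cap.release()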

Here is an implementation of a SPooNy 'living, moving, following and reaching out', deployed on a robotic armature. A powerful engagement mode for hospitality, retail and targeted use points such as health care. Like robots, it attracts a crowd. Tests show that it is more powerful at engaging a person than a robot.

Here is a SPooNy deployed in a six-foot-high information kiosk.

This info 'totem' makes sense in what I refer to as the 'collectives' environment, as the following discussion describes. Think of places like office buildings, transportation centers, hospitals and hotels as large, complex collectives. These collectives are made up of a collection of active and internally changing elements, such as individuals, trains, buses and taxis, and of passive elements, such as office spaces, lobbies, mechanical centers, stores and restaurants. They combine to create the entire collective entity. SPooNy is a universal digital interface that can embrace a person's AI- and voice-driven interaction(1) with all or each of the complex elements that comprise the 'collective', creating an AI-driven kiosk with depth and a face and/or a voice that can have a relationship-based, immersive engagement with a person.
With an AI-powered SPooNy, collectives can take on a reflective engagement persona, sensing the needs and desires of the person with whom SPooNy is engaging. SPooNy can be the unique face of the collective. SPooNy embraces and provides a collective's entire persona so that people can interact with the entire collective either as an entity or on a 'person to person' basis.
Having experienced SPooNy firsthand, I know that AI now has a face: SPooNy.
SPooNy is a product of SPooN.ai, Paris, France. More information about SPooNy can be found at www.robotteca.com
(1)   Consider the power of this voice/conversational interface in providing ADA-sanctioned service assistance.
Michael Radice is Chairman of the Technology Advisory Board for ChartaCloud’s ROBOTTECA.COM and can be reached at info@chartacloud.com | ph: 603-379-9148

Monday, December 2, 2019

Making the Case for ROBOTS in K-12 Education

This article was published by K12Digest

Students are eager for high-tech classrooms. Is your classroom ready? Are you?

The transformation of education has never been more dramatic. Every teacher will confess that to succeed in the classroom today means creating an engaging environment, many times on subjects that hold little initial student interest, especially when those subjects must compete with the captivating power of video games, YouTube videos, and social chat groups. Attention spans have never been shorter. The ability of today's children to multi-process different tasks seems bewildering to many adults.
The current backstory has been the development of curricula for STEM learning. It seems most schools are still struggling to get fully deployed with STEM or what may now be called STEAM programs. The challenge has been fourfold.
First, there was the question of what a STEM program should consist of.
Second, how do you define and prepare a multi-year curriculum that offers students foundational continuity?
Third, how do you prepare teachers and establish their qualifications?
And fourth, there are the always-overruling constraints of budgets and qualified staff expertise.
Then comes yet another hidden element that always seems to rear its head. Teachers simply don't have the time to advance themselves in technologies or to fully embrace and put their imaginations to work using the new classroom technologies. Learning and creating take time. This is why, through experience, we have learned that the best technologies for K-12 STEM programs are responsive to all these real-life challenges. Simply said, classroom technologies today must be engaging, dependable, flexible, and robust at the outset. The adoption and learning curve for teachers using new technologies must be realistically planned, considered and structured. The chasm between schools is significant. I have been to schools that replicate the "Starship Enterprise" with amazing technologies for students to use, and I have seen schools that struggle to have even basic internet connectivity. The point is that 'one-size solutions' do not fit all situations.
With all the above points considered, this is why I have focused on classroom robots as a critical component of STEM programs. Why? The price range of robots for STEM use can fit almost any budget. Robots can serve in multiple ways. They are engaging. They provide immediate feedback. They can be used to deliver animated educational subject-matter presentations. They can be used to advance special educational needs, such as working with students diagnosed on the autism spectrum. They can teach new languages. They are tireless and non-judgmental. They can be easily transported from classroom to classroom. They create a sense of excitement and visible accomplishment. Most importantly, robots in education will open expansive horizons and insights into the future world that today's students will live in as adults: a world where they cohabitate and collaborate side-by-side with robots at work and in their homes.
Robots in schools also generate an opportunity for students to not only challenge themselves but engage with other schools and groups in robot competitions. There are few topics that generate active participation in the sciences for young women more than robots. Many of today’s newly created robotic companies are now being headed by young women.
Robots are not like most other K-12 classroom technologies that strive to deliver and teach subject-matter lessons. Robots serve as a powerful platform for developing a student's own innovation and creativity. Working and learning with robots challenges students to bring into play a variety of disciplines, such as programming, math, engineering, physics and geometry, needed to accomplish a robotic construct or a robot behavior animation. More recently, students of the arts have begun adopting robots to perform 'robot theater plays' and dance routines and to present stories or poems that they have authored. So here we see the convergence of science and the arts (STEAM) in full bloom.
It is clear that, when positioned correctly, robots are at their best when they are used as a teacher's assistant or a tool for learning, and not deemed a replacement for the teacher. Maybe that day will come with the advance of artificial intelligence and robot emotional-recognition capabilities, but for now, robots are captivating and engaging students worldwide, and for a reason. Today's robot platforms and technologies put every classroom in reach of the ultimate goal: making learning fun, productive and rewarding.

Thursday, October 3, 2019

Why Every NAO Robot Needs a ZORA.


My 6 Experienced-based Reasons Why NAO Robot Users Are Excited About ZORA Software and ZoraBots

With years of experience reviewing, evaluating and testing many, many robot models and types, and the behavior software associated with them, it is clear to me that Zora Software, designed to drive the NAO robot, is innovative, unique, comprehensive and best-in-class.
While we are excited about the Zora Software platform, especially when packaged with the NAO robot to form a ZoraBot, so are NAO Robot users!  

I believe it is best to let those users comment on why. I provide the following collected comments:
"I am a teacher in special education in an elementary school. I don't have the time or the technical know-how to do deep-dive robot behavior development. I tried Choregraphe but was never able to get productive. With the ZORA composer, I finally have a solution that is fast and easy."

“I move my NAO robot to many different locations for many different purposes. It was always a “WiFi” connectivity and stability nightmare. With the ZORA secure ‘hotspot’ feature my connection problems are gone. Turn on the robot with ZORA and I am ready to go! Thank you!”

"Please know that I totally enjoy the scope of the ZORA behavior library. The pre-programmed behaviors seem to cover all my needs. What was once a challenge, making the NAO robot do what I needed, is now always one click away. Keep those behaviors coming, please."

"I used to only let the robot stand or sit. With the ZORA movement controller, I can drive the robot around the room or stage; it brings a new level of engagement to the time spent with NAO."

“My NAO was sitting idle since the previous user moved to a new department. ZORA brought that NAO back to ‘active duty’! The little bot is an active part of the team now!”

The 6 REASONS:
While ZORA's features are extensive, I share my own 6 key favorite features, the ones that experience has taught me are why every NAO needs a ZORA. They are:
Turn on the NAO robot with ZORA and you are immediately ready to go with its own "HOTSPOT WiFi". No need to spend time locating and configuring the proper WiFi for robot connectivity. For those of you who use NAO in multiple locations, this is an ideal ease-of-use feature. And there is no need to seek permission for WiFi access in secure environments like schools and hospitals.

The 'drag and drop' behavior composer. Simply drag the desired robot action 'object icon' onto a timeline player palette and you can compose your own custom robot behaviors. No need to spend time connecting and testing often confusing and conflicting boxes and arrows. The in-depth library of ready-made behavior icons makes interacting with NAO and creating custom behaviors truly fun, efficient and transforming.
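Conceptually, a composer like this reduces to a timeline of behavior clips. ZORA's internals are proprietary, so every name in this little illustration is hypothetical:

from dataclasses import dataclass

@dataclass
class Clip:
    behavior: str       # library behavior id, e.g. "wave" or "sit_down"
    start_s: float      # position on the timeline, in seconds
    duration_s: float

timeline = [
    Clip("stand_up", 0.0, 4.0),
    Clip("wave", 4.0, 2.5),
    Clip("say:Welcome to class!", 6.5, 3.0),
]

def play(timeline):
    """Swap print for the platform's behavior runner to execute for real."""
    for clip in sorted(timeline, key=lambda c: c.start_s):
        print(f"t={clip.start_s:>4.1f}s  run {clip.behavior} "
              f"for {clip.duration_s}s")

play(timeline)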

A library of over 70 pre-programmed stories, dances, quizzes, and exercise routines provides a powerful inventory of robot behaviors for most every occasion. Always be ready with Zora.

Movement control. Zora's on-screen robot navigation controller brings real-life movement to your fingertips. Using this controller, you can easily have NAO walk on-stage, enter a classroom or move about a trade show booth. No longer is NAO limited to sitting or standing in one place. With ZORA, NAO becomes an animated and active participant in your event!

Language to language translations. Type in one language and have NAO speak the phrase in another. Let your imagination contemplate the possibilities.
Perhaps my favorite feature is the ZORA PowerPoint presenter. I use it all the time to demonstrate the power of ZORA. Create your PowerPoint and, in the 'notes' field for each slide, simply type what you want NAO to say. Turn on the NAO PowerPoint function and run the "slideshow". NAO will deliver the presentation and advance to the appropriate slide on cue! Use this ZORA feature for engaging business presentations and in-classroom educational presentations!
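For developers curious how the PowerPoint trick could be wired up by hand, here is a sketch assuming python-pptx for reading the deck and NAOqi's ALTextToSpeech for the voice. ZORA's own implementation is not public, so treat this purely as an illustration of the concept:

from pptx import Presentation      # pip install python-pptx
# from naoqi import ALProxy        # NAOqi SDK, if running against a real NAO

def slide_notes(path):
    """Yield (slide_number, notes_text) for every slide that has notes."""
    for number, slide in enumerate(Presentation(path).slides, start=1):
        if slide.has_notes_slide:
            text = slide.notes_slide.notes_text_frame.text.strip()
            if text:
                yield number, text

for number, text in slide_notes("lesson.pptx"):
    print(f"[advance to slide {number}]")
    # tts = ALProxy("ALTextToSpeech", "nao.local", 9559)
    # tts.say(text)                # the robot reads the notes aloud
    print(f"NAO says: {text}")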

While it makes our customers excited, pleased and happy, I like ZORA for one more critical reason. Now in its 9th version, it really works!

Such is the power of ZORA now working in pediatric hospitals, libraries, schools, elder care facilities, and retail! You can visit www.robotteca.com to learn more about ZORA software or about the ZoraBot combination.
NAO is a product of SoftBank Robotics. Zora is a product of ZoraBots.

Mike Radice is Chairman of the Technology Advisory for ChartaCloud ROBOTTECA.


Saturday, July 20, 2019

Why I say, “The NAO Robot More Than Ever!”



After almost five years of experience assessing multiple robots, I remain convinced that the NAO humanoid robot is in a class by itself. As a matter of fact, I believe that NAO has risen to a new plateau where it stands alone (no pun intended) and is now even more functional and versatile. Be assured that this is not just my own myopic opinion. With others, I have assessed many, many robots to see if they can surpass NAO. The NAO phenomenon continues, and its adoption and ever-expanding functionality are evidenced by the interactions that I have with users and developers worldwide.
Initially, NAO was a first mover in humanoid robots, especially for the university research community. Researchers adopted, applied and used NAO to explore the interaction between humans and robots, and it continues to be adopted for ever more leading-edge research. It is my belief that the continued evolution of NAO in the research community is now being driven not by its marvelous engineering and software functionality alone, but more by the growing discoveries about the nature of human-robot interaction (HRI). What do you want the robot to do, and how should it react? NAO fits supremely well into research models demanding a full-bodied construct, facilitated by its humanoid (human-like form and degrees of freedom of movement) capacities. I clearly see that NAO is now being driven ever more by its ability to incorporate, interact and respond using the emerging wave of integrated artificial intelligence (AI) software schema. I have seen NAO engage in AI-powered conversational dialogues to help people learn a new language, connect to IBM's WATSON, and even read postings on a whiteboard in a hospital nursing ward to aid in decision making.
The more exciting observation is that NAO and its global application developers have enabled NAO to assert its prominence by delivering practical, real-world use cases. It is a new era for NAO. The previous era was characterized by the dialogue that surrounded NAO early on: "OK, I see it. But what does it do?" This new era demonstrates that answers to that question can be found in innovative NAO behavior applications actually in use today: in autism behavior interventions, in assisted medical care in pediatric hospitals, in skilled nursing facilities delivering uplifting social engagement that helps reduce the trauma of isolation, in libraries advancing community digital literacy, in schools advancing STEM robotics programs, in classrooms engaging students in new subjects, even explaining art in an art retail showroom. A new era for sure. Now comes NAO as a powerful stand-alone presenter of custom PowerPoint educational presentations and AI-powered new-language instruction. These are examples of the growing number of answers to the question author Steven Wasic proffered in 2010 (SingularityHub, Jan 5, 2010): "What will be the killer app for NAO?" If I understand the question correctly, that question has now been answered. Many times over.
I was originally moved by the NAO intro video "The Future is NAO".
Great never gets old! Inspiring never gets tiring! "The Future is NAO" is more prophetic today than when it was introduced.
Underlying it all is, of course, the fact that NAO has proven its position as the true humanoid leader, even more so in the face of so many recent failed robot ventures. NAO endures as an industrial-grade product. NAO's caretaking development engineers at SoftBank Robotics, working in design, software, and hardware, continue to advance the NAO humanoid platform with ever-increasing and amazing enhancements; the platform is now in its sixth version.
While it is always interesting to see one robot deliver hamburgers and another clean windows, it is more rewarding, and I find it more meaningful, to see children on the autism spectrum positively engage and socially progress, to see children in hospitals feel less pain and anxiety, and to see the elderly in elder-care facilities exercise, laugh and want to take NAO for a walk! Having a mother write to me that NAO helped her rediscover her son, who was fast fading due to autism, and seeing a child let himself down out of his wheelchair, to the surprise of everyone, in order to have his picture taken with NAO are examples of why I say "NAO more than ever!"

Mike Radice is Chairman of the Technology Advisory for ChartaCloud ROBOTTECA. Comments: info@chartacloud.com