wiki:en/ProgrammingRobotsCourse/PepperApi


PV277 Programming Applications for Social Robots

Pepper API

programming in Choregraphe via Python

  • insert a single Python Script box
  • edit its contents via double click:
    • to __init__ add:
      self.tts = ALProxy('ALTextToSpeech')
      self.tts.setLanguage('Czech')
      
    • to onInput_onStart add:
      self.tts.say("Ahoj, jak se máš?")
      self.onStopped()
      
  • add Czech into Project Properties
  • save the project and run it on a virtual robot (a sketch of the complete box script follows this list)
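
Putting the two edits together, the edited Python Script box might look roughly like this (a sketch only; Choregraphe generates the surrounding GeneratedClass skeleton, and the onLoad/onUnload/onInput_onStop methods are left out here):

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)
        # text-to-speech proxy switched to Czech
        self.tts = ALProxy('ALTextToSpeech')
        self.tts.setLanguage('Czech')

    def onInput_onStart(self):
        self.tts.say("Ahoj, jak se máš?")
        # signal that the box has finished
        self.onStopped()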

speech input via Python

  • in __init__ add:
        try:
            self.speech = ALProxy("ALSpeechRecognition")
            self.speech.setLanguage('Czech')
        except:
            self.logger.info('Running on virtual robot')
            self.speech = None
    
  • process speech accordingly (random and time must be imported at the top of the box script):
    def get_answer(self, reactions):
        if self.speech is None:
            # virtual robot: no microphone, so pick a random answer
            return random.choice(reactions.keys())
        else:
            self.speech.setVocabulary(reactions.keys(), False)
            self.speech.subscribe("Test_ASR")
            self.logger.info('Speech recognition engine started')
            time.sleep(20)
            self.speech.unsubscribe("Test_ASR")
            # ALSpeechRecognition stores the recognized words and their
            # confidences in the ALMemory key "WordRecognized"
            word = ALProxy("ALMemory").getData("WordRecognized")[0]
            return word
    
    def onInput_onStart(self):
        self.tts.say("Ahoj, jak se máš?")
        reactions = {
            'dobře':  'to je super!',
            'špatně': 'doufám, že to brzo bude lepší',
            'nevím': 'tak to určitě nebude tak zlé',
        }
        answer = self.get_answer(reactions)
        react = reactions.get(answer)
        self.logger.info('answer={}, react={}'.format(answer, react))
        self.tts.say(react)
        self.onStopped()
    
  • see ALSpeechRecognition documentation

dialog

  • add a Set Language box set to Czech and add Czech to the project properties
  • right click the free area -> Create a new box -> Dialog...
  • in the Dialog -> Add Topic - choose Czech and Add to the package content as collaborative dialog (this allows the dialog to be started just by talking to the robot)
  • connect onStart -> Set Language -> Dialog
  • in Project files double click on dialog_czc.top and enter
    topic: ~dialog()
    language: czc
    
    concept:(ahoj) "ahoj robote"
    concept:(dobrý_den) ["dobrý den" "krásný den" "krásný den přeju"]
    
    u:(~ahoj) ahoj člověče
      \pau=1000\
      to máme dnes hezký den
    
    u:(~dobrý_den) ~dobrý_den
    
  • see QiChat - Introduction and QiChat - Syntax for details
  • beware that the "nice" function of recognizing any text via _* is unfortunately not available on the real robot - free speech recognition works only as a paid over-the-network service. The dialog must instead use predefined (possibly dynamic) concepts via _~conceptName, as sketched below.
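
A dynamic concept can be filled from Python at run time and then matched in the topic via _~conceptName. A minimal sketch, assuming the ALDialog.setConcept call available in recent NAOqi versions and a hypothetical dynamic concept named jmeno referenced from the topic:

# fill the dynamic concept "jmeno"; the dialog can then capture a matched
# value with _~jmeno (names and values here are illustrative only;
# "czc" is the Czech language code used by the topic files)
dialog = ALProxy("ALDialog")
dialog.setConcept("jmeno", "czc", ["Karel", "Pepa", "Marie"])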

adding animations

  • single animation - via Animation box
  • connect to dialog:
    • add rule to topic:
      u:(["můžeš zamávat" zamávej] {prosím}) ahojky $zamavej=1
      
    • add an output named zamavej (Bang, punctual) to the dialog box (right click -> Edit box)
    • add a Kisses animation box and connect it to the zamavej output
  • within the dialog:
    u:(~ahoj) ^start(animations/Stand/Gestures/Hey_1) ahoj člověče
      \pau=1000\
      to máme dnes hezký den 
      ^wait(animations/Stand/Gestures/Hey_1)
    

the animation shows only on a real robot; see the default list of animations
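
Animations can also be started directly from a Python box script; a minimal sketch using the standard ALAnimationPlayer service (like the gestures above, it has a visible effect only on a real robot):

# play a predefined gesture; run() blocks until the animation finishes
animation = ALProxy("ALAnimationPlayer")
animation.run("animations/Stand/Gestures/Hey_1")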

Pepper API II

Live examples

Using basic arithmetic

https://nlp.fi.muni.cz/projekty/pepper/videos/pocitani.mp4

See the dialog_counting app for details. Concepts for arithmetic operators and numbers are created. Not every number is defined separately; instead, decimal places and their combinations are, e.g.

concept:(tens) [20 30 40 50 60 70 80 90]
concept:(number_hundreds) ["{[1 "jedno"]} sto" "dvě stě" "[3 4] sta" "[5 6 7 8 "osum" 9] set" dvěsta dvěstě pěcet šescet devěcet]
concept:(number) ["~number_hundreds {~number_tens} {~digits}" "~number_tens {~digits}" ~digits]

This way, the robot can understand numbers up to 999. The concepts are used in the dialogue, passed into the counting function, and the resulting value is spoken back in the dialogue:

u:(["kolik [je jsou]" spočítej spočti] _"~number ~operator [~number ~number2]")
    $num_expression=$1
    ^call(ALDialogCounting.compute($num_expression))
    c1:(_* equals nan) $1 přece nejde spočítat!
    c1:(_* equals _*) $1 [je "by mohlo být"] {asi} {tak} $2

The computing function receives the recognized words as parameters and has to convert them to numbers before producing the result. The command parameter contains the recognized sentence, e.g. "dvacet dva plus třináct" ("twenty two plus thirteen"). A possible shape of convert_number is sketched after the code.

m = re.match('(.*) (' + '|'.join(OPERATOR_WORDS) + ') (.*)', command)
if m:
  number1 = self.convert_number(m.group(1))
  operator_word = m.group(2)
  operator = OPERATOR_WORDS[operator_word]
  number2 = self.convert_number(m.group(3))
  try:
    result = str(int(eval(str(number1) + operator + str(number2)))).replace('-','minus')
  except:
    result = 'nan'
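
The actual conversion is implemented in the dialog_counting app; a minimal sketch of what convert_number might do (the word table here is hypothetical and incomplete) is to map Czech number words to values and sum the hundreds/tens/units parts:

NUMBER_WORDS = {
    'nula': 0, 'jedna': 1, 'dva': 2, 'tři': 3, 'čtyři': 4,
    'pět': 5, 'šest': 6, 'sedm': 7, 'osm': 8, 'devět': 9,
    'deset': 10, 'dvacet': 20, 'třicet': 30, 'sto': 100,
}

def convert_number(self, words):
    # "dvacet dva" -> 20 + 2 = 22; unknown words count as 0
    return sum(NUMBER_WORDS.get(word, 0) for word in words.split())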

Display subtitles for speech recognition/generation

https://nlp.fi.muni.cz/projekty/pepper/videos/titulky.mp4

Access timetable API

https://nlp.fi.muni.cz/projekty/pepper/videos/kordisbot.mp4

See the kordisbot app for a detailed example. To enable recognition of all stop and street names, special concepts (Ulice-concept.top and Zastavky-concept.top) were defined with the lists of accepted names.

The timetable search runs as a service, see scripts/kordisbot_service.py. On a user question, the dialog just calls a specific service function, e.g.

u:("[řekni ukaž zobraz najdi] {mi} [odjezdy spoje] ze zastávky _~station_name na zastávku _~station_name")
    ^call(DialogKordisbot.say_answer2($1,$2,1))

The service functions say_answer1 and say_answer2 directly generate the robot's answer sentence.
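
A minimal sketch of how such a dialog-callable method can look (a hypothetical body, not the real kordisbot code; find_departure stands for whatever timetable lookup the service performs, and self.s is the same service wrapper used for the tablet call below):

def say_answer2(self, from_stop, to_stop, count):
    # hypothetical sketch: find_departure is an illustrative helper,
    # not part of the real kordisbot service
    departure = self.find_departure(from_stop, to_stop, count)
    answer = "Spoj ze zastávky {} na zastávku {} jede v {}.".format(
        from_stop, to_stop, departure)
    # the answer sentence is spoken directly by the service
    self.s.ALTextToSpeech.say(answer)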

The connection map is displayed on the tablet, using the usual map from idos.cz with the connection parameters:

self.s.ALTabletService.showWebview(
    "http://mapy.idos.cz/idsjmk/?f={}&t={}&date={}&time={}&submit=true".format(
        fromStop, toStop, date, time))

installing application to the robot

  • make a ssh key (replace <xlogin> with your login):
    ssh-keygen -m PEM -t ecdsa -N '' -f ~/.ssh/pepper_<xlogin>
    
  • copy your public key to the course directory:
    cp ~/.ssh/pepper_<xlogin>.pub /nlp/projekty/pepper/course/keys/
    
  • add host karel to your $HOME/.ssh/config:
    Host karel
        User nao
        HostName 192.168.88.10
        # IdentityFile is important for install_pkg.py
        IdentityFile ~/.ssh/pepper_<xlogin>
        StrictHostKeyChecking no
        PubkeyAuthentication yes
    
  • build the PKG package in Choregraphe
  • test logview
    ssh aurora
    /nlp/projekty/pepper/bin/logview
    
  • after your key has been approved, install the package to the robot
    ssh aurora
    /nlp/projekty/pepper/bin/install_pkg.py your_package.pkg
    

running/launching the application

  • if the application contains a behavior (behavior.xar), it needs to be launched. Behaviors can have two natures: interactive (used as a dialog) or solitary (used without a direct listener). Any behavior can be launched in one of three ways:
    1. specify the behavior's trigger conditions (works with both solitary and interactive) and/or its trigger sentences
    2. run it with run_app.py:
      /nlp/projekty/pepper/bin/run_app.py your_package[/path_to_behavior]
      
      call run_app.py -l to obtain a list of installed behaviors.
    3. call ALAutonomousLife.switchFocus or QiChat ^switchFocus
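      
      A minimal sketch of the third option from Python (the package and behavior names are placeholders; the robot address is the one from the ssh config above):
      
      from naoqi import ALProxy
      
      # switching Autonomous Life focus to a behavior launches it
      life = ALProxy("ALAutonomousLife", "192.168.88.10", 9559)
      life.switchFocus("your_package/behavior_1")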

using tablet

  • from a dialogue (see QiChat - pCall):
    u:(jak se můžu dostat na fakultu bez přijímaček?)
        Způsobů je celá řada. 
        ^pCall(ALTabletService.showWebview("https://www.fi.muni.cz/admission/guide.html.cs"))
        Všechno se dozvíš dnes na přednášce, od paní ze studijního
        ^start(animations/Stand/Gestures/ShowTablet_3)
        nebo na webu vvv fi muni cz v sekci pro uchazeče.
        ^wait(animations/Stand/Gestures/ShowTablet_3)
    
  • Using Pepper’s Tablet
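
The same tablet call can also be made from a Python box script rather than via ^pCall; a minimal sketch (ALTabletService exists only on a real Pepper, not on the virtual robot, and is accessed here through ALProxy like the other services above):

# show a web page on Pepper's tablet from a Python box script
tablet = ALProxy("ALTabletService")
tablet.showWebview("https://www.fi.muni.cz/admission/guide.html.cs")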

face characteristics

creating application outside Choregraphe

  • prepare your pepper directory unless you already have one
    mkdir $HOME/pepper
    
  • copy template directory
    cp -r /nlp/projekty/pepper/course/template $HOME/pepper/
    
  • rename the template to template_<xlogin> (replace <xlogin> with your login) or something else:
    mv $HOME/pepper/template $HOME/pepper/template_<xlogin>
    cd $HOME/pepper/template_<xlogin>
    
  • go through all files, rename the application where necessary
  • build the PKG package (the version number will be increased):
    cd $HOME/pepper/template_<xlogin>
    make pkg
    
  • and install it
    cd $HOME/pepper/template_<xlogin>
    make install
    
    During development this can be done in one command:
    make pkg install
    

creating own service

  • copy and rename template-service directory
    cp -r /nlp/projekty/pepper/course/template-service $HOME/pepper/
    mv $HOME/pepper/template-service $HOME/pepper/template-service_<xlogin>
    cd $HOME/pepper/template-service_<xlogin>
    
  • go through all files, rename the application where necessary
  • build the PKG and install it
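
The service itself is a plain qi service registered on the session, so that its methods can be called from QiChat via ^call. A minimal sketch (class and method names are illustrative; the actual template-service layout may differ):

import sys
import qi

class MyService(object):
    """A tiny service whose methods become callable as MyService.hello()."""

    def __init__(self, session):
        self.session = session

    def hello(self):
        # illustrative method: greet via text-to-speech
        self.session.service("ALTextToSpeech").say("Ahoj")

if __name__ == "__main__":
    app = qi.Application(sys.argv)
    app.start()
    app.session.registerService("MyService", MyService(app.session))
    app.run()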
