{"id":288,"date":"2012-01-21T19:58:28","date_gmt":"2012-01-21T22:58:28","guid":{"rendered":"http:\/\/blog.abratel.com.br\/?p=288"},"modified":"2012-01-21T19:59:09","modified_gmt":"2012-01-21T22:59:09","slug":"reconhecimento-de-voz-com-asterisk","status":"publish","type":"post","link":"https:\/\/blog.abratel.com.br\/?p=288","title":{"rendered":"Reconhecimento de voz com asterisk"},"content":{"rendered":"<p>A id\u00e9ia \u00e9 utilizar EAGI para controle do canal de entrada de \u00e1udio em conjunto com o File Descriptor, o Asterisk entrega o \u00e1udio em formato RAW diretamente no File Descriptor 3, ent\u00e3o podemos utilizar esta informa\u00e7\u00e3o da maneira que acharmos conveniente, para este caso a manipula\u00e7\u00e3o se torna muito pr\u00e1tica, o que me desprende totalmente das APP\u2019s prontas para grava\u00e7\u00f5es inseridas no Asterisk Ex. Record, nada melhor do que ser livre para voar, \u00e9 claro v\u00e1rias an\u00e1lises se tornam poss\u00edveis com isso e o leque de aplica\u00e7\u00f5es poss\u00edveis se tornam infinitas.<\/p>\n<p>Estou usando novamente o m\u00f3dulo audiolab para efetuar o encode do \u00e1udio em FLAC, caso exista alguma dificuldade para a instala\u00e7\u00e3o deste m\u00f3dulo poderei pensar em adaptar o c\u00f3digo para uso externo do sox ou flac.<\/p>\n<p>Como ele funciona?<\/p>\n<p>    Atende uma liga\u00e7\u00e3o<br \/>\n    O usu\u00e1rio tem no m\u00e1ximo 10 segundos para efetuar a fala<br \/>\n    Caso nao encontre atividade de voz encerra com timeout<br \/>\n    Estrat\u00e9gia para atividade de voz verdadeira para os seguintes valores  RMS > 15 e Pitch > 75<br \/>\n    Se atividade for encontrada o usu\u00e1rio poder\u00e1 falar por no m\u00e1ximo 10 segundos<br \/>\n    O script verifica blocos em tempo real com amostras de 1 em 1 segundo e verifica se a fala cessou<br \/>\n    Caso sim o script interrompe a grava\u00e7\u00e3o autom\u00e1ticamente e envia o que foi gravado para o google<br \/>\n    Caso n\u00e3o 
o script continua o seu curso at\u00e9 seu m\u00e1ximo de 10 segundos<br \/>\n    Apos encontrada a resposta da fala no google o script seta a vari\u00e1vel \u201cGoogleUtterance\u201d<\/p>\n<p>Instalacao:<\/p>\n<p>Dependencies:<\/p>\n<p>apt-get install python-matplotlib<br \/>\napt-get install python-numpy<br \/>\napt-get install python-scipy<br \/>\napt-get install python-dev python-setuptools libsndfile-dev<\/p>\n<p>Download and install audiolab from:<br \/>\nhttp:\/\/pypi.python.org\/pypi\/scikits.audiolab\/<\/p>\n<p>Example how use in dialplan from Asterisk:<br \/>\nExtensions.conf<\/p>\n<p>exten=>_11111111,1,Answer()<br \/>\nexten=>_11111111,n,eagi,pahh.py<br \/>\nexten=>_11111111,n,GotoIf($[${EXISTS(${GoogleUtterance})}]?hello:bye)<br \/>\nexten=>_11111111,n(hello),NoOP(You Said = ${GoogleUtterance})<br \/>\nexten=>_11111111,n(bye),Hangup()<\/p>\n<p>Fiz um reconhecimento com comparacao:<br \/>\nexten=>_1,1,Answer()<br \/>\nexten=>_1,n,eagi(pahh.py)<br \/>\nexten=>_1,n,GotoIf($[${EXISTS(${GoogleUtterance})}]?hello:bye)<br \/>\nexten=>_1,n(hello),NoOP(You Said = ${GoogleUtterance})<br \/>\nexten=>_1,n(hello),GotoIf($[&#8220;${GoogleUtterance}&#8221; = &#8220;9 0 8&#8221;]?acertei,s,1)<br \/>\nexten=>_1,n(hello),GotoIf($[&#8220;${GoogleUtterance}&#8221; = &#8220;9 0 5&#8221;]?acertei,s,100)<br \/>\nexten=>_1,n(hello),GotoIf($[&#8220;${GoogleUtterance}&#8221; = &#8220;9 1 3&#8221;]?acertei,s,200)<br \/>\nexten=>_1,n(bye),Hangup()<\/p>\n<p>; tratei a comparacao:<br \/>\n[acertei]<br \/>\nexten => s,1,Dial(DAHDI\/8,20)<br \/>\nexten => s,100,Dial(DAHDI\/5,20)<br \/>\nexten => s,200,Dial(DAHDI\/13,20)<\/p>\n<p>Criar o script com nome pahh.py e colocar na pasta \/var\/lib\/asterisk\/agi-bin<br \/>\nEfetuar o comando chmod +x \/var\/lib\/asterisk\/agi-bin\/pahh.py<\/p>\n<p>Script pahh.py abaixo:<br \/>\n#!\/usr\/bin\/python<br \/>\n#Copyright (c) 2012, Eng Eder de Souza<br \/>\n#Accessing the Google API for speech recognition With Asterisk!<br \/>\n#Eng Eder de 
Souza<br \/>\n#date 15\/01\/2012<br \/>\n#http:\/\/ederwander.wordpress.com\/2012\/01\/16\/google-speech-python-asterisk\/<br \/>\n#<br \/>\n# This program is free software, distributed under the terms of<br \/>\n# the GNU General Public License Version 2. See the COPYING file<br \/>\n# at the top of the source tree.<br \/>\n#<br \/>\n#Revision 0.2<br \/>\n#History:<br \/>\n#18\/01\/2012 bug fix in local variable declaration<br \/>\n#19\/01\/2012 support for old Python interpreters<br \/>\n#19\/01\/2012 removed matplotlib dependencies<br \/>\n#19\/01\/2012 suppression of DeprecationWarning and UserWarning warnings<\/p>\n<p>import warnings<br \/>\nwarnings.simplefilter(\"ignore\", DeprecationWarning)<br \/>\nwarnings.simplefilter(\"ignore\", UserWarning)<br \/>\nfrom scikits.audiolab import Format, Sndfile<br \/>\nfrom scipy.signal import firwin, lfilter<br \/>\nfrom tempfile import mkstemp<br \/>\nimport numpy as np<br \/>\nimport urllib2<br \/>\nimport math<br \/>\nimport sys<br \/>\nimport re<br \/>\nimport os<\/p>\n<p>#For Brazilian Portuguese speech recognition<br \/>\nLang=\"pt-BR\"<\/p>\n<p>#or, for English speech recognition<br \/>\n#Lang=\"en-US\"<\/p>\n<p>url = 'https:\/\/www.google.com\/speech-api\/v1\/recognize?xjerr=1&#038;client=chromium&#038;lang='+Lang<\/p>\n<p>silence=True<br \/>\nenv = {}<br \/>\nRawRate=8000<br \/>\nchunk=1024<\/p>\n<p>#http:\/\/en.wikipedia.org\/wiki\/Vocal_range<br \/>\n#Assuming a vocal range with fundamental frequency above 75 Hz<br \/>\nVocalRange = 75.0<\/p>\n<p>#cd, FileNameTmp    = mkstemp('TmpSpeechFile.flac')<\/p>\n<p>#Assuming an energy threshold above 15 (scaled RMS)<br \/>\nThreshold = 15<\/p>\n<p>#10 seconds x 8000 samples\/second x (16 bits \/ 8 bits\/byte) = 160000 bytes<br \/>\n#160000\/1024 = +\/- 157<br \/>\n#157*1024 = 160768<br \/>\nTimeoutSignal = 160768<\/p>\n<p>#then 1 second x 16000 bytes\/second = 16000<br \/>\n#16000\/1024 = 15.625, round to 16<br \/>\n#16*1024 = 16384<br \/>\nTimeout_NoSpeaking=16384<\/p>\n<p>#normalization for the RMS calculation<br \/>\nSHORT_NORMALIZE = (1.0\/32768.0)<\/p>\n<p>#blocks captured just before speech detection<br \/>\nLastBlock=''<br \/>\nLastLastBlock=''<\/p>\n<p>#Asterisk delivers the call audio on this file descriptor<br \/>\nFD=3<\/p>\n<p>#open the file descriptor<br \/>\nfile=os.fdopen(FD, 'rb')<\/p>\n<p>signal=0<\/p>\n<p>all=[]<\/p>\n<p>#read the AGI environment from stdin<br \/>\nwhile 1:<br \/>\n        line = sys.stdin.readline().strip()<\/p>\n<p>        if line == '':<br \/>\n                break<br \/>\n        key,data = line.split(':', 1)<br \/>\n        if key[:4] != 'agi_':<br \/>\n                sys.stderr.write(\"Did not work!\\n\")<br \/>\n                sys.stderr.flush()<br \/>\n                continue<br \/>\n        key = key.strip()<br \/>\n        data = data.strip()<br \/>\n        if key != '':<br \/>\n                env[key] = data<\/p>\n<p>for key in env.keys():<br \/>\n        sys.stderr.write(\" -- %s = %s\\n\" % (key, env[key]))<br \/>\n        sys.stderr.flush()<\/p>\n<p>def SendSpeech(File):<br \/>\n        flac=open(File,'rb').read()<br \/>\n        os.remove(File)<br \/>\n        header = {'Content-Type' : 'audio\/x-flac; rate=8000'}<br \/>\n        req = urllib2.Request(url, flac, header)<br \/>\n        data = urllib2.urlopen(req)<br \/>\n        find = re.findall('\"utterance\":(.*),', data.read())<br \/>\n        #utterance<br \/>\n        result = ''<br \/>\n        try:<br \/>\n                result = find[0].replace('\"', '')<br \/>\n        except:<br \/>\n                sys.stdout.write('EXEC NOOP \"speech not recognized ...\"\\n')<br \/>\n                sys.stdout.flush()<br \/>\n        if result:<br \/>\n                sys.stdout.write('SET VARIABLE GoogleUtterance \"%s\"\\n' % str(result))<br \/>\n                sys.stdout.flush()<br \/>\n                sys.stdout.write('EXEC NOOP \"%s\"\\n' % str(result))<br \/>\n                sys.stdout.flush()<\/p>\n<p>def Filter(samps):<br \/>\n        #low-pass FIR filter applied before the pitch estimate<br \/>\n        FC = 0.05\/(0.5*RawRate)<br \/>\n        N = 200<br \/>\n        a = 1<br \/>\n        b = firwin(N, cutoff=FC, window='hamming')<br \/>\n        return lfilter(b, a, samps)<\/p>\n<p>def Pitch(signal):<br \/>\n        #zero-crossing based frequency estimate<br \/>\n        if sys.version_info < (2, 6):<br \/>\n                #math.copysign is only available from Python 2.6 on<br \/>\n                crossing = []<br \/>\n                for s in signal:<br \/>\n                        if s >= 0:<br \/>\n                                crossing.append(1.0)<br \/>\n                        else:<br \/>\n                                crossing.append(-1.0)<br \/>\n        else:<br \/>\n                crossing = [math.copysign(1.0, s) for s in signal]<br \/>\n        index = np.nonzero(np.diff(crossing))<br \/>\n        index = np.array(index)[0].tolist()<br \/>\n        f0 = round(len(index)*RawRate\/(2.0*len(signal)))<br \/>\n        return f0<\/p>\n<p>def rms(shorts):<br \/>\n        count = len(shorts)\/2<br \/>\n        sum_squares = 0.0<br \/>\n        for sample in shorts:<br \/>\n                n = sample * SHORT_NORMALIZE<br \/>\n                sum_squares += n*n<br \/>\n        rms2 = math.pow(sum_squares\/count,0.5)<br \/>\n        return rms2 * 1000<\/p>\n<p>def speaking(data):<br \/>\n        return rms(data) > Threshold<\/p>\n<p>def VAD(SumFrequency, data2):<br \/>\n        AVGFrequency = SumFrequency\/(Timeout_NoSpeaking+1)<br \/>\n        if AVGFrequency > VocalRange\/2:<br \/>\n                return speaking(data2)<br \/>\n        else:<br \/>\n                return False<\/p>\n<p>def RecordSpeech(TimeoutSignal, LastBlock, LastLastBlock):<br \/>\n        #keep the two blocks captured just before detection<br \/>\n        for s in LastLastBlock:<br \/>\n                all.append(s)<br \/>\n        for s in LastBlock:<br \/>\n                all.append(s)<br \/>\n        signal=0<br \/>\n        while signal <= TimeoutSignal:<br \/>\n                RawSamps = file.read(Timeout_NoSpeaking)<br \/>\n                samps = np.fromstring(RawSamps, dtype=np.int16)<br \/>\n                for s in samps:<br \/>\n                        all.append(s)<br \/>\n                signal = signal + Timeout_NoSpeaking<br \/>\n                Speech=speaking(samps)<br \/>\n                if Speech:<br \/>\n                        sys.stdout.write('EXEC NOOP \"Speech Found ...\"\\n')<br \/>\n                        sys.stdout.flush()<br \/>\n                else:<br \/>\n                        sys.stdout.write('EXEC NOOP \"End of the Speech...\"\\n')<br \/>\n                        sys.stdout.flush()<br \/>\n                        signal=TimeoutSignal+1<\/p>\n<p>def PlayStream (params):<br \/>\n        sys.stderr.write('STREAM FILE %s \"\"\\n' % str(params))<br \/>\n        sys.stderr.flush()<br \/>\n        sys.stdout.write('STREAM FILE %s \"\"\\n' % str(params))<br \/>\n        sys.stdout.flush()<br \/>\n        result = sys.stdin.readline().strip()<\/p>\n<p>sys.stdout.write('EXEC NOOP \"Hello Waiting For Speech ...\"\\n')<br 
\/>\nsys.stdout.flush()<\/p>\n<p>PlayStream(\"beep\")<br \/>\nsys.stdout.flush()<\/p>\n<p>while silence:<br \/>\n        #input real-time raw audio from Asterisk<br \/>\n        RawSamps = file.read(chunk)<br \/>\n        samps = np.fromstring(RawSamps, dtype=np.int16)<br \/>\n        samps2=Filter(samps)<br \/>\n        Frequency=Pitch(samps2)<br \/>\n        rms_value=rms(samps)<br \/>\n        signal = signal + chunk<br \/>\n        if (rms_value > Threshold) and (Frequency > VocalRange):<br \/>\n                silence=False<br \/>\n                LastLastBlock=LastBlock<br \/>\n                LastBlock=samps<br \/>\n                sys.stdout.write('EXEC NOOP \"Speech Detected Recording...\"\\n')<br \/>\n                sys.stdout.flush()<br \/>\n        else:<br \/>\n                #keep the previous block so the start of speech is not lost<br \/>\n                LastBlock=samps<br \/>\n        if (signal > TimeoutSignal):<br \/>\n                sys.stdout.write('EXEC NOOP \"Time Out No Speech Detected ...\"\\n')<br \/>\n                sys.stdout.flush()<br \/>\n                sys.exit()<\/p>\n<p>RecordSpeech(TimeoutSignal, LastBlock, LastLastBlock)<\/p>\n<p>array = np.array(all)<\/p>\n<p>fmt         = Format('flac', 'pcm16')<br \/>\nnchannels   = 1<\/p>\n<p>cd, FileNameTmp    = mkstemp('TmpSpeechFile.flac')<\/p>\n<p># creating the .flac file<br \/>\nafile =  Sndfile(FileNameTmp, 'w', fmt, nchannels, RawRate)<\/p>\n<p>#writing the samples to the file<br \/>\nafile.write_frames(array)<\/p>\n<p>SendSpeech(FileNameTmp)<\/p>\n<p># END &#8212;&#8212;&#8212;- CUT HERE &#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/p>\n<p>Credits: Eng Eder Wander<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The idea is to use EAGI to control the incoming audio channel through a file descriptor: Asterisk delivers the audio in RAW format directly on file descriptor 3, so we can use that stream however we see fit; in this case&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=\/wp\/v2\/posts\/288"}],"collection":[{"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=288"}],"version-history":[{"count":0,"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=\/wp\/v2\/posts\/288\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=288"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=288"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.abratel.com.br\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=288"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}