
I am completely new to pocketsphinx. I followed the demo application's integration as described in

Offline Speech Recognition on Android with PocketSphinx

After integrating pocketsphinx into my application as a library it works, but the output is not as accurate as expected. It even returns words from the supplied dictionary that were never spoken.

I would like to understand how to improve the accuracy of word detection. I initially used a .lm file; I then dropped it and simply created a .jsgf text file and used that instead, but the accuracy still did not improve. After switching to the .jsgf file, do I need to compile it, or is it enough to simply copy the .jsgf file into the assets folder?

Building pocketsphinx-android is described at http://cmusphinx.sourceforge.net/wiki/tutorialandroid. I did not do that; I just integrated it as a library project.

Code:

import android.app.Activity;
import android.content.Intent;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;

import java.io.File;
import java.io.IOException;
import java.util.HashMap;

import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.Hypothesis;
import edu.cmu.pocketsphinx.RecognitionListener;
import edu.cmu.pocketsphinx.SpeechRecognizer;

import static edu.cmu.pocketsphinx.SpeechRecognizerSetup.defaultSetup;

public class SphinxSpeechRecognizerActivity extends Activity implements RecognitionListener {

    private static String TAG = SphinxSpeechRecognizerActivity.class.getSimpleName();

    private SpeechRecognizer mRecognizer;
    private HashMap<String, Integer> mCaptions;

//    private static final String KWS_SEARCH = "wakeup";
//    private static final String KEYPHRASE = "phone";
    private static final String COMMANDS = "command";
    private boolean mErrorFlag = false;
    private static boolean isRecognizerInProgress = false;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.fragment);
        initViews();


    }

    @Override
    public void onResume() {
        super.onResume();
    }

    @Override
    public void onPause() {
        super.onPause();
    }


    @Override
    public void onDestroy() {
        super.onDestroy();
        Log.d(TAG, "** onDestroy **");
        stopRecognizer(true);

    }

    @Override
    public void onBackPressed() {
        super.onBackPressed();
        stopRecognizer(true);
    }

    private void initViews() {
        final ImageView img_close = (ImageView)findViewById(R.id.ttsClose);
        final ImageView img_voice_view = (ImageView)findViewById(R.id.tts_voice_view);
        final ImageView img_info = (ImageView)findViewById(R.id.ttsInfo);

        img_close.setOnClickListener(mOnClickListener);
        img_info.setOnClickListener(mOnClickListener);
        img_voice_view.setOnClickListener(mOnClickListener);
    }

    // Set press indicator
    private View.OnClickListener mOnClickListener = new View.OnClickListener() {
        @Override
        public void onClick(View v) {

            switch (v.getId()){
                case R.id.ttsInfo:
                    break;

                case R.id.tts_voice_view:
                    if (!isRecognizerInProgress) {
                        isRecognizerInProgress = true;
                        setupRecognizerController();
                    } else {
                        Log.d(TAG, "Sphinx recognizer is already running");
                    }
                    break;

                case R.id.ttsClose:
                default:
                    // Call back event
                    onBackPressed();
                    break;
            }

        }
    };

    @Override
    public void onBeginningOfSpeech() {
        Log.d(TAG, "** onBeginningOfSpeech **" + mErrorFlag);
    }

    @Override
    public void onEndOfSpeech() {
        Log.d(TAG, "** onEndOfSpeech **");
        mRecognizer.stop();
    }

    @Override
    public void onPartialResult(Hypothesis hypothesis) {
        Log.d(TAG, "** onPartialResult **");

        if (hypothesis == null)
            return;
        mRecognizer.stop();
    }

    private void switchSearch(String languageModelSearch) {
        mRecognizer.stop();
        mRecognizer.startListening(languageModelSearch, 2000);
    }


    @Override
    public void onResult(Hypothesis hypothesis) {
        hideListeningBackground();
        stopRecognizer(true);

        if(hypothesis != null){
            final String recognizedCommand = hypothesis.getHypstr();
            Log.d(TAG,"Recognized Text: = " + recognizedCommand + " Score: " + hypothesis.getBestScore());

            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    if(!recognizedCommand.equals("")) {
                        if (recognizedCommand.equalsIgnoreCase(<given_command>)) {
                            Intent speech_converted_intent = new Intent(SphinxSpeechRecognizerActivity.this, Subclass.class);
                            startActivity(speech_converted_intent);
                            finish();
                        }
                    } else {
                        showErrorMsg(Constants.MODE_SUCCESS);
                    }
                }
            });

        } else {
            showErrorMsg(Constants.MODE_DEFAULT);
        }
    }

    @Override
    public void onError(Exception e) {
        Log.e(TAG, "** onError **");
        showErrorMsg(Constants.MODE_FAILED);
    }

    @Override
    public void onTimeout() {
        Log.i(TAG, "** onTimeout **");
        mRecognizer.stop();
    }


    private void setupRecognizerController() {

        new AsyncTask<Void, Void, Exception>() {
            @Override
            protected Exception doInBackground(Void... params) {
                try {
                    Assets assets = new Assets(SphinxSpeechRecognizerActivity.this);
                    File assetDir = assets.syncAssets();
                    setupRecognizer(assetDir);
                } catch (IOException e) {
                    return e;
                }
                return null;
            }

            @Override
            protected void onPostExecute(Exception result) {
                if(result == null){
                    Log.d(TAG, "Sphinx Recognizer: Start");
                    mRecognizer.startListening(COMMANDS, 3000);
                }
                displayListeningBackground();

            }
        }.execute();
    }

    private void setupRecognizer(File assetsDir) throws IOException {
        mRecognizer = defaultSetup()
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
                .setKeywordThreshold(1e-10f)
                .setFloat("-beam", 1e-30f)
                .setBoolean("-allphone_ci", true)
                .getRecognizer();
        mRecognizer.addListener(this);

        // Register the JSGF grammar from the synced assets as the "command" search.
        File grammarFile = new File(assetsDir, "command.gram");
        mRecognizer.addGrammarSearch(COMMANDS, grammarFile);
 //       reset();
    }


    private void reset(){
        mRecognizer.stop();
   //     mRecognizer.startListening(COMMANDS);
    }

    private void stopRecognizer(boolean flag) {
        if(flag && mRecognizer != null){
            mRecognizer.cancel();
            mRecognizer.shutdown();
            isRecognizerInProgress = false;
        }
        hideListeningBackground();
    }

    String mShowText = "ERROR";
    private void showErrorMsg(final int error_type) {

        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                switch (error_type) {
                    case Constants.MODE_FAILED:
                        // ...
                        break;
                    case Constants.MODE_SUCCESS:
                        //...
                        break;
                    case Constants.MODE_DEFAULT:
                    default:
                        // ...
                        break;
                }
            }
        });
    }
}

My grammar file:

#JSGF V1.0;

grammar commands;

public <commands> = (<label> | <mainMenu> | <subMenu> | <track> )+;

<mainMenu> = ( music
         | phone
         | navigation 
         | vehicle 
         | homepage
         | shortcut
         );

<label> =  ( back
                  | usb ( one | two )
                  | contact
                  | sms
                  | message
                  | dial
                  | ( homepage ( one | two | three ))
                  | ( shortcut ( one | two | three ))
                  );

<subMenu> = ( back
            | ( next | previous ) station
            | ( fm ( one | two ))
            | ( dr ( one | two ))
            | am
            | listen
            | play
            | ( next | previous )
            | search [ artists | playlists | songs | albums ]
            | call
            | received
            | missed
            | dial
            | address
            );

<track> = ( one
             | two
             | three
             | four
             | five
             | six
             | seven
             | eight
             | nine
             | ten
             | eleven
             | twelve
             | thirteen
             | fourteen
             | fifteen
             | sixteen
             | seventeen
             | eighteen
             | nineteen
             | twenty
             | (twenty ( one
                       | two
                       | three
                       | four
                       | five
                       | six
                       | seven
                       | eight
                       | nine
                       )
                )
             | thirty
             | (thirty ( one
                       | two
                       | three
                       | four
                       | five
                       | six
                       | seven
                       | eight
                       | nine
                       )
                )
             | forty
             | (forty ( one
                      | two
                      | three
                      | four
                      | five
                      | six
                      | seven
                      | eight
                      | nine
                      )
                )
             | fifty
             | (fifty ( one
                      | two
                      | three
                      | four
                      | five
                      | six
                      | seven
                      | eight
                      | nine
                      )
                )
             | sixty
             | (sixty ( one
                      | two
                      | three
                      | four
                      | five
                      | six
                      | seven
                      | eight
                      | nine
                      )
                )
             | seventy
             | (seventy ( one
                        | two
                        | three
                        | four
                        | five
                        | six
                        | seven
                        | eight
                        | nine
                        )
                )
             | eighty
             | (eighty   ( one
                         | two
                         | three
                         | four
                         | five
                         | six
                         | seven
                         | eight
                         | nine
                         )
                )
             | ninety
             | (ninety ( one
                       | two
                       | three
                       | four
                       | five
                       | six
                       | seven
                       | eight
                       | nine
                       )
               )
            );

My log shows:

I/cmusphinx: INFO: pocketsphinx.c(993): Writing raw audio log file: /storage/emulated/0/Android/data/com.techmahindra.rngo/files/sync/000000000.raw

1 Answer


Accuracy debugging is a complex process; there can be many different problems: noise in the data, slow CPU speed causing recording delays, or incorrect channel estimation.

To debug accuracy you first need to collect data. Uncomment the call to setRawLogDir in the demo and look in logcat to see where the raw audio files are stored on the sdcard. Inspect those files to make sure the audio is recorded correctly. Share the data together with the logs and your model to get help with accuracy. Make sure the data is recorded properly: no noise, correct format, and speech without an accent.
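
For reference, in the question's setupRecognizer() the raw-audio logging would be enabled roughly like this (a minimal sketch; logging into the synced asset directory is just an assumption, any writable directory such as getExternalFilesDir(null) works):

mRecognizer = defaultSetup()
        .setAcousticModel(new File(assetsDir, "en-us-ptm"))
        .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
        // Write one .raw file per utterance so the recordings can be checked later.
        .setRawLogDir(assetsDir)
        .getRecognizer();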

If you want to listen continuously and ignore words you are not interested in, you need to use keyword spotting mode rather than a language model or a grammar.
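
A minimal sketch of that, assuming a keyword list file named commands.list placed next to the other assets (one phrase per line, optionally followed by a /threshold/):

// Keyword spotting: the recognizer listens continuously and only reports
// hits on the listed phrases; everything else is ignored.
private static final String KWS_SEARCH = "commands_kws";

private void setupKeywordSearch(File assetsDir) {
    // commands.list is an assumed file name, with lines such as:
    //   next station /1e-20/
    //   previous station /1e-20/
    File keywordFile = new File(assetsDir, "commands.list");
    mRecognizer.addKeywordSearch(KWS_SEARCH, keywordFile);
    mRecognizer.startListening(KWS_SEARCH);   // no timeout: keep listening
}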

Answered 2016-01-07T12:40:27.370