Real-time audio processing in Android


I'm trying to figure out how to write an app that can decode audio Morse code on the fly. I found this document which explains how to record audio from the microphone in Android. What I'd like to know is whether it's possible to access the raw input from the microphone, or whether it has to be written to and read from a file.

Thanks.


If you use MediaRecorder (the example above), it will save compressed audio to a file.

If you use AudioRecord, you can get audio samples directly.

Yes, what you want to do should be possible.
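To make the contrast concrete, here is a minimal sketch of the MediaRecorder path; the output path, container format, and encoder below are illustrative choices, not something this answer prescribes. Compressed audio goes to a file, so you never see the raw samples:

import android.content.Context;
import android.media.MediaRecorder;
import java.io.IOException;

// Minimal sketch of the MediaRecorder-to-file path. Output path, container
// format, and encoder are illustrative; the point is that the audio is
// compressed and written to a file rather than handed to you as samples.
static MediaRecorder startFileRecording(Context context) throws IOException {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    recorder.setOutputFile(context.getFilesDir() + "/recording.3gp");
    recorder.prepare();
    recorder.start(); // later: recorder.stop(); recorder.release();
    return recorder;
}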


There is a sensing framework from the MIT Media Lab called funf: http://code.google.com/p/funf-open-sensing-framework/
They have already created classes for audio input and some analysis (FFT and the like); saving to files and uploading are also implemented as far as I've seen, and they handle most of the sensors available on the phone. You can also take inspiration from the code they wrote, which I think is pretty good.


Using AudioRecord is overkill. Just check MediaRecorder.getMaxAmplitude() every 1000 milliseconds for loud noises versus silence.
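A minimal sketch of that polling approach; it assumes a MediaRecorder that has already been prepared and started elsewhere, and the class name and threshold value are illustrative. Note that getMaxAmplitude() reports the maximum since the previous call, so the very first call returns 0:

import android.media.MediaRecorder;
import android.os.Handler;
import android.os.Looper;

// Illustrative sketch: poll MediaRecorder.getMaxAmplitude() once per second
// and compare against a silence threshold. Assumes "recorder" was prepared
// and started elsewhere; the threshold needs tuning per device/environment.
public class AmplitudePoller {
    private static final int POLL_INTERVAL_MS = 1000;
    private static final int SILENCE_THRESHOLD = 2000; // tune for your setup

    private final Handler handler = new Handler(Looper.getMainLooper());
    private final MediaRecorder recorder;

    public AmplitudePoller(MediaRecorder recorder) {
        this.recorder = recorder;
    }

    public void start() {
        handler.post(new Runnable() {
            @Override
            public void run() {
                int amplitude = recorder.getMaxAmplitude(); // max since last call
                boolean loud = amplitude > SILENCE_THRESHOLD;
                // ... react to loud vs. silence here ...
                handler.postDelayed(this, POLL_INTERVAL_MS);
            }
        });
    }
}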

If you really need to analyze the waveform, then yes you need AudioRecord. Get the raw data and calculate something like the root mean squared of the part of the raw bytes you are concerned with to get a sense of the volume.
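For example, a small helper for the RMS calculation over one block of 16-bit samples returned by AudioRecord.read() (the method name and signature are illustrative):

// Root mean square of a block of 16-bit PCM samples, as a rough volume
// measure; "count" is the number of valid samples returned by read().
static double rms(short[] samples, int count) {
    if (count <= 0) return 0;
    double sumSquares = 0;
    for (int i = 0; i < count; i++) {
        sumSquares += (double) samples[i] * samples[i];
    }
    return Math.sqrt(sumSquares / count);
}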

But why do all that when MediaRecorder.getMaxAmplitude() is so much easier to use?

See my code from this answer: this question


I have found a way to do it. Basically, you need to run a new thread in which you continuously call myAndroidRecord.read(). After each call, loop over the entries in the buffer and you can see the raw values in real time, one by one. Below is a code sample of the main activity:

package com.example.mainproject;

import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {

    private AudioManager myAudioManager;
    private static final int REQUEST_RECORD_AUDIO_PERMISSION = 200;
    // Requesting permission to RECORD_AUDIO
    private boolean permissionToRecordAccepted = false;
    private String[] permissions = {Manifest.permission.RECORD_AUDIO};

    Thread mThread;

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        switch (requestCode) {
            case REQUEST_RECORD_AUDIO_PERMISSION:
                // Guard against an empty result array (the request can be interrupted).
                permissionToRecordAccepted = grantResults.length > 0
                        && grantResults[0] == PackageManager.PERMISSION_GRANTED;
                break;
        }
        if (!permissionToRecordAccepted) finish();
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
            if (ActivityCompat.shouldShowRequestPermissionRationale(this,
                    Manifest.permission.RECORD_AUDIO)) {
                // Show an explanation to the user *asynchronously* -- don't block
                // this thread waiting for the user's response! After the user
                // sees the explanation, try again to request the permission.
                ActivityCompat.requestPermissions(this, permissions,
                        REQUEST_RECORD_AUDIO_PERMISSION);
                return;
            } else {
                // No explanation needed; request the permission. The request code
                // must match the one checked in onRequestPermissionsResult().
                ActivityCompat.requestPermissions(this, permissions,
                        REQUEST_RECORD_AUDIO_PERMISSION);
            }
        } else {

            myAudioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
            // "true" if the device can provide unprocessed (raw) microphone input;
            // may be null on devices that do not report this property.
            String x = myAudioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED);

            runOnUiThread(()->{
                TextView tvAccXValue = findViewById(R.id.raw_available);
                tvAccXValue.setText(x);
            });

            mThread = new Thread(new Runnable() {
                @Override
                public void run() {
                    record();
                }
            });
            mThread.start();
        }
    }

    private void record() {
        int audioSource = MediaRecorder.AudioSource.MIC;
        int samplingRate = 11025;
        int channelConfig = AudioFormat.CHANNEL_IN_DEFAULT;
        int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
        // getMinBufferSize() returns a size in bytes; with 16-bit PCM each
        // sample takes two bytes, so bufferSize/4 shorts is half the buffer.
        int bufferSize = AudioRecord.getMinBufferSize(samplingRate, channelConfig, audioFormat);

        short[] buffer = new short[bufferSize / 4];
        AudioRecord myRecord = new AudioRecord(audioSource, samplingRate, channelConfig, audioFormat, bufferSize);

        myRecord.startRecording();

        int noAllRead = 0;
        while (true) {
            // read() blocks until the requested number of samples has been captured.
            int bufferResults = myRecord.read(buffer, 0, bufferSize / 4);
            noAllRead += bufferResults;
            int ii = noAllRead;
            for (int i = 0; i < bufferResults; i++) {
                int val = buffer[i];
                // Posting one UI update per sample floods the main thread; it is
                // done here only to show the raw values arriving in real time.
                runOnUiThread(() -> {
                    TextView raw_value = findViewById(R.id.sensor_value);
                    raw_value.setText(String.valueOf(val));
                    TextView no_read = findViewById(R.id.no_read_val);
                    no_read.setText(String.valueOf(ii));
                });
            }
        }
    }
}

This is just a demonstration; in a real app you will need to think a bit more about how and when to stop the running thread. This example just runs indefinitely until you exit the app (see the sketch after these notes).

Code concerning the UI updates, such as TextView raw_value = findViewById(R.id.sensor_value);, is specific to this example; you should define your own.

The lines int ii = noAllRead; and int val = buffer[i]; are necessary because Java does not let you capture variables that are not effectively final in lambda expressions.
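On the stopping concern mentioned above, one possible sketch (not part of the original example) is to guard the loop with a volatile flag and release the recorder when the loop exits:

// Sketch: make the loop in record() stoppable. The AudioRecord setup is the
// same as in the example above and is elided here.
private volatile boolean keepRecording = true;

private void record() {
    // ... same AudioRecord setup as above ...
    myRecord.startRecording();
    while (keepRecording) {
        int bufferResults = myRecord.read(buffer, 0, bufferSize / 4);
        // ... process the samples as before ...
    }
    myRecord.stop();    // stop capturing
    myRecord.release(); // free the native recorder resources
}

@Override
protected void onDestroy() {
    super.onDestroy();
    keepRecording = false; // the loop finishes its current pass, then the thread ends
}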


It looks like it has to be dumped to a file first.

If you peek at the android.media.AudioRecord source, you'll see that the native audio data byte buffers are not exposed to the public API.

In my experience, having built an audio synthesizer for Android, it's hard to achieve real-time performance and maintain audio fidelity. A Morse Code 'translator' is certainly doable though, and sounds like a fun little project. Good Luck!
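As a pointer for the decoding side (my own sketch, not taken from any answer here): a common way to test whether a Morse tone is present in a block of raw samples is the Goertzel algorithm, which measures signal energy at a single target frequency. The tone frequency and sample rate are assumptions you would match to your signal:

// Goertzel power at one target frequency over a block of 16-bit samples.
// Comparing the result against a threshold per block gives an on/off stream
// from which dot and dash timings can be recovered.
static double goertzelPower(short[] samples, int count, double toneHz, double sampleRate) {
    double coeff = 2 * Math.cos(2 * Math.PI * toneHz / sampleRate);
    double s1 = 0, s2 = 0;
    for (int i = 0; i < count; i++) {
        double s0 = samples[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}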
