
*****************************************Source*************************************************
    public void testPlaybackHeadPositionAfterInit() throws Exception {
        // constants for test
        final String TEST_NAME = "testPlaybackHeadPositionAfterInit";
        final int TEST_SR = 22050;
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STREAM;
        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
        //-------- initialization --------------
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, 
                AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT), TEST_MODE);
        //--------    test        --------------
        assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
        assertTrue(TEST_NAME, track.getPlaybackHeadPosition() == 0);
        //-------- tear down      --------------
        track.release();
    }

**********************************************************************************************
Source path:
frameworks\base\media\tests\mediaframeworktest\src\com\android\mediaframeworktest\functional\MediaAudioTrackTest.java
#######################Notes################################
    //Test case 1: getPlaybackHeadPosition() at 0 after initialization
    public void testPlaybackHeadPositionAfterInit() throws Exception {
        // constants for test
        final String TEST_NAME = "testPlaybackHeadPositionAfterInit";
        final int TEST_SR = 22050;
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STREAM;
        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
        //-------- initialization --------------
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, 
                AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT), TEST_MODE);
        //--------    test        --------------
        assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
// Today's topic is the getPlaybackHeadPosition() call below.
// Since play() has not been called before getPlaybackHeadPosition(), playback has not
// started yet, so the returned position should be 0.
        assertTrue(TEST_NAME, track.getPlaybackHeadPosition() == 0);
// +++++++++++++++++++++++++++++++getPlaybackHeadPosition+++++++++++++++++++++++++++++++++
    /** Returns the playback head position, expressed in frames. */
    public int getPlaybackHeadPosition() {
// Very direct:
// it calls native_get_position(), which maps to android_media_AudioTrack_get_position.
        return native_get_position();
// ++++++++++++++++++++++++++++++android_media_AudioTrack_get_position++++++++++++++++++++++++++++++++++
// Path: frameworks\base\core\jni\android_media_AudioTrack.cpp
static jint android_media_AudioTrack_get_position(JNIEnv *env,  jobject thiz) {
    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
                thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    uint32_t position = 0;
    if (lpTrack) {
        lpTrack->getPosition(&position);
        return (jint)position;
// ++++++++++++++++++++++++++++++++AudioTrack::getPosition++++++++++++++++++++++++++++++++
status_t AudioTrack::getPosition(uint32_t *position)
{
    if (position == 0) return BAD_VALUE;
    *position = mCblk->server;
    return NO_ERROR;
}
// server is a member of mCblk, which is an audio_track_cblk_t object.
// server is advanced in audio_track_cblk_t::stepServer.
// In addition, mCblk->server is assigned in AudioTrack::setPosition.
// +++++++++++++++++++++++++++++++++AudioTrack::setPosition+++++++++++++++++++++++++++++++
status_t AudioTrack::setPosition(uint32_t position)
{
    Mutex::Autolock _l(mCblk->lock);
    if (!stopped()) return INVALID_OPERATION;
    if (position > mCblk->user) return BAD_VALUE;
    mCblk->server = position;
    mCblk->flags |= CBLK_FORCEREADY_ON;
    return NO_ERROR;
}
// The following two functions in android_media_AudioTrack.cpp call AudioTrack::setPosition:
// android_media_AudioTrack_set_pos_update_period
// android_media_AudioTrack_set_position
// These interfaces are exposed to the Java layer.
// Imagine the use case: dragging the playback cursor in a player UI?
// ---------------------------------AudioTrack::setPosition-------------------------------
// +++++++++++++++++++++++++++++++++audio_track_cblk_t::stepServer+++++++++++++++++++++++++++++++
bool audio_track_cblk_t::stepServer(uint32_t frameCount)
{
    // the code below simulates lock-with-timeout
    // we MUST do this to protect the AudioFlinger server
    // as this lock is shared with the client.
    status_t err;
    err = lock.tryLock();
    if (err == -EBUSY) { // just wait a bit
        usleep(1000);
        err = lock.tryLock();
    }
    if (err != NO_ERROR) {
        // probably, the client just died.
        return false;
    }
    uint64_t s = this->server;
    s += frameCount;
    if (flags & CBLK_DIRECTION_MSK) {
        // Mark that we have read the first buffer so that next time stepUser() is called
        // we switch to normal obtainBuffer() timeout period
        if (bufferTimeoutMs == MAX_STARTUP_TIMEOUT_MS) {
            bufferTimeoutMs = MAX_STARTUP_TIMEOUT_MS - 1;
        }
        // It is possible that we receive a flush()
        // while the mixer is processing a block: in this case,
        // stepServer() is called after the flush() has reset u & s and
        // we have s > u
        if (s > this->user) {
            LOGW("stepServer occurred after track reset");
            s = this->user;
        }
    }
    if (s >= loopEnd) {
        LOGW_IF(s > loopEnd, "stepServer: s %llu > loopEnd %llu", s, loopEnd);
        s = loopStart;
        if (--loopCount == 0) {
            loopEnd = ULLONG_MAX;
            loopStart = ULLONG_MAX;
        }
    }
    if (s >= serverBase + this->frameCount) {
        serverBase += this->frameCount;
    }
    this->server = s;
    cv.signal();
    lock.unlock();
    return true;
}
// AudioFlinger::ThreadBase::TrackBase::step calls audio_track_cblk_t::stepServer.
// ++++++++++++++++++++++++++++++AudioFlinger::ThreadBase::TrackBase::step++++++++++++++++++++++++++++++++++
bool AudioFlinger::ThreadBase::TrackBase::step() {
    bool result;
    audio_track_cblk_t* cblk = this->cblk();
    result = cblk->stepServer(mFrameCount);
    if (!result) {
        LOGV("stepServer failed acquiring cblk mutex");
        mFlags |= STEPSERVER_FAILED;
    }
    return result;
}
// AudioFlinger::PlaybackThread::Track::getNextBuffer and
// AudioFlinger::RecordThread::RecordTrack::getNextBuffer both call
// AudioFlinger::ThreadBase::TrackBase::step.
// Here we only look at AudioFlinger::PlaybackThread::Track::getNextBuffer.
// +++++++++++++++++++++++++++++AudioFlinger::PlaybackThread::Track::getNextBuffer+++++++++++++++++++++++++++++++++++
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
{
    audio_track_cblk_t* cblk = this->cblk();
    uint32_t framesReady;
    uint32_t framesReq = buffer->frameCount;

    // Check if last stepServer failed, try to step now
    if (mFlags & TrackBase::STEPSERVER_FAILED) {
        if (!step()) goto getNextBuffer_exit;
        LOGV("stepServer recovered");
        mFlags &= ~TrackBase::STEPSERVER_FAILED;
    }

    framesReady = cblk->framesReady();
    if (LIKELY(framesReady)) {
        uint64_t s = cblk->server;
        uint64_t bufferEnd = cblk->serverBase + cblk->frameCount;

        bufferEnd = (cblk->loopEnd < bufferEnd) ? cblk->loopEnd : bufferEnd;
        if (framesReq > framesReady) {
            framesReq = framesReady;
        }
        if (s + framesReq > bufferEnd) {
            framesReq = bufferEnd - s;
        }

        buffer->raw = getBuffer(s, framesReq);
        if (buffer->raw == 0) goto getNextBuffer_exit;

        buffer->frameCount = framesReq;
        return NO_ERROR;
    }

getNextBuffer_exit:
    buffer->raw = 0;
    buffer->frameCount = 0;
    LOGV("getNextBuffer() no more data for track %d on thread %p", mName, mThread.unsafe_get());
    return NOT_ENOUGH_DATA;
}
// We already walked through this function when examining framesReady, so no further detail here.
// -----------------------------AudioFlinger::PlaybackThread::Track::getNextBuffer-----------------------------------
// ------------------------------AudioFlinger::ThreadBase::TrackBase::step----------------------------------
// ---------------------------------audio_track_cblk_t::stepServer-------------------------------
// --------------------------------AudioTrack::getPosition--------------------------------
    } else {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for getPosition()");
        return AUDIOTRACK_ERROR;
    }
}
// ------------------------------android_media_AudioTrack_get_position----------------------------------
// -------------------------------getPlaybackHeadPosition---------------------------------
        //-------- tear down      --------------
        track.release();
    }

###########################################################
To "get position" is simply to read the value of server in the audio_track_cblk_t object.
The server position changes on "set position" or on "step server":
applications call set position, and the server is stepped when the mixer gets the next buffer.