Android AudioFlinger (二) — Device Management

Date: 2025/7/11 23:20:13  Source: https://blog.csdn.net/ljx1400052550/article/details/146101587

Following the previous article, Android AudioFlinger (一) — 初识 Android AudioFlinger, we continue analyzing Android audio and look at how AudioFlinger implements device management.

Continuing the Analysis

If we divide things up by responsibility, AudioFlinger is only the executor of policy; the policy maker is actually AudioPolicyService. Decisions such as which device a given stream type maps to, or when an interface should be opened, are all made by AudioPolicyService.

The Android audio system currently supports three main audio device interfaces:

static const char * const audio_interfaces[] = {
    AUDIO_HARDWARE_MODULE_ID_PRIMARY, // primary audio devices
    AUDIO_HARDWARE_MODULE_ID_A2DP,    // Bluetooth A2DP audio devices
    AUDIO_HARDWARE_MODULE_ID_USB,     // USB audio devices
};

Each of these three interfaces corresponds to its own .so library. How do we know which library needs to be loaded? Let's follow the code.

AudioHwDevice* AudioFlinger::findSuitableHwDev_l(audio_module_handle_t module,
                                                 audio_devices_t devices)
{
    // if module is 0, the request comes from an old policy manager and we should load
    // well known modules
    if (module == 0) {
        ALOGW("findSuitableHwDev_l() loading well know audio hw modules");
        for (size_t i = 0; i < arraysize(audio_interfaces); i++) {
            loadHwModule_l(audio_interfaces[i]);
        }
        ...
    }
    return NULL;
}

Here each entry of audio_interfaces is passed into loadHwModule_l; the loop calls it once for each of the three interface names.

audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    sp<DeviceHalInterface> dev;
    int rc = mDevicesFactoryHal->openDevice(name, &dev);
    ...
    audio_module_handle_t handle = (audio_module_handle_t) nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE);
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));

    ALOGI("loadHwModule() Loaded %s audio interface, handle %d", name, handle);
    return handle;
}

Two things matter in this function. openDevice opens an audio device based on the name passed down from the loop, and mAudioHwDevs.add then stores the device in a keyed container. Clearly we should first look at how the device gets opened.
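To make the caching behavior concrete, here is a toy registry (not the real AudioFlinger code) that mirrors what loadHwModule_l does: return the existing handle if the module name is already known, otherwise "open" it and record a fresh unique handle.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of loadHwModule_l's caching: check for an existing
// module of the same name first, only then open and assign a new handle.
class ModuleRegistry {
public:
    int loadModule(const std::string& name) {
        auto it = mHandles.find(name);
        if (it != mHandles.end()) {
            return it->second;      // already loaded: reuse its handle
        }
        int handle = mNextId++;     // stands in for nextUniqueId()
        mHandles[name] = handle;
        return handle;
    }
private:
    std::map<std::string, int> mHandles;
    int mNextId = 1;
};
```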

// /frameworks/av/media/libaudiohal/DevicesFactoryHalInterface.cpp
sp<DevicesFactoryHalInterface> DevicesFactoryHalInterface::create() {
    if (hardware::audio::V4_0::IDevicesFactory::getService() != nullptr) {
        return new V4_0::DevicesFactoryHalHybrid();
    }
    if (hardware::audio::V2_0::IDevicesFactory::getService() != nullptr) {
        return new DevicesFactoryHalHybrid();
    }
    return nullptr;
}

This is how mDevicesFactoryHal is created: it is simply an instance of DevicesFactoryHalHybrid.

status_t DevicesFactoryHalHybrid::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    if (mHidlFactory != 0 && strcmp(AUDIO_HARDWARE_MODULE_ID_A2DP, name) != 0 &&
            strcmp(AUDIO_HARDWARE_MODULE_ID_HEARING_AID, name) != 0) {
        return mHidlFactory->openDevice(name, device);
    }
    return mLocalFactory->openDevice(name, device);
}

A check is made here: if the module is neither A2DP nor hearing aid, mHidlFactory is used. We will follow that path, which is the one both primary and usb take.
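The routing rule can be condensed into a few lines. The sketch below is illustrative only; the plain strings stand in for the AUDIO_HARDWARE_MODULE_ID_* constants, and the return value names the factory chosen.

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Sketch of DevicesFactoryHalHybrid::openDevice's routing: A2DP and
// hearing-aid modules go to the local (legacy) factory; everything else
// goes through the HIDL factory when one is available.
std::string pickFactory(bool hidlAvailable, const char* name) {
    if (hidlAvailable &&
            strcmp("a2dp", name) != 0 &&
            strcmp("hearing_aid", name) != 0) {
        return "hidl";
    }
    return "local";
}
```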

// /frameworks/av/media/libaudiohal/4.0/DevicesFactoryHalHidl.cpp
status_t DevicesFactoryHalHidl::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    if (mDevicesFactory == 0) return NO_INIT;
    Result retval = Result::NOT_INITIALIZED;
    Return<void> ret = mDevicesFactory->openDevice(name,
            [&](Result r, const sp<IDevice>& result) {
                retval = r;
                if (retval == Result::OK) {
                    *device = new DeviceHalHidl(result);
                }
            });
    ...
    return FAILED_TRANSACTION;
}

Not much happens here either: the call is forwarded to mDevicesFactory->openDevice along with the name and a callback written as a lambda expression, a style worth being familiar with.
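The capture-by-reference callback idiom used above is easy to reproduce in plain C++. The following toy version (names, types, and the value 42 are all made up) shows how the caller pulls both a status and a payload out of the lambda, just like the [&] capture in DevicesFactoryHalHidl::openDevice:

```cpp
#include <cassert>
#include <functional>

enum class Result { OK, NOT_INITIALIZED };

// The callee reports status and a payload through a callback rather than
// a return value, mirroring the HIDL-generated openDevice signature.
void openDeviceLike(bool available,
                    const std::function<void(Result, int)>& cb) {
    if (available) cb(Result::OK, 42);   // 42 stands in for a device object
    else           cb(Result::NOT_INITIALIZED, 0);
}

Result callOpen(bool available, int* deviceOut) {
    Result retval = Result::NOT_INITIALIZED;
    // Capture locals by reference so the lambda can write results back.
    openDeviceLike(available, [&](Result r, int dev) {
        retval = r;
        if (retval == Result::OK) *deviceOut = dev;
    });
    return retval;
}
```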

// /hardware/interfaces/audio/core/all-versions/default/include/core/all-versions/default/DevicesFactory.impl.h
Return<void> DevicesFactory::openDevice(const hidl_string& moduleName, openDevice_cb _hidl_cb) {
    if (moduleName == AUDIO_HARDWARE_MODULE_ID_PRIMARY) {
        return openDevice<PrimaryDevice>(moduleName.c_str(), _hidl_cb);
    }
    return openDevice(moduleName.c_str(), _hidl_cb);
}

Return<void> DevicesFactory::openPrimaryDevice(openPrimaryDevice_cb _hidl_cb) {
    return openDevice<PrimaryDevice>(AUDIO_HARDWARE_MODULE_ID_PRIMARY, _hidl_cb);
}

Return<void> DevicesFactory::openDevice(const char* moduleName, openDevice_cb _hidl_cb) {
    return openDevice<implementation::Device>(moduleName, _hidl_cb);
}

template <class DeviceShim, class Callback>
Return<void> DevicesFactory::openDevice(const char* moduleName, Callback _hidl_cb) {
    audio_hw_device_t* halDevice;
    Result retval(Result::INVALID_ARGUMENTS);
    sp<DeviceShim> result;
    int halStatus = loadAudioInterface(moduleName, &halDevice);
    if (halStatus == OK) {
        result = new DeviceShim(halDevice);
        retval = Result::OK;
    } else if (halStatus == -EINVAL) {
        retval = Result::NOT_INITIALIZED;
    }
    _hidl_cb(retval, result);
    return Void();
}

// static
int DevicesFactory::loadAudioInterface(const char* if_name, audio_hw_device_t** dev) {
    const hw_module_t* mod;
    int rc;

    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    if (rc) {
        ALOGE("%s couldn't load audio hw module %s.%s (%s)", __func__, AUDIO_HARDWARE_MODULE_ID,
              if_name, strerror(-rc));
        goto out;
    }
    rc = audio_hw_device_open(mod, dev);
    if (rc) {
        ALOGE("%s couldn't open audio hw device in %s.%s (%s)", __func__, AUDIO_HARDWARE_MODULE_ID,
              if_name, strerror(-rc));
        goto out;
    }
    if ((*dev)->common.version < AUDIO_DEVICE_API_VERSION_MIN) {
        ALOGE("%s wrong audio hw device version %04x", __func__, (*dev)->common.version);
        rc = -EINVAL;
        audio_hw_device_close(*dev);
        goto out;
    }
    return OK;

out:
    *dev = NULL;
    return rc;
}

This code is relatively simple; following the calls down, we end up at hw_get_module_by_class, which receives the interface name together with AUDIO_HARDWARE_MODULE_ID, the module ID that the HAL side declares so the two can be matched.

// /hardware/libhardware/hardware.c
#if defined(__LP64__)
#define HAL_LIBRARY_PATH1 "/system/lib64/hw"
#define HAL_LIBRARY_PATH2 "/vendor/lib64/hw"
#define HAL_LIBRARY_PATH3 "/odm/lib64/hw"
#else
#define HAL_LIBRARY_PATH1 "/system/lib/hw"
#define HAL_LIBRARY_PATH2 "/vendor/lib/hw"
#define HAL_LIBRARY_PATH3 "/odm/lib/hw"
#endif

static const char *variant_keys[] = {
    "ro.hardware",  /* This goes first so that it can pick up a different
                       file on the emulator. */
    "ro.product.board",
    "ro.board.platform",
    "ro.arch"
};

int hw_get_module_by_class(const char *class_id, const char *inst,
                           const struct hw_module_t **module)
{
    if (inst)
        snprintf(name, PATH_MAX, "%s.%s", class_id, inst);
    else
        strlcpy(name, class_id, PATH_MAX);
    ...
    snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);
    if (property_get(prop_name, prop, NULL) > 0) {
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    /* Loop through the configuration variants looking for a module */
    for (i=0 ; i<HAL_VARIANT_KEYS_COUNT; i++) {
        if (property_get(variant_keys[i], prop, NULL) == 0) {
            continue;
        }
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    /* Nothing found, try the default */
    if (hw_module_exists(path, sizeof(path), name, "default") == 0) {
        goto found;
    }
    return -ENOENT;

found:
    /* load the module, if this fails, we're doomed, and we should not try
     * to load a different variant. */
    return load(class_id, path, module);
}

This is where your project's identifier comes in, read from system properties. variant_keys holds four properties, checked in order; a match on any one of them is enough, and if none matches, the "default" variant is used. The libraries themselves are searched for under the vendor, system, and odm paths. That concludes the lookup of the device .so library: as long as the project property values are configured to match the library names, the right library will be found and used.
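The name-resolution logic above can be sketched as follows. Everything here is illustrative: the property values and installed file names are invented (a real device might have, say, audio.primary.mt6765.so), and the search-path handling is collapsed into a flat list.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of how hw_get_module_by_class builds candidate library names:
// "<class_id>.<inst>.<variant>.so", where the variant comes from the
// variant_keys properties in order, with "default" as the fallback.
std::string resolveModuleName(const std::string& classId,
                              const std::string& inst,
                              const std::vector<std::string>& variants,
                              const std::vector<std::string>& installed) {
    std::string base = classId + "." + inst;
    auto exists = [&](const std::string& n) {
        for (const auto& f : installed) if (f == n) return true;
        return false;
    };
    for (const auto& v : variants) {               // try each variant key
        std::string candidate = base + "." + v + ".so";
        if (exists(candidate)) return candidate;
    }
    std::string fallback = base + ".default.so";   // nothing matched
    return exists(fallback) ? fallback : "";
}
```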

Back to where we started: loadHwModule_l does one more thing, adding the opened device to the mAudioHwDevs key-value container. The key is obtained from (audio_module_handle_t) nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE), which guarantees the value is globally unique.
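The idea behind nextUniqueId can be sketched as a monotonically increasing counter whose low bits encode the "use" of the handle (module, output, ...), so a handle is globally unique and also reveals what kind of object it identifies. The exact encoding below is illustrative, not the precise AudioFlinger implementation.

```cpp
#include <atomic>
#include <cassert>

enum Use { USE_MODULE = 1, USE_OUTPUT = 2 };

std::atomic<int> gNextId{1};

// Shift the counter up and pack the use into the low 3 bits.
int nextUniqueIdSketch(Use use) {
    return (gNextId.fetch_add(1) << 3) | use;
}
```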
We have now finished loading the audio interfaces, but each interface can contain many individual devices. The set Android currently supports is shown in the code below.


enum {
    AUDIO_DEVICE_NONE                          = 0x0,
    /* reserved bits */
    AUDIO_DEVICE_BIT_IN                        = 0x80000000,
    AUDIO_DEVICE_BIT_DEFAULT                   = 0x40000000,
    /* output devices: the devices we hear sound from */
    AUDIO_DEVICE_OUT_EARPIECE                  = 0x1,    // earpiece
    AUDIO_DEVICE_OUT_SPEAKER                   = 0x2,    // speaker
    AUDIO_DEVICE_OUT_WIRED_HEADSET             = 0x4,    // wired headset with inline controls (play/pause, volume)
    AUDIO_DEVICE_OUT_WIRED_HEADPHONE           = 0x8,    // plain headphones, listen only
    AUDIO_DEVICE_OUT_BLUETOOTH_SCO             = 0x10,   // Bluetooth SCO (mono)
    AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET     = 0x20,   // Bluetooth SCO headset
    AUDIO_DEVICE_OUT_BLUETOOTH_SCO_CARKIT      = 0x40,   // Bluetooth hands-free car kit
    AUDIO_DEVICE_OUT_BLUETOOTH_A2DP            = 0x80,   // Bluetooth A2DP (stereo) headset
    AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES = 0x100,
    AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER    = 0x200,
    AUDIO_DEVICE_OUT_AUX_DIGITAL               = 0x400,  // HDMI output
    AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET         = 0x800,
    AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET         = 0x1000,
    AUDIO_DEVICE_OUT_USB_ACCESSORY             = 0x2000,
    AUDIO_DEVICE_OUT_USB_DEVICE                = 0x4000, // USB device
    AUDIO_DEVICE_OUT_REMOTE_SUBMIX             = 0x8000,
    AUDIO_DEVICE_OUT_DEFAULT                   = AUDIO_DEVICE_BIT_DEFAULT,
    /* input devices: the devices we speak into */
    AUDIO_DEVICE_IN_COMMUNICATION         = AUDIO_DEVICE_BIT_IN | 0x1,
    AUDIO_DEVICE_IN_AMBIENT               = AUDIO_DEVICE_BIT_IN | 0x2,
    AUDIO_DEVICE_IN_BUILTIN_MIC           = AUDIO_DEVICE_BIT_IN | 0x4,    // built-in microphone
    AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET = AUDIO_DEVICE_BIT_IN | 0x8,    // Bluetooth headset
    AUDIO_DEVICE_IN_WIRED_HEADSET         = AUDIO_DEVICE_BIT_IN | 0x10,   // 3.5mm wired headset
    AUDIO_DEVICE_IN_AUX_DIGITAL           = AUDIO_DEVICE_BIT_IN | 0x20,
    AUDIO_DEVICE_IN_VOICE_CALL            = AUDIO_DEVICE_BIT_IN | 0x40,
    AUDIO_DEVICE_IN_BACK_MIC              = AUDIO_DEVICE_BIT_IN | 0x80,
    AUDIO_DEVICE_IN_REMOTE_SUBMIX         = AUDIO_DEVICE_BIT_IN | 0x100,
    AUDIO_DEVICE_IN_ANLG_DOCK_HEADSET     = AUDIO_DEVICE_BIT_IN | 0x200,
    AUDIO_DEVICE_IN_DGTL_DOCK_HEADSET     = AUDIO_DEVICE_BIT_IN | 0x400,
    AUDIO_DEVICE_IN_USB_ACCESSORY         = AUDIO_DEVICE_BIT_IN | 0x800,
    AUDIO_DEVICE_IN_USB_DEVICE            = AUDIO_DEVICE_BIT_IN | 0x1000, // USB headset
    AUDIO_DEVICE_IN_DEFAULT               = AUDIO_DEVICE_BIT_IN | AUDIO_DEVICE_BIT_DEFAULT,
};
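Because every device is a single bit (with input devices additionally carrying AUDIO_DEVICE_BIT_IN), membership and support checks reduce to plain bit operations, exactly like the (supportedDevices & devices) == devices test performed in findSuitableHwDev_l. A minimal sketch using a few of the real bit values:

```cpp
#include <cassert>
#include <cstdint>

// A handful of real audio_devices_t bit values for illustration.
constexpr uint32_t BIT_IN         = 0x80000000u;
constexpr uint32_t OUT_EARPIECE   = 0x1;
constexpr uint32_t OUT_SPEAKER    = 0x2;
constexpr uint32_t IN_BUILTIN_MIC = BIT_IN | 0x4;

// Input devices carry the IN bit, so direction is a single mask test.
bool isInputDevice(uint32_t d) { return (d & BIT_IN) != 0; }

// A module supports a request only if every requested bit is set.
bool supportsAll(uint32_t supported, uint32_t requested) {
    return (supported & requested) == requested;
}
```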

Next, let's continue the analysis and see how AudioFlinger opens an output channel. The function that does this inside AudioFlinger is openOutput, whose work is done in openOutput_l:

sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                        audio_io_handle_t *output,
                                                        audio_config_t *config,
                                                        audio_devices_t devices,
                                                        const String8& address,
                                                        audio_output_flags_t flags)
{
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
    if (outHwDev == NULL) {
        return 0;
    }
    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);
    }
    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;

    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(&outputStream,
                                                 *output,
                                                 devices,
                                                 flags,
                                                 config,
                                                 address.string());
    mHardwareStatus = AUDIO_HW_IDLE;

    if (status == NO_ERROR) {
        if (flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) {
            sp<MmapPlaybackThread> thread =
                    new MmapPlaybackThread(this, *output, outHwDev, outputStream,
                                           devices, AUDIO_DEVICE_NONE, mSystemReady);
            mMmapThreads.add(*output, thread);
            return thread;
        } else {
            sp<PlaybackThread> thread;
            if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
                thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
            } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                    || !isValidPcmSinkFormat(config->format)
                    || !isValidPcmSinkChannelMask(config->channel_mask)) {
                thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
            } else {
                thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
            }
            mPlaybackThreads.add(*output, thread);
            return thread;
        }
    }
    return 0;
}

The code above mainly does the following:

  • Finds a suitable audio interface device (findSuitableHwDev_l); this ends up calling the loadHwModule_l function discussed at the beginning.

  • Creates an audio stream: openOutputStream yields an AudioStreamOut.

  • Creates a playback thread (PlaybackThread).

outHwDev records an opened audio interface device. Its type is AudioHwDevice, fetched from the mAudioHwDevs key-value container we just analyzed, and it corresponds to the HAL's audio_hw_device, which implements the HAL's common operations: setMasterMute, setMasterVolume, open_output_stream, and so on.

struct audio_module {
    struct hw_module_t common;
};

struct audio_hw_device {
    struct hw_device_t common;
    int (*set_master_volume)(struct audio_hw_device *dev, float volume);
    int (*set_parameters)(struct audio_hw_device *dev, const char *kv_pairs);
    int (*open_output_stream)(struct audio_hw_device *dev,
                              audio_io_handle_t handle,
                              audio_devices_t devices,
                              audio_output_flags_t flags,
                              struct audio_config *config,
                              struct audio_stream_out **stream_out,
                              const char *address);
    int (*set_master_mute)(struct audio_hw_device *dev, bool mute);
};
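This struct-of-function-pointers layout is the C equivalent of a vtable: AudioFlinger calls through the pointers, and each HAL implementation fills them in with its own functions. A toy version of the pattern (not a real HAL) looks like this:

```cpp
#include <cassert>

// Toy "HAL device": a struct of function pointers, like audio_hw_device.
struct toy_hw_device {
    float masterVolume;
    int (*set_master_volume)(toy_hw_device* dev, float volume);
};

// One concrete implementation that the "vtable" points at.
static int toy_set_master_volume(toy_hw_device* dev, float volume) {
    if (volume < 0.0f || volume > 1.0f) return -1;  // EINVAL-style failure
    dev->masterVolume = volume;
    return 0;
}

toy_hw_device makeToyDevice() {
    toy_hw_device dev{};
    dev.set_master_volume = toy_set_master_volume;  // wire up the pointer
    return dev;
}
```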

Let's now walk through how each of these steps is implemented.

How Each Step Is Implemented

1. Step one: findSuitableHwDev_l

When module is 0, all of the audio interface libraries are loaded, and we then iterate over them looking for a device that supports what we asked for.
When module is non-zero, the device is looked up directly by module handle in the mAudioHwDevs container mentioned at the start.


AudioHwDevice* AudioFlinger::findSuitableHwDev_l(audio_module_handle_t module,
                                                 audio_devices_t devices)
{
    // if module is 0, the request comes from an old policy manager and we should load
    // well known modules
    if (module == 0) {
        ALOGW("findSuitableHwDev_l() loading well know audio hw modules");
        for (size_t i = 0; i < arraysize(audio_interfaces); i++) {
            loadHwModule_l(audio_interfaces[i]);
        }
        // then try to find a module supporting the requested device.
        for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
            AudioHwDevice *audioHwDevice = mAudioHwDevs.valueAt(i);
            sp<DeviceHalInterface> dev = audioHwDevice->hwDevice();
            uint32_t supportedDevices;
            if (dev->getSupportedDevices(&supportedDevices) == OK &&
                    (supportedDevices & devices) == devices) {
                return audioHwDevice;
            }
        }
    } else {
        // check a match for the requested module handle
        AudioHwDevice *audioHwDevice = mAudioHwDevs.valueFor(module);
        if (audioHwDevice != NULL) {
            return audioHwDevice;
        }
    }

    return NULL;
}

2. Step two: outHwDev->openOutputStream

This part is rather abstract, so let me use MTK platform code as an example. First a block of memory is allocated for a legacy_stream_out, which contains the audio_stream_out, and then its function pointers are filled in. For instance set_parameters, which we use often, is ultimately implemented by out_set_parameters. We won't go any deeper here, since that is HAL territory; a later article will cover the HAL implementation specifically.

struct legacy_stream_out {
    struct audio_stream_out stream;
    AudioMTKStreamOutInterface *legacy_out;
};

static int adev_open_output_stream(struct audio_hw_device *dev,
                                   audio_io_handle_t handle,
                                   audio_devices_t devices,
                                   audio_output_flags_t flags,
                                   struct audio_config *config,
                                   struct audio_stream_out **stream_out,
                                   const char *address __unused) {
    ...
    struct legacy_stream_out *out;
    out = (struct legacy_stream_out *)calloc(1, sizeof(*out));
    if (!out) {
        return -ENOMEM;
    }
    ...
    out->stream.common.get_sample_rate = out_get_sample_rate;
    out->stream.common.set_sample_rate = out_set_sample_rate;
    out->stream.common.get_buffer_size = out_get_buffer_size;
    out->stream.common.get_channels = out_get_channels;
    out->stream.common.get_format = out_get_format;
    out->stream.common.set_format = out_set_format;
    out->stream.common.standby = out_standby;
    out->stream.common.dump = out_dump;
    out->stream.common.set_parameters = out_set_parameters;
    out->stream.common.get_parameters = out_get_parameters;
    out->stream.common.add_audio_effect = out_add_audio_effect;
    out->stream.common.remove_audio_effect = out_remove_audio_effect;
    out->stream.get_latency = out_get_latency;
    out->stream.set_volume = out_set_volume;
    out->stream.write = out_write;
    out->stream.get_render_position = out_get_render_position;
    out->stream.get_next_write_timestamp = out_get_next_write_timestamp;
    out->stream.set_callback = out_set_callback;
    out->stream.get_presentation_position = out_get_presentation_position;
    out->stream.update_source_metadata = out_update_source_metadata;
    out->stream.pause = out_pause;
    out->stream.resume = out_resume;
    out->stream.drain = out_drain;
    out->stream.flush = out_flush;

    *stream_out = &out->stream;
    return 0;

err_open:
    free(out);
    *stream_out = NULL;
    return ret;
}

3. Step three: PlaybackThread

Now that the output channel is open, how is audio data written into it? That is PlaybackThread's job.

Several kinds of threads appear here. What are the differences between them?

MmapPlaybackThread handles playback efficiently via memory-mapped buffers, lowering latency and overhead; it is especially suited to applications that need low latency and high throughput.

OffloadThread is dedicated to offloaded playback: parts of the audio processing are pushed down to the hardware, improving performance, reducing CPU load, and providing lower-latency audio handling.

DirectOutputThread's core purpose is to cut out intermediate processing steps so that audio data goes directly and quickly to the hardware, giving efficient, low-latency output. It suits latency-sensitive uses such as game sound effects and real-time voice communication.

MixerThread is a key thread in the audio framework, responsible for mixing. Its job is to combine audio data from different streams (for example, multiple applications or sources) into a single output stream for the audio hardware (speaker, headphones, and so on). This usually includes adjusting volume and applying effects such as echo cancellation or equalization.

All of these inherit from PlaybackThread. AudioFlinger stores the threads mainly in the following two member variables:

DefaultKeyedVector< audio_io_handle_t, sp<PlaybackThread> >  mPlaybackThreads;
DefaultKeyedVector< audio_io_handle_t, sp<RecordThread> >    mRecordThreads;
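The dispatch in openOutput_l that chooses among these thread types can be sketched as follows. The MMAP branch is omitted, validPcm stands in for the isValidPcmSinkFormat/isValidPcmSinkChannelMask checks, and the function itself is illustrative; the two flag values match the real audio_output_flags_t bits.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

constexpr uint32_t FLAG_DIRECT           = 0x1;   // AUDIO_OUTPUT_FLAG_DIRECT
constexpr uint32_t FLAG_COMPRESS_OFFLOAD = 0x10;  // AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD

std::string pickThreadType(uint32_t flags, bool validPcm) {
    if (flags & FLAG_COMPRESS_OFFLOAD) return "OffloadThread";
    // direct flag, or a format/channel mask the mixer cannot handle
    if ((flags & FLAG_DIRECT) || !validPcm) return "DirectOutputThread";
    return "MixerThread";   // the common case
}
```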

Let's first analyze the MixerThread.

The constructor first creates an AudioMixer object, which is the key to mix processing. It then checks whether the thread type is DUPLICATING, creates an AudioStreamOutSink, and decides whether to use the fast mixer.


// /frameworks/av/services/audioflinger/Threads.cpp
AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
                                       audio_io_handle_t id, audio_devices_t device,
                                       bool systemReady, type_t type)
    : PlaybackThread(audioFlinger, output, id, device, type, systemReady),
      mFastMixerFutex(0),
      mMasterMono(false)
{
    mAudioMixer = new AudioMixer(mNormalFrameCount, mSampleRate);

    if (type == DUPLICATING) {
        // The Duplicating thread uses the AudioMixer and delivers data to OutputTracks
        // (downstream MixerThreads) in DuplicatingThread::threadLoop_write().
        // Do not create or use mFastMixer, mOutputSink, mPipeSink, or mNormalSink.
        return;
    }
    mOutputSink = new AudioStreamOutSink(output->stream);

    if (initFastMixer) {
        ...
    }
    ...
}

With that, our mixer has been created. But when does it run? Audio data must be fetched and written continuously, yet there is no loop in this function. Looking back at the definition of mPlaybackThreads, its key is an audio_io_handle_t and its value is an sp strong pointer, so PlaybackThread must inherit from RefBase, which means onFirstRef is called the first time it is referenced. Its implementation is:

void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}

This simply calls the run method, which starts a thread that executes threadLoop:

bool AudioFlinger::PlaybackThread::threadLoop()
{
...
}
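The onFirstRef-then-run pattern can be imitated in standard C++, with std::thread standing in for Android's Thread class: the spawned thread keeps invoking threadLoop() until it returns false. A minimal sketch (the iteration limit and names are made up):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

class LoopThread {
public:
    // Plays the role of onFirstRef()/run(): spawn the worker thread.
    void start() {
        mWorker = std::thread([this] {
            while (threadLoop()) {}     // keep looping while work remains
        });
    }
    // Request exit and wait for the worker to finish.
    void stop() {
        mRunning = false;
        if (mWorker.joinable()) mWorker.join();
    }
    int iterations() const { return mIters; }
private:
    // One pass of the "playback" loop; returning false ends the thread.
    bool threadLoop() {
        ++mIters;
        return mRunning && mIters < 1000;
    }
    std::atomic<bool> mRunning{true};
    std::atomic<int>  mIters{0};
    std::thread mWorker;
};
```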

That concludes our introduction to audio device management. In the next article we will dig into what PlaybackThread's main loop actually does.

