My new job is settled. I had a few things to take care of before starting, and took the chance to rest a bit, so the blog hasn't been updated for several days. I've still been thinking about what to write next; there is plenty left to learn, but from here on I'll lean toward analyzing architecture and design through concrete examples. Broadly speaking, the blog will expand along the following lines:

  1. Architecture analysis of open-source code
  2. Custom Views + design knowledge + Material Design style
  3. AOSP source code analysis
  4. ReactNative + H5
  5. Source code analysis of open-source streaming media players
    Hoping to level up on architecture this year.

OK, down to business. Today I'd like to introduce the Android architecture samples published by Google Samples. The complete source code can be downloaded from the following link:
Android Architecture

It includes:

todo-mvp/ - Basic Model-View-Presenter architecture.
todo-mvp-loaders/ - Based on todo-mvp, fetches data using Loaders.
todo-mvp-databinding/ - Based on todo-mvp, uses the Data Binding Library.
todo-mvp-clean/ - Based on todo-mvp, uses concepts from Clean Architecture.
todo-mvp-dagger/ - Based on todo-mvp, uses Dagger2 for Dependency Injection.
todo-mvp-contentproviders/ - Based on todo-mvp-loaders, fetches data using Loaders and uses Content Providers.
dev-todo-mvp-rxjava/ - Based on todo-mvp, uses RxJava for concurrency and data layer abstraction.
todo-mvp-fragmentless/ - Based on todo-mvp, uses Android views instead of Fragments.

We will analyze each of these in turn. Before reading on, I suggest first reading my earlier introduction to MVC, MVP and MVVM:
This post analyzes todo-mvp, the foundation of all the others. Before the walkthrough, let's look at its class diagram:


Then compare it with the diagram below; the contrast makes the whole MVP pattern easier to understand:

Next, let's look at how the project code is organized:

addedittask
statistics
taskdetail
tasks

These four packages sit side by side and correspond, respectively, to adding/editing tasks, statistics, task detail lookup, and task list display. Here we'll take tasks (the task list feature) as the object of analysis.

The Model layer code lives under /data; the data layer uses a three-level cache: in-memory, local database, and remote network.
The View layer code lives in TasksFragment.
The Presenter layer code lives in TasksPresenter.

TasksContract is the contract class; it declares the View and Presenter interfaces used by the MVP pattern.
TasksActivity creates the View and the Presenter.
BasePresenter is the parent interface of every Presenter.
BaseView is the parent interface of every View layer class.

With the overall code structure clear, we can start analyzing the source.

First, the BaseView interface. Every View implements it; it declares a single method, setPresenter, which attaches a Presenter to the View layer.

package com.example.android.architecture.blueprints.todoapp;
public interface BaseView<T> {
void setPresenter(T presenter);
}

Next, the interface implemented by every Presenter; it likewise declares a single method, start.

package com.example.android.architecture.blueprints.todoapp;
public interface BasePresenter {
void start();
}

So what are these two interfaces for? Looking through the code, every Presenter implements BasePresenter and every View implements BaseView. In other words, anything common to all Presenters or all Views can be declared in these two places.
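As a minimal sketch of how the generics are meant to be used (the "Foo" screen here is hypothetical, not code from the sample), a new screen would be wired up like this:

// A hypothetical "Foo" screen, following the same pattern as the sample:
interface FooContract {
    interface View extends BaseView<Presenter> { void showFoo(String text); }
    interface Presenter extends BasePresenter { void loadFoo(); }
}

class FooPresenter implements FooContract.Presenter {
    private final FooContract.View mView;

    FooPresenter(FooContract.View view) {
        mView = view;
        mView.setPresenter(this); // hand the View its Presenter
    }

    @Override public void start() { loadFoo(); }
    @Override public void loadFoo() { mView.showFoo("hello"); }
}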

Next, the contract class: TasksContract.
First, what is a contract class for? It standardizes the interaction interface between the View layer and the Presenter. In the contract, the View methods all operate on the controls in the UI, while the Presenter methods essentially drive the Model layer through the Presenter.

public interface TasksContract {

interface View extends BaseView<Presenter> {
void setLoadingIndicator(boolean active);
void showTasks(List<Task> tasks);
void showAddTask();
void showTaskDetailsUi(String taskId);
void showTaskMarkedComplete();
void showTaskMarkedActive();
void showCompletedTasksCleared();
void showLoadingTasksError();
void showNoTasks();
void showActiveFilterLabel();
void showCompletedFilterLabel();
void showAllFilterLabel();
void showNoActiveTasks();
void showNoCompletedTasks();
void showSuccessfullySavedMessage();
boolean isActive();
void showFilteringPopUpMenu();
}

interface Presenter extends BasePresenter {

void result(int requestCode, int resultCode);
void loadTasks(boolean forceUpdate);
void addNewTask();
void openTaskDetails(@NonNull Task requestedTask);
void completeTask(@NonNull Task completedTask);
void activateTask(@NonNull Task activeTask);
void clearCompletedTasks();
void setFiltering(TasksFilterType requestType);
TasksFilterType getFiltering();
}
}

With the important interfaces introduced, let's look at the concrete implementation, starting with TasksActivity:

public class TasksActivity extends AppCompatActivity {

//................
private TasksPresenter mTasksPresenter;

@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);

//...........................
//Look up the TasksFragment View if it already exists
TasksFragment tasksFragment = (TasksFragment) getSupportFragmentManager().findFragmentById(R.id.contentFrame);
if (tasksFragment == null) {
//Create the TasksFragment View
tasksFragment = TasksFragment.newInstance();
ActivityUtils.addFragmentToActivity(getSupportFragmentManager(), tasksFragment, R.id.contentFrame);
}

// Create a Presenter; it holds a Model (the repository) and the TasksFragment View
mTasksPresenter = new TasksPresenter(Injection.provideTasksRepository(getApplicationContext()), tasksFragment);
//..........................
}
//..........................
}

The logic above is simple: create a TasksFragment to serve as the View layer, add it to TasksActivity, then create a TasksPresenter. The TasksPresenter holds the data layer (TasksRepository) and the View layer (TasksFragment).

Let's look at the View layer first:

public class TasksFragment extends Fragment implements TasksContract.View {

private TasksContract.Presenter mPresenter;

private TasksAdapter mListAdapter;

private View mNoTasksView;

private ImageView mNoTaskIcon;

private TextView mNoTaskMainView;

private TextView mNoTaskAddView;

private LinearLayout mTasksView;

private TextView mFilteringLabelView;

public TasksFragment() {
// Requires empty public constructor
}

public static TasksFragment newInstance() {
return new TasksFragment();
}

@Override
public void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//Create the adapter
mListAdapter = new TasksAdapter(new ArrayList<Task>(0), mItemListener);
}

@Override
public void onResume() {
super.onResume();
//Load the data
mPresenter.start();
}

@Override
public void setPresenter(@NonNull TasksContract.Presenter presenter) {
//Attach the Presenter to this View
mPresenter = checkNotNull(presenter);
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
mPresenter.result(requestCode, resultCode);
}

@Nullable
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View root = inflater.inflate(R.layout.tasks_frag, container, false);

// Set up tasks view
ListView listView = (ListView) root.findViewById(R.id.tasks_list);
listView.setAdapter(mListAdapter);
mFilteringLabelView = (TextView) root.findViewById(R.id.filteringLabel);
mTasksView = (LinearLayout) root.findViewById(R.id.tasksLL);

// Set up no tasks view
mNoTasksView = root.findViewById(R.id.noTasks);
mNoTaskIcon = (ImageView) root.findViewById(R.id.noTasksIcon);
mNoTaskMainView = (TextView) root.findViewById(R.id.noTasksMain);
mNoTaskAddView = (TextView) root.findViewById(R.id.noTasksAdd);
mNoTaskAddView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
mPresenter.addNewTask();
}
});

// Set up floating action button
FloatingActionButton fab = (FloatingActionButton) getActivity().findViewById(R.id.fab_add_task);

fab.setImageResource(R.drawable.ic_add);
fab.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
mPresenter.addNewTask();
}
});

// Set up progress indicator
final ScrollChildSwipeRefreshLayout swipeRefreshLayout = (ScrollChildSwipeRefreshLayout) root.findViewById(R.id.refresh_layout);
swipeRefreshLayout.setColorSchemeColors(
ContextCompat.getColor(getActivity(), R.color.colorPrimary),
ContextCompat.getColor(getActivity(), R.color.colorAccent),
ContextCompat.getColor(getActivity(), R.color.colorPrimaryDark)
);
// Set the scrolling view in the custom SwipeRefreshLayout.
swipeRefreshLayout.setScrollUpChild(listView);
swipeRefreshLayout.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() {
@Override
public void onRefresh() {
mPresenter.loadTasks(false);
}
});
setHasOptionsMenu(true);
return root;
}
//...........................

}

The most important method in the View layer is setPresenter. It gives the View a reference to its Presenter, which lets the business logic move out of the View and into the Presenter, keeping the View lightweight.
Most of the remaining View methods deal with the controls on the page; they are implemented here for the Presenter to call.

Now the data layer:

public class Injection {

public static TasksRepository provideTasksRepository(@NonNull Context context) {
checkNotNull(context);
return TasksRepository.getInstance(TasksRemoteDataSource.getInstance(),TasksLocalDataSource.getInstance(context));
}
}

Let's first look at TasksDataSource, the data layer interface. It defines two callback interfaces, which the data layer uses to notify the Presenter once its work is done:

public interface TasksDataSource {

interface LoadTasksCallback {

void onTasksLoaded(List<Task> tasks);

void onDataNotAvailable();
}

interface GetTaskCallback {

void onTaskLoaded(Task task);

void onDataNotAvailable();
}
void getTasks(@NonNull LoadTasksCallback callback);
void getTask(@NonNull String taskId, @NonNull GetTaskCallback callback);
void saveTask(@NonNull Task task);
void completeTask(@NonNull Task task);
void completeTask(@NonNull String taskId);
void activateTask(@NonNull Task task);
void activateTask(@NonNull String taskId);
void clearCompletedTasks();
void refreshTasks();
void deleteAllTasks();
void deleteTask(@NonNull String taskId);
}

With TasksDataSource covered, we can look at the TasksRepository implementation. It holds two important instances: mTasksRemoteDataSource, which simulates a network data source, and mTasksLocalDataSource, which represents the local data source, here a database.
mCachedTasks is the in-memory cache, and mCacheIsDirty marks whether the cached data is still valid.

public class TasksRepository implements TasksDataSource {

private static TasksRepository INSTANCE = null;
//Simulates the remote (network) data source
private final TasksDataSource mTasksRemoteDataSource;
//Simulates the local data source
private final TasksDataSource mTasksLocalDataSource;
/**
* In-memory cache
*/
Map<String, Task> mCachedTasks;

/**
* Indicates whether the data currently in the cache is valid.
* Marks the cache as invalid, to force an update the next time data is requested. This variable
* has package local visibility so it can be accessed from tests.
*/
boolean mCacheIsDirty = false;

private TasksRepository(@NonNull TasksDataSource tasksRemoteDataSource,
@NonNull TasksDataSource tasksLocalDataSource) {
mTasksRemoteDataSource = checkNotNull(tasksRemoteDataSource);
mTasksLocalDataSource = checkNotNull(tasksLocalDataSource);
}

/**
* Returns the single instance of this class, creating it if necessary.
*
* @param tasksRemoteDataSource the backend data source
* @param tasksLocalDataSource the device storage data source
* @return the {@link TasksRepository} instance
*/
// TasksRepository is created as a singleton
public static TasksRepository getInstance(TasksDataSource tasksRemoteDataSource,
TasksDataSource tasksLocalDataSource) {
if (INSTANCE == null) {
INSTANCE = new TasksRepository(tasksRemoteDataSource, tasksLocalDataSource);
}
return INSTANCE;
}

/**
* Used to force {@link #getInstance(TasksDataSource, TasksDataSource)} to create a new instance
* next time it's called.
*/
public static void destroyInstance() {
INSTANCE = null;
}

/**
* Gets tasks from cache, local data source (SQLite) or remote data source, whichever is
* available first.
* <p>
* Note: {@link LoadTasksCallback#onDataNotAvailable()} is fired if all data sources fail to
* get the data.
*/
@Override
public void getTasks(@NonNull final LoadTasksCallback callback) {

checkNotNull(callback);

// Respond immediately with cache if available and not dirty
if (mCachedTasks != null && !mCacheIsDirty) {
callback.onTasksLoaded(new ArrayList<>(mCachedTasks.values()));
return;
}

if (mCacheIsDirty) {
// If the cache is dirty we need to fetch new data from the network.
getTasksFromRemoteDataSource(callback);
} else {
// Query the local storage if available. If not, query the network.
//This typically happens on the very first load, i.e. mCachedTasks == null and mCacheIsDirty == false
mTasksLocalDataSource.getTasks(new LoadTasksCallback() {
@Override
public void onTasksLoaded(List<Task> tasks) {
//Load the data from the database into the cache
refreshCache(tasks);
callback.onTasksLoaded(new ArrayList<>(mCachedTasks.values()));
}
@Override
public void onDataNotAvailable() {
//If the database has no data, fetch from the remote data source
getTasksFromRemoteDataSource(callback);
}
});
}
}

/**
* On save, write the task to the remote source, the local database, and the cache.
*/
@Override
public void saveTask(@NonNull Task task) {
checkNotNull(task);
//Save to the remote data source
mTasksRemoteDataSource.saveTask(task);
//Save to the local database
mTasksLocalDataSource.saveTask(task);
// Do in memory cache update to keep the app UI up to date
if (mCachedTasks == null) {
mCachedTasks = new LinkedHashMap<>();
}
mCachedTasks.put(task.getId(), task);
}

/**
* Mark a task as completed, i.e. set its completed flag to true.
*/
@Override
public void completeTask(@NonNull Task task) {

checkNotNull(task);
mTasksRemoteDataSource.completeTask(task);
mTasksLocalDataSource.completeTask(task);
Task completedTask = new Task(task.getTitle(), task.getDescription(), task.getId(), true);
// Do in memory cache update to keep the app UI up to date
if (mCachedTasks == null) {
mCachedTasks = new LinkedHashMap<>();
}
mCachedTasks.put(task.getId(), completedTask);
}

@Override
public void completeTask(@NonNull String taskId) {
checkNotNull(taskId);
completeTask(getTaskWithId(taskId));
}

//Mark a task as active; in practice this simply resets completed to false
@Override
public void activateTask(@NonNull Task task) {

checkNotNull(task);
mTasksRemoteDataSource.activateTask(task);
mTasksLocalDataSource.activateTask(task);
Task activeTask = new Task(task.getTitle(), task.getDescription(), task.getId());
// Do in memory cache update to keep the app UI up to date
if (mCachedTasks == null) {
mCachedTasks = new LinkedHashMap<>();
}
mCachedTasks.put(task.getId(), activeTask);
}

@Override
public void activateTask(@NonNull String taskId) {
checkNotNull(taskId);
activateTask(getTaskWithId(taskId));
}

/**
* Clear the completed tasks.
*/
@Override
public void clearCompletedTasks() {

mTasksRemoteDataSource.clearCompletedTasks();
mTasksLocalDataSource.clearCompletedTasks();

// Do in memory cache update to keep the app UI up to date
if (mCachedTasks == null) {
mCachedTasks = new LinkedHashMap<>();
}
Iterator<Map.Entry<String, Task>> it = mCachedTasks.entrySet().iterator();
while (it.hasNext()) {
Map.Entry<String, Task> entry = it.next();
if (entry.getValue().isCompleted()) {
it.remove();
}
}
}

/**
* Gets tasks from local data source (sqlite) unless the table is new or empty. In that case it
* uses the network data source. This is done to simplify the sample.
* <p>
* Note: {@link LoadTasksCallback#onDataNotAvailable()} is fired if both data sources fail to
* get the data.
*/

/**
* Look up a task in the cache, then local, then remote source, in that order.
* @param taskId
* @param callback
*/
@Override
public void getTask(@NonNull final String taskId, @NonNull final GetTaskCallback callback) {

checkNotNull(taskId);
checkNotNull(callback);

//Try to fetch the task with the given id from the cache
Task cachedTask = getTaskWithId(taskId);
// Respond immediately with cache if available
if (cachedTask != null) {
callback.onTaskLoaded(cachedTask);
return;
}

// Load from server/persisted if needed.
// Is the task in the local data source? If not, query the network.
mTasksLocalDataSource.getTask(taskId, new GetTaskCallback() {
@Override
public void onTaskLoaded(Task task) {
callback.onTaskLoaded(task);
}

@Override
public void onDataNotAvailable() {
mTasksRemoteDataSource.getTask(taskId, new GetTaskCallback() {
@Override
public void onTaskLoaded(Task task) {
callback.onTaskLoaded(task);
}

@Override
public void onDataNotAvailable() {
callback.onDataNotAvailable();
}
});
}
});
}

/**
* Mark the current cache as invalid.
*/
@Override
public void refreshTasks() {
mCacheIsDirty = true;
}

/**
* Delete the tasks from the remote source, the local source, and the cache.
*/
@Override
public void deleteAllTasks() {
mTasksRemoteDataSource.deleteAllTasks();
mTasksLocalDataSource.deleteAllTasks();

if (mCachedTasks == null) {
mCachedTasks = new LinkedHashMap<>();
}
mCachedTasks.clear();
}

/**
* Delete a task from the cache, the local database, and the remote source.
* @param taskId
*/
@Override
public void deleteTask(@NonNull String taskId) {
mTasksRemoteDataSource.deleteTask(checkNotNull(taskId));
mTasksLocalDataSource.deleteTask(checkNotNull(taskId));
mCachedTasks.remove(taskId);
}

/**
* Load tasks from the remote source and write them into the cache and the local database.
* @param callback
*/
private void getTasksFromRemoteDataSource(@NonNull final LoadTasksCallback callback) {
mTasksRemoteDataSource.getTasks(new LoadTasksCallback() {
@Override
public void onTasksLoaded(List<Task> tasks) {
//After loading from the remote source, refresh the cache and the local database
refreshCache(tasks);
refreshLocalDataSource(tasks);
callback.onTasksLoaded(new ArrayList<>(mCachedTasks.values()));
}

@Override
public void onDataNotAvailable() {
callback.onDataNotAvailable();
}
});
}

/**
* Repopulate mCachedTasks with the given tasks.
* @param tasks
*/
private void refreshCache(List<Task> tasks) {
if (mCachedTasks == null) {
mCachedTasks = new LinkedHashMap<>();
}
mCachedTasks.clear();
for (Task task : tasks) {
mCachedTasks.put(task.getId(), task);
}
mCacheIsDirty = false;
}

/**
* Repopulate mTasksLocalDataSource with the given tasks.
* @param tasks
*/
private void refreshLocalDataSource(List<Task> tasks) {
mTasksLocalDataSource.deleteAllTasks();
for (Task task : tasks) {
mTasksLocalDataSource.saveTask(task);
}
}

/**
* Fetch the task with the given id from the cache.
* @param id the id of the task to fetch
* @return
*/
@Nullable
private Task getTaskWithId(@NonNull String id) {
checkNotNull(id);
if (mCachedTasks == null || mCachedTasks.isEmpty()) {
return null;
} else {
return mCachedTasks.get(id);
}
}
}

The benefit of this arrangement is that we can swap the underlying implementation of a data source at will without touching anything else, as long as the replacement follows the interface definition. It is also a big win for multi-developer teams: once a layer is decoupled from the rest, it can be developed independently.
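As a hypothetical sketch (not part of the sample), an in-memory fake for tests only has to implement the same TasksDataSource interface to be swappable into TasksRepository:

// Hypothetical in-memory fake; same interface, so the rest of the app is untouched.
public class FakeTasksDataSource implements TasksDataSource {

    private final Map<String, Task> mTasks = new LinkedHashMap<>();

    @Override
    public void getTasks(@NonNull LoadTasksCallback callback) {
        if (mTasks.isEmpty()) {
            callback.onDataNotAvailable();
        } else {
            callback.onTasksLoaded(new ArrayList<>(mTasks.values()));
        }
    }

    @Override
    public void saveTask(@NonNull Task task) {
        mTasks.put(task.getId(), task);
    }

    // ...the remaining TasksDataSource methods would be stubbed against mTasks the same way.
}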

Now the Presenter layer:

public class TasksPresenter implements TasksContract.Presenter {

private final TasksRepository mTasksRepository;

private final TasksContract.View mTasksView;

private TasksFilterType mCurrentFiltering = TasksFilterType.ALL_TASKS;

private boolean mFirstLoad = true;

public TasksPresenter(@NonNull TasksRepository tasksRepository, @NonNull TasksContract.View tasksView) {
//The data layer
mTasksRepository = checkNotNull(tasksRepository, "tasksRepository cannot be null");
//The View layer
mTasksView = checkNotNull(tasksView, "tasksView cannot be null!");
//Hand this Presenter to the View layer
mTasksView.setPresenter(this);
}

//Load the task data
@Override
public void start() {
loadTasks(false);
}

//Handle the activity result
@Override
public void result(int requestCode, int resultCode) {
// If a task was successfully added, show snackbar
if (AddEditTaskActivity.REQUEST_ADD_TASK == requestCode && Activity.RESULT_OK == resultCode) {
//Show a Snackbar
mTasksView.showSuccessfullySavedMessage();
}
}

//Load data
@Override
public void loadTasks(boolean forceUpdate) {
// Simplification for sample: a network reload will be forced on first load.
loadTasks(forceUpdate || mFirstLoad, true);
mFirstLoad = false;
}

/**
* @param forceUpdate Pass in true to refresh the data in the {@link TasksDataSource}
* @param showLoadingUI Pass in true to display a loading icon in the UI
*/
//Load data
private void loadTasks(boolean forceUpdate, final boolean showLoadingUI) {

if (showLoadingUI) {
mTasksView.setLoadingIndicator(true);
}

if (forceUpdate) {
mTasksRepository.refreshTasks();
}

// The network request might be handled in a different thread so make sure Espresso knows
// that the app is busy until the response is handled.
EspressoIdlingResource.increment(); // App is busy until further notice

mTasksRepository.getTasks(new TasksDataSource.LoadTasksCallback() {
@Override
public void onTasksLoaded(List<Task> tasks) {

//Build the list of tasks to display
List<Task> tasksToShow = new ArrayList<Task>();

// This callback may be called twice, once for the cache and once for loading
// the data from the server API, so we check before decrementing, otherwise
// it throws "Counter has been corrupted!" exception.
if (!EspressoIdlingResource.getIdlingResource().isIdleNow()) {
EspressoIdlingResource.decrement(); // Set app as idle.
}

// We filter the tasks based on the requestType
for (Task task : tasks) {
//Decide which tasks to show according to mCurrentFiltering
switch (mCurrentFiltering) {
case ALL_TASKS:
tasksToShow.add(task);
break;
case ACTIVE_TASKS:
if (task.isActive()) {
tasksToShow.add(task);
}
break;
case COMPLETED_TASKS:
if (task.isCompleted()) {
tasksToShow.add(task);
}
break;
default:
tasksToShow.add(task);
break;
}
}
// The view may not be able to handle UI updates anymore
if (!mTasksView.isActive()) {
return;
}
if (showLoadingUI) {
mTasksView.setLoadingIndicator(false);
}
//Process the tasks
processTasks(tasksToShow);
}

@Override
public void onDataNotAvailable() {
// The view may not be able to handle UI updates anymore
if (!mTasksView.isActive()) {
return;
}
mTasksView.showLoadingTasksError();
}
});
}

private void processTasks(List<Task> tasks) {
if (tasks.isEmpty()) {
// Show a message indicating there are no tasks for that filter type.
//Show the empty-state message matching the current filter type
processEmptyTasks();
} else {
// Show the list of tasks
mTasksView.showTasks(tasks);
// Set the filter label's text.
showFilterLabel();
}
}

private void showFilterLabel() {
switch (mCurrentFiltering) {
case ACTIVE_TASKS:
mTasksView.showActiveFilterLabel();
break;
case COMPLETED_TASKS:
mTasksView.showCompletedFilterLabel();
break;
default:
mTasksView.showAllFilterLabel();
break;
}
}

private void processEmptyTasks() {
switch (mCurrentFiltering) {
case ACTIVE_TASKS:
mTasksView.showNoActiveTasks();
break;
case COMPLETED_TASKS:
mTasksView.showNoCompletedTasks();
break;
default:
mTasksView.showNoTasks();
break;
}
}

//AddEditTaskActivity
@Override
public void addNewTask() {
mTasksView.showAddTask();
}

//TaskDetailActivity
@Override
public void openTaskDetails(@NonNull Task requestedTask) {
checkNotNull(requestedTask, "requestedTask cannot be null!");
mTasksView.showTaskDetailsUi(requestedTask.getId());
}

@Override
public void completeTask(@NonNull Task completedTask) {
checkNotNull(completedTask, "completedTask cannot be null!");
mTasksRepository.completeTask(completedTask);
mTasksView.showTaskMarkedComplete();
loadTasks(false, false);
}

@Override
public void activateTask(@NonNull Task activeTask) {
checkNotNull(activeTask, "activeTask cannot be null!");
mTasksRepository.activateTask(activeTask);
mTasksView.showTaskMarkedActive();
loadTasks(false, false);
}

@Override
public void clearCompletedTasks() {
mTasksRepository.clearCompletedTasks();
mTasksView.showCompletedTasksCleared();
loadTasks(false, false);
}

/**
* Sets the current task filtering type.
*
* @param requestType Can be {@link TasksFilterType#ALL_TASKS},
* {@link TasksFilterType#COMPLETED_TASKS}, or
* {@link TasksFilterType#ACTIVE_TASKS}
*/
@Override
public void setFiltering(TasksFilterType requestType) {
mCurrentFiltering = requestType;
}

@Override
public TasksFilterType getFiltering() {
return mCurrentFiltering;
}

}

The Presenter holds both the View layer and the data layer. Its methods exist for the View to call; most of its code simply delegates to the View and data layer implementations, and it is only responsible for the overall flow and state logic. The Presenter talks to the data layer through the callback mechanism, with the callback interfaces defined on the data layer interface.
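This decoupling is also what makes the Presenter easy to unit-test on the JVM. A minimal sketch, assuming JUnit 4 and Mockito on the classpath (this exact test is not in the post):

// With the View and repository mocked, the Presenter logic runs without any Android dependency.
public class TasksPresenterTest {

    private final TasksRepository repository = mock(TasksRepository.class);
    private final TasksContract.View view = mock(TasksContract.View.class);

    @Test
    public void addNewTask_showsAddTaskUi() {
        TasksPresenter presenter = new TasksPresenter(repository, view);
        presenter.addNewTask();
        verify(view).showAddTask(); // the Presenter only delegates to the View
    }
}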

Bedtime ramblings, 2016-08-11

  1. Today Zhang Jike, table tennis's "Tibetan Mastiff", beat his opponent while practically sleepwalking, haha. The feeling is that if Ke really wants to win, he can only lose to someone who wants it even more. Congrats!!!!

  2. Tomorrow I start sending out resumes. In the meantime I have a small goal: to take up sketching as a hobby. Below are the tools; I'll upload the results to share later. Oops, the skinny little legs are showing, heh.

  3. This month's bedtime hours go to Na-jie; later I may also write some book reviews, time permitting.

  4. GitHub Pages is intermittently unreachable here without a VPN, so for the past few days I've been hunting for a decent blog host in China to move these hundred-plus posts to. But pasting them over wrecks the formatting, and fixing it all by hand is painful. OSChina looked acceptable, except it only allows 10 uploads a day, so migrating everything would take about half a month. Too tiring; I'm not moving. Anyway, writing here feels better: it's quiet, and I don't write posts just for the sake of posting. Best of all, I can switch themes whenever I like, haha. Time for bed; resumes go out tomorrow.

"If what you want hasn't arrived yet, never give up, and trust that what should come will come." (No idea who said this; if nobody claims it, let's say it was me, haha.)

Bedtime ramblings, 2016-08-14

  1. Blog plans
    Write a custom-View tutorial series that even a beginner (my past self included) can follow.
    Plugin development and hot fixes.
    Analysis of an open-source framework (TBD).

Zygote

As the name suggests, a zygote is a fertilized egg: the beginning of a new life, carrying genetic information inherited from its parents. In Android, Zygote plays exactly that role for application processes; new apps are spawned from it. On Linux, creating a process starts with fork(); most of the forked child's data matches the parent's, that is, the child initially shares the parent's memory pages (copy-on-write). Only when exec() is called are the process's code segment, data segment, stack and so on replaced with the contents of a new executable. In Android, Zygote creates the Dalvik VM during its own initialization, and in that phase loads the system class libraries, resources and shared libraries into memory. Any child process that Zygote later forks inherits all of those resources and does not need to reload them; it only has to load the APK's bytecode to run, which dramatically shortens process startup time.

SystemServer

Normally, when an app needs a service it calls startService to start one. System services, however, don't need that: we just call getSystemService() to obtain them. The reason is what was mentioned above: at boot, Zygote's earliest child is SystemServer, a Java process running on the Dalvik VM that hosts the various system services. In other words, those services are already up and running; we never start them ourselves.
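For example, from any Context (a two-line illustration using the real framework API):

// No startService needed: the service already lives inside SystemServer;
// getSystemService() just hands back a proxy to it.
ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
List<ActivityManager.RunningAppProcessInfo> processes = am.getRunningAppProcesses();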

Below is the whole startup flow of SystemServer and MediaServer. Diagram first; the detailed code analysis will follow later. Yes, I owe another big chunk of writing, haha, but I like it that way. The diagram is probably clearer anyway:

RTSP overview:

RTSP is short for Real Time Streaming Protocol. It provides an extensible framework for controlled, on-demand delivery of real-time data such as audio and video. RTSP offers VCR-style controls over a media stream, like pause and fast-forward, but does not itself carry the data; it acts as a remote control for the streaming server. The data can travel over TCP or UDP at the transport layer, and RTSP also defines some methods that work with RTP-based delivery.

The RTSP model:

Before requesting video service from a media server, the client first fetches a presentation description file for the requested content from a web server over HTTP. In RTSP, each presentation and its media streams are identified by an RTSP URL. The entire presentation and the media characteristics are defined in the presentation description file, which may include the media encodings, language, RTSP URLs, destination address, port, and other parameters. Before requesting a continuous media stream, the client must first obtain this file from the server to get the required parameters, using it to locate the video service address (server address and port) and the stream's encoding.
The client then requests the video service based on that information. Once service initialization is complete, the server sets up a new stream for the client, and client and server run RTSP to exchange VCR-style control commands over the stream: play, pause, fast-forward, rewind, and so on. When the service is finished, the client issues a TEARDOWN request. The server delivers the media data to the client using RTP; as soon as data reaches the client, the application can play it. So a streaming session uses two different protocol stacks between client and server: RTP/RTCP for data and RTSP/TCP for control. As shown below:

RTSP message format:
  • Request format:
Method URI RTSP-Version CR LF
Headers CR LF CR LF
Body CR LF

The Method is any of the commands listed in the OPTIONS response, and the URI is the receiver's address, for example
rtsp://192.168.20.136

The RTSP version is normally RTSP/1.0. The CR LF at the end of each line is a carriage return/line feed that the receiver must parse, and the last header is followed by two CR LF pairs.

  • Response format:
RTSP-Version Status-Code Reason-Phrase CR LF
Headers CR LF CR LF
Body CR LF

The RTSP version is normally RTSP/1.0; the status code is a number (200 means success), and the reason phrase is the human-readable text that goes with the status code.
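Putting the two formats together, a minimal concrete exchange looks like this (an illustrative example with a made-up address):

C->S:
OPTIONS rtsp://192.168.20.136:554/test RTSP/1.0
CSeq: 1

S->C:
RTSP/1.0 200 OK
CSeq: 1
Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE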

A simple RTSP exchange:

Taking one streaming playback session as an example, here is the flow of RTSP state transitions for the whole session,
where C is the RTSP client and S is the RTSP server:

C->S:OPTIONS request        //ask the server which methods are available
S->C:OPTIONS response //the reply lists all the methods the server supports

C->S:DESCRIBE request //ask for the server's media initialization description
S->C:DESCRIBE response //the server returns the description, mainly SDP

C->S:SETUP request //set session properties and transport mode; asks the server to create a session
S->C:SETUP response //the server creates the session and returns the session identifier and related info

C->S:PLAY request //the client requests playback
S->C:PLAY response //the server acknowledges the request

S->C: //the media data is streamed

C->S:TEARDOWN request //the client asks to close the session
S->C:TEARDOWN response //the server acknowledges

Of these, only SETUP and PLAY are required.
If server and client have agreed in advance on which methods are available, the OPTIONS request can be skipped.
If we can obtain the media initialization description through some other channel, the DESCRIBE request isn't needed either.
Whether to use TEARDOWN can be decided by the needs of the system design.

The main RTSP commands:





RTSP status codes:
Status-Code =
| "100" ; Continue
| "200" ; OK
| "201" ; Created
| "250" ; Low on Storage Space
| "300" ; Multiple Choices
| "301" ; Moved Permanently
| "302" ; Moved Temporarily
| "303" ; See Other
| "304" ; Not Modified
| "305" ; Use Proxy
| "400" ; Bad Request
| "401" ; Unauthorized
| "402" ; Payment Required
| "403" ; Forbidden
| "404" ; Not Found
| "405" ; Method Not Allowed
| "406" ; Not Acceptable
| "407" ; Proxy Authentication Required
| "408" ; Request Time-out
| "410" ; Gone
| "411" ; Length Required
| "412" ; Precondition Failed
| "413" ; Request Entity Too Large
| "414" ; Request-URI Too Large
| "415" ; Unsupported Media Type
| "451" ; Parameter Not Understood
| "452" ; Conference Not Found
| "453" ; Not Enough Bandwidth
| "454" ; Session Not Found
| "455" ; Method Not Valid in This State
| "456" ; Header Field Not Valid for Resource
| "457" ; Invalid Range
| "458" ; Parameter Is Read-Only
| "459" ; Aggregate operation not allowed
| "460" ; Only aggregate operation allowed
| "461" ; Unsupported transport
| "462" ; Destination unreachable
| "500" ; Internal Server Error
| "501" ; Not Implemented
| "502" ; Bad Gateway
| "503" ; Service Unavailable
| "504" ; Gateway Time-out
| "505" ; RTSP Version not supported
| "551" ; Option not supported

SDP format:
v=<version>                            (protocol version)
o=<username> <session id> <version> <network type> <address type> <address> (owner/creator and session identifier)
s=<session name> (session name)
i=<session description> (session information)
u=<URI> (URI description)
e=<email address> (email address)
p=<phone number> (phone number)
c=<network type> <address type> <connection address> (connection information)
b=<modifier>:<bandwidth-value> (bandwidth information)
t=<start time> <stop time> (session active time)
r=<repeat interval> <active duration> <list of offsets from start-time>
(zero or more repeat counts)
z=<adjustment time> <offset> <adjustment time> <offset> (time zone adjustments)
k=<method>:<encryption key> (encryption key)
a=<attribute>:<value> (zero or more session attribute lines)
m=<media> <port> <transport> <fmt list> (media name and transport address)

Time description:
t = (session active time)
r = * (zero or more repeat counts)
Media description:
m = (media name and transport address)
i = * (media title)
c = * (connection information; optional if given at the session level)
b = * (bandwidth information)
k = * (encryption key)
a = * (zero or more media attribute lines)
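For reference, an illustrative (made-up) SDP body returned by a DESCRIBE might look like this:

v=0
o=- 1376543210 1 IN IP4 192.168.20.136
s=Example Stream
c=IN IP4 192.168.20.136
t=0 0
m=audio 0 RTP/AVP 96
a=rtpmap:96 MP4A-LATM/44100/2
a=control:track1
m=video 0 RTP/AVP 97
a=rtpmap:97 H264/90000
a=control:track2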
The RTP protocol:

The Real-time Transport Protocol (RTP) is a transport protocol for carrying streaming media in unicast or multicast scenarios. It usually rides on UDP, though TCP, ATM or other protocols can also serve as its carrier. RTP consists of two closely related parts: the RTP data protocol and the RTP control protocol (that is, RTCP).
RTP provides timing information and stream synchronization for end-to-end real-time transport on the Internet, but it does not guarantee quality of service; that feedback is RTCP's job.

  • A brief outline of an RTP session:

When an application establishes an RTP session, it determines a pair of destination transport addresses: one network address plus a pair of ports, one for RTP packets and one for RTCP packets. RTP and RTCP packets travel separately so that both kinds of data can be delivered correctly. RTP data goes to an even UDP port, and the corresponding RTCP control data to the next higher odd UDP port, forming a UDP port pair (for example, RTP on UDP 5004 and RTCP on UDP 5005).
When sending, RTP receives the media bitstream from the upper layer and packs it into RTP packets, while RTCP receives control information and packs it into RTCP packets. RTP packets go to the even port of the pair; RTCP packets go to the odd receiving port.
If a conference uses both audio and video, the two media are carried in separate RTP sessions, each with its own transport address (IP address + port). If a participant takes part in both sessions, the RTCP packets of each session carry the same canonical name, CNAME. Receivers use the CNAME in the RTCP packets to associate the related audio and video streams, then use the timing information (Network Time Protocol timestamps) in the RTCP packets to synchronize them.

  • Translators and mixers
    RTP also defines translators and mixers, both of which are RTP-level relay systems.
    Where mixers are used:
    In an Internet video conference, a few participants may be connected over slow links while the majority use a fast network. Rather than force every participant down to low-bandwidth, low-quality encoding, RTP allows a mixer to be placed near the low-bandwidth region as an RTP-level relay. The mixer receives RTP packets from one or more sources, resynchronizes and recombines the arriving packets, mixes the reorganized streams into a single stream, transcodes it into an encoding usable at low bandwidth, and forwards it over the slow link into the low-bandwidth region. To synchronize multiple input sources consistently, the mixer adjusts timing across the media streams and generates its own timing, so every packet leaving the mixer has the mixer as its synchronization source. So that receivers can still correctly identify the original senders of the pre-mix packets, the mixer fills the CSRC list in the RTP header with the identifiers of the original synchronization sources that contributed to the mixed packet.
    Where translators are used:
    In the Internet environment, some conference participants may sit behind an application-level firewall that forbids direct access via IP multicast addresses, even though they may be connected over fast links. In such cases RTP allows a translator to act as an RTP-level relay. One translator is installed on each side of the firewall: the outside translator filters all the multicast packets it receives and forwards them over a secure connection to the translator inside the firewall, which then re-multicasts them to the members of the multicast group on the internal network.
RTP header format


RTCP header format

As mentioned above, RTCP's main functions are monitoring and feedback on quality of service, inter-media synchronization, and identifying the members of a multicast group. During an RTP session, each participant periodically sends RTCP packets. An RTCP packet carries statistics such as the number of packets sent and the number lost, so participants can use this information to adjust their sending rate dynamically, or even change the payload type. RTP and RTCP together optimize transport efficiency with effective feedback and minimal overhead, which makes them especially suitable for carrying real-time data over the network. RTCP is also carried over UDP, but since it encapsulates only control information, its packets are short, and several RTCP packets can be bundled into a single UDP datagram.
RTCP has the following five packet types:
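  • SR (Sender Report, PT=200)
  • RR (Receiver Report, PT=201)
  • SDES (Source Description, PT=202)
  • BYE (Goodbye, PT=203)
  • APP (Application-defined, PT=204)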

Here is the format of an SR packet:

The RTSP code flow in NuPlayer

The setDataSource phase won't be repeated here. It mainly sets up the playback engine and creates the Source matching the URL's scheme, in this case the RTSPSource discussed below, and assigns it to mSource.

Let's go straight to the prepare phase:

Diagram first, then code; it is clearer when read alongside the picture.



In the prepare phase we first check whether the input is an SDP file; the mIsSDP flag is passed in when the RTSPSource is created. Let's start with the mIsSDP = false case: a MyHandler is created and its connect() is called to establish the connection with the server.

void NuPlayer::RTSPSource::prepareAsync() {

//..........
sp<AMessage> notify = new AMessage(kWhatNotify, this);

//Check that the current state is DISCONNECTED
CHECK_EQ(mState, (int)DISCONNECTED);
//Set the current state to CONNECTING
mState = CONNECTING;

if (mIsSDP) {
//For SDP input, create an SDPLoader to fetch the description file from the server
mSDPLoader = new SDPLoader(notify, (mFlags & kFlagIncognito) ? SDPLoader::kFlagIncognito : 0, mHTTPService);
mSDPLoader->load(mURL.c_str(), mExtraHeaders.isEmpty() ? NULL : &mExtraHeaders);
} else {
//Otherwise use MyHandler to make the connection
mHandler = new MyHandler(mURL.c_str(), notify, mUIDValid, mUID);
mLooper->registerHandler(mHandler);
mHandler->connect();
}
//Start buffering
startBufferingIfNecessary();
}

Before looking at connect we need two member variables: mConn is an ARTSPConnection, which talks to the server, sending requests and receiving replies; mRTPConn is an ARTPConnection, used to send and receive the media data.
In connect, mConn is used to initiate the connection request to the server.

void connect() {
//mConn(new ARTSPConnection(mUIDValid, mUID)),
looper()->registerHandler(mConn);
//mRTPConn(new ARTPConnection),
(1 ? mNetLooper : looper())->registerHandler(mRTPConn);
sp<AMessage> notify = new AMessage('biny', this);
mConn->observeBinaryData(notify);
//Connect to the server
sp<AMessage> reply = new AMessage('conn', this);
mConn->connect(mOriginalSessionURL.c_str(), reply);
}
void ARTSPConnection::connect(const char *url, const sp<AMessage> &reply) {
//Post a kWhatConnect message; note that it carries the url and the reply
sp<AMessage> msg = new AMessage(kWhatConnect, this);
msg->setString("url", url);
msg->setMessage("reply", reply);
msg->post();
}
case kWhatConnect:
onConnect(msg);
break;

ARTSPConnection::onConnect parses host, port, path, mUser and mPass out of the URL it was handed, calls ::connect to reach the server, and finally calls postReceiveReponseEvent to start listening for the server's replies.

void ARTSPConnection::onConnect(const sp<AMessage> &msg) {
++mConnectionID;

if (mState != DISCONNECTED) {
if (mUIDValid) {
HTTPBase::UnRegisterSocketUserTag(mSocket);
HTTPBase::UnRegisterSocketUserMark(mSocket);
}
close(mSocket);
mSocket = -1;
flushPendingRequests();
}

mState = CONNECTING;
AString url;
//Take the url out of the message
CHECK(msg->findString("url", &url));
sp<AMessage> reply;
//Take the reply out of the message
CHECK(msg->findMessage("reply", &reply));

AString host, path;
unsigned port;
//Parse host, port, path, mUser, mPass out of the URL
if (!ParseURL(url.c_str(), &host, &port, &path, &mUser, &mPass)
|| (mUser.size() > 0 && mPass.size() == 0)) {

//A user name without a password: give up and return an error
// If we have a user name but no password we have to give up
// right here, since we currently have no way of asking the user
// for this information.
ALOGE("Malformed rtsp url %s", uriDebugString(url).c_str());
reply->setInt32("result", ERROR_MALFORMED);
reply->post();
mState = DISCONNECTED;
return;
}

if (mUser.size() > 0) {
ALOGV("user = '%s', pass = '%s'", mUser.c_str(), mPass.c_str());
}

struct hostent *ent = gethostbyname(host.c_str());
if (ent == NULL) {
ALOGE("Unknown host %s", host.c_str());
reply->setInt32("result", -ENOENT);
reply->post();
mState = DISCONNECTED;
return;
}

mSocket = socket(AF_INET, SOCK_STREAM, 0);

if (mUIDValid) {
HTTPBase::RegisterSocketUserTag(mSocket, mUID,(uint32_t)*(uint32_t*) "RTSP");
HTTPBase::RegisterSocketUserMark(mSocket, mUID);
}

MakeSocketBlocking(mSocket, false);

struct sockaddr_in remote;
memset(remote.sin_zero, 0, sizeof(remote.sin_zero));
remote.sin_family = AF_INET;
remote.sin_addr.s_addr = *(in_addr_t *)ent->h_addr;
remote.sin_port = htons(port);
//Connect to the server
int err = ::connect(mSocket, (const struct sockaddr *)&remote, sizeof(remote));

//Report the server ip back
reply->setInt32("server-ip", ntohl(remote.sin_addr.s_addr));

if (err < 0) {
if (errno == EINPROGRESS) {
sp<AMessage> msg = new AMessage(kWhatCompleteConnection, this);
msg->setMessage("reply", reply);
msg->setInt32("connection-id", mConnectionID);
msg->post();
return;
}

reply->setInt32("result", -errno);
mState = DISCONNECTED;

if (mUIDValid) {
HTTPBase::UnRegisterSocketUserTag(mSocket);
HTTPBase::UnRegisterSocketUserMark(mSocket);
}
close(mSocket);
mSocket = -1;
} else {
//On success, return result OK
reply->setInt32("result", OK);
//Set the state to CONNECTED
mState = CONNECTED;
mNextCSeq = 1;
//Post the event that waits for the server's response
postReceiveReponseEvent();
}
//replies to the 'conn' message
reply->post();
}

Next, let's look at postReceiveReponseEvent:

void ARTSPConnection::postReceiveReponseEvent() {
if (mReceiveResponseEventPending) {
return;
}
sp<AMessage> msg = new AMessage(kWhatReceiveResponse, this);
msg->post();
mReceiveResponseEventPending = true;
}

onReceiveResponse then calls receiveRTSPReponse to obtain the server's reply:

void ARTSPConnection::onReceiveResponse() {
mReceiveResponseEventPending = false;
if (mState != CONNECTED) {
return;
}
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = kSelectTimeoutUs;
fd_set rs;
FD_ZERO(&rs);
FD_SET(mSocket, &rs);

//Wait for the socket to become readable
int res = select(mSocket + 1, &rs, NULL, NULL, &tv);

if (res == 1) {
MakeSocketBlocking(mSocket, true);
bool success = receiveRTSPReponse();
MakeSocketBlocking(mSocket, false);
if (!success) {
// Something horrible, irreparable has happened.
flushPendingRequests();
return;
}
}
postReceiveReponseEvent();
}

Note that receiveRTSPReponse does double duty: on one hand it can receive requests initiated by the server, and on the other it handles the server's answers to our own requests.

bool ARTSPConnection::receiveRTSPReponse() {

AString statusLine;
if (!receiveLine(&statusLine)) {
return false;
}
if (statusLine == "$") {
sp<ABuffer> buffer = receiveBinaryData();
if (buffer == NULL) {
return false;
}
if (mObserveBinaryMessage != NULL) {
sp<AMessage> notify = mObserveBinaryMessage->dup();
notify->setBuffer("buffer", buffer);
notify->post();
} else {
ALOGW("received binary data, but no one cares.");
}
return true;
}

//The RTSP response object
sp<ARTSPResponse> response = new ARTSPResponse;
response->mStatusLine = statusLine;
ALOGI("status: %s", response->mStatusLine.c_str());
ssize_t space1 = response->mStatusLine.find(" ");
if (space1 < 0) {
return false;
}
ssize_t space2 = response->mStatusLine.find(" ", space1 + 1);
if (space2 < 0) {
return false;
}

bool isRequest = false;
//Check whether the returned RTSP version is valid
if (!IsRTSPVersion(AString(response->mStatusLine, 0, space1))) {
CHECK(IsRTSPVersion(AString(response->mStatusLine,space2 + 1,response->mStatusLine.size() - space2 - 1)));
isRequest = true;
response->mStatusCode = 0;
} else {
//Check whether the status code is valid
AString statusCodeStr(response->mStatusLine, space1 + 1, space2 - space1 - 1);
if (!ParseSingleUnsignedLong(statusCodeStr.c_str(), &response->mStatusCode) || response->mStatusCode < 100 || response->mStatusCode > 999) {
return false;
}
}

AString line;
ssize_t lastDictIndex = -1;
for (;;) {
if (!receiveLine(&line)) {
break;
}
if (line.empty()) {
break;
}
ALOGV("line: '%s'", line.c_str());
if (line.c_str()[0] == ' ' || line.c_str()[0] == '\t') {
// Support for folded header values.
if (lastDictIndex < 0) {
// First line cannot be a continuation of the previous one.
return false;
}
AString &value = response->mHeaders.editValueAt(lastDictIndex);
value.append(line);

continue;
}
ssize_t colonPos = line.find(":");
if (colonPos < 0) {
// Malformed header line.
return false;
}
AString key(line, 0, colonPos);
key.trim();
key.tolower();
line.erase(0, colonPos + 1);
lastDictIndex = response->mHeaders.add(key, line);
}

for (size_t i = 0; i < response->mHeaders.size(); ++i) {
response->mHeaders.editValueAt(i).trim();
}

unsigned long contentLength = 0;

ssize_t i = response->mHeaders.indexOfKey("content-length");

if (i >= 0) {
AString value = response->mHeaders.valueAt(i);
if (!ParseSingleUnsignedLong(value.c_str(), &contentLength)) {
return false;
}
}
//Receive mContent
if (contentLength > 0) {
response->mContent = new ABuffer(contentLength);
if (receive(response->mContent->data(), contentLength) != OK) {
return false;
}
}
//isRequest means the server initiated this message, so call handleServerRequest; otherwise the server is answering one of our requests, so notify the listener via notifyResponseListener
return isRequest
? handleServerRequest(response)
: notifyResponseListener(response);
}

isRequest means the message is a request the server initiated, in which case handleServerRequest is called; otherwise it is the server's response to one of our own requests, and notifyResponseListener delivers it. Let's look at both methods:

handleServerRequest may be a bit of a letdown: the feature isn't implemented here, so it simply sends an "RTSP/1.0 501 Not Implemented" message back to the server.

bool ARTSPConnection::handleServerRequest(const sp<ARTSPResponse> &request) {
// Implementation of server->client requests is optional for all methods
// but we do need to respond, even if it's just to say that we don't
// support the method.

//We don't implement any of the server->client methods; we just report that the method isn't implemented
ssize_t space1 = request->mStatusLine.find(" ");
CHECK_GE(space1, 0);
AString response;
response.append("RTSP/1.0 501 Not Implemented\r\n");
ssize_t i = request->mHeaders.indexOfKey("cseq");
if (i >= 0) {
AString value = request->mHeaders.valueAt(i);
unsigned long cseq;
if (!ParseSingleUnsignedLong(value.c_str(), &cseq)) {
return false;
}
response.append("CSeq: ");
response.append(cseq);
response.append("\r\n");
}
response.append("\r\n");
size_t numBytesSent = 0;
while (numBytesSent < response.size()) {
ssize_t n =
send(mSocket, response.c_str() + numBytesSent,
response.size() - numBytesSent, 0);
if (n < 0 && errno == EINTR) {
continue;
}
if (n <= 0) {
if (n == 0) {
// Server closed the connection.
ALOGE("Server unexpectedly closed the connection.");
} else {
ALOGE("Error sending rtsp response (%s).", strerror(errno));
}
performDisconnect();
return false;
}
numBytesSent += (size_t)n;
}
return true;
}

notifyResponseListener is clearer: based on the reply the server sent, it finds the pending request message that the response answers, then hands the response back to MyHandler for processing.

bool ARTSPConnection::notifyResponseListener(
const sp<ARTSPResponse> &response) {
ssize_t i;
//Find the matching pending request in the queue
status_t err = findPendingRequest(response, &i);
if (err != OK) {
return false;
}
//Deliver the server's response to it
sp<AMessage> reply = mPendingRequests.valueAt(i);
mPendingRequests.removeItemsAt(i);
reply->setInt32("result", OK);
reply->setObject("response", response);
reply->post();
return true;
}

Back on track: let's see how MyHandler handles the 'conn' reply:

case 'conn':
{
int32_t result;
//Take out the result
CHECK(msg->findInt32("result", &result));
if (result == OK) {
//Send the DESCRIBE request
AString request;
request = "DESCRIBE ";
request.append(mSessionURL);
request.append(" RTSP/1.0\r\n");
request.append("Accept: application/sdp\r\n");
request.append("\r\n");
sp<AMessage> reply = new AMessage('desc', this);
mConn->sendRequest(request.c_str(), reply);
} else {
(new AMessage('disc', this))->post();
}
break;
}

This part is simple: when the reply arrives we check whether the result is OK, and if it is, we send a DESCRIBE request. The method to focus on is onSendRequest; understanding it matters:
onSendRequest massages the request (adding the CSeq header and so on), then calls send to push it to the server, and finally files the request into the pending-request queue, keyed by CSeq with the reply message as the value.

void ARTSPConnection::onSendRequest(const sp<AMessage> &msg) {
sp<AMessage> reply;
CHECK(msg->findMessage("reply", &reply));
//Massage the request
AString request;
CHECK(msg->findString("request", &request));
// Just in case we need to re-issue the request with proper authentication
// later, stash it away.
reply->setString("original-request", request.c_str(), request.size());
addAuthentication(&request);
addUserAgent(&request);
// Find the boundary between headers and the body.
ssize_t i = request.find("\r\n\r\n");
CHECK_GE(i, 0);
int32_t cseq = mNextCSeq++;
AString cseqHeader = "CSeq: ";
cseqHeader.append(cseq);
cseqHeader.append("\r\n");
request.insert(cseqHeader, i + 2);
ALOGV("request: '%s'", request.c_str());

size_t numBytesSent = 0;
while (numBytesSent < request.size()) {
//Keep sending until the whole request has been written
ssize_t n = send(mSocket, request.c_str() + numBytesSent, request.size() - numBytesSent, 0);
//(error-handling code omitted)
numBytesSent += (size_t)n;
}
//Add the request to mPendingRequests to await the server's reply
mPendingRequests.add(cseq, reply);
}

Go back to notifyResponseListener above, put it together with onSendRequest and findPendingRequest, and you should be able to see the whole event-handling flow.

status_t ARTSPConnection::findPendingRequest(
const sp<ARTSPResponse> &response, ssize_t *index) const {
*index = 0;

ssize_t i = response->mHeaders.indexOfKey("cseq");

if (i < 0) {
// This is an unsolicited server->client message.
*index = -1;
return OK;
}
AString value = response->mHeaders.valueAt(i);

unsigned long cseq;
if (!ParseSingleUnsignedLong(value.c_str(), &cseq)) {
return ERROR_MALFORMED;
}
i = mPendingRequests.indexOfKey(cseq);
if (i < 0) {
return -ENOENT;
}
*index = i;
return OK;
}

onSendRequest keeps adding requests to mPendingRequests. Each time the server answers, notifyResponseListener takes the matching entry out of mPendingRequests and posts the response to MyHandler for processing, while the receive loop posts another event and goes back to waiting for the server's next reply.
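Stripped of the RTSP specifics, this is the classic CSeq-keyed pending-request pattern. A sketch in Java, for illustration only (the real code is the C++ above; Callback here is a hypothetical stand-in for the reply AMessage):

import java.util.HashMap;
import java.util.Map;

interface Callback { void onResponse(String response); }

class PendingRequests {
    private final Map<Integer, Callback> mPending = new HashMap<>();
    private int mNextCSeq = 1;

    // Tag the outgoing request with a CSeq and remember who wants the answer.
    int send(String request, Callback reply) {
        int cseq = mNextCSeq++;
        mPending.put(cseq, reply);
        // ...the transport would now write the request plus "CSeq: " + cseq to the socket...
        return cseq;
    }

    // Match an incoming response to its request by the CSeq header.
    void onResponse(int cseq, String response) {
        Callback reply = mPending.remove(cseq);
        if (reply != null) reply.onResponse(response);
    }
}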

OK, next the handling of the 'desc' reply:

case 'desc':
{
int32_t result;
CHECK(msg->findInt32("result", &result));

if (result == OK) {
sp<RefBase> obj;
CHECK(msg->findObject("response", &obj));
sp<ARTSPResponse> response = static_cast<ARTSPResponse *>(obj.get());

if (response->mStatusCode == 301 || response->mStatusCode == 302) {
//Redirect
//............
}

if (response->mStatusCode != 200) {
result = UNKNOWN_ERROR;
} else if (response->mContent == NULL) {
result = ERROR_MALFORMED;
ALOGE("The response has no content.");
} else {
//Build the ASessionDescription
mSessionDesc = new ASessionDescription;
mSessionDesc->setTo(response->mContent->data(),response->mContent->size());

if (!mSessionDesc->isValid()) {
//............
} else {
//............
if (mSessionDesc->countTracks() < 2) {
// There's no actual tracks in this session.
// The first "track" is merely session meta
// data.
ALOGW("Session doesn't contain any playable "
"tracks. Aborting.");
result = ERROR_UNSUPPORTED;
} else {
//This is the code that actually matters
setupTrack(1);
}
}
}
}
break;
}

The code above is long; skipping the unimportant parts, let's go straight to setupTrack.

void setupTrack(size_t index) {

sp<APacketSource> source = new APacketSource(mSessionDesc, index);
AString url;
CHECK(mSessionDesc->findAttribute(index, "a=control", &url));
AString trackURL;
//Get the media track's URI
CHECK(MakeURL(mBaseURL.c_str(), url.c_str(), &trackURL));

mTracks.push(TrackInfo());
TrackInfo *info = &mTracks.editItemAt(mTracks.size() - 1);
//Store the uri
info->mURL = trackURL;
//Store the APacketSource
info->mPacketSource = source;
info->mUsingInterleavedTCP = false;
info->mFirstSeqNumInSegment = 0;
info->mNewSegment = true;
info->mRTPSocket = -1;
info->mRTCPSocket = -1;
info->mRTPAnchor = 0;
info->mNTPAnchorUs = -1;
info->mNormalPlayTimeRTP = 0;
info->mNormalPlayTimeUs = 0ll;

unsigned long PT;
AString formatDesc;
AString formatParams;
mSessionDesc->getFormatType(index, &PT, &formatDesc, &formatParams);

int32_t timescale;
int32_t numChannels;
ASessionDescription::ParseFormatDesc(formatDesc.c_str(), &timescale, &numChannels);

info->mTimeScale = timescale;
info->mEOSReceived = false;

ALOGV("track #%zu URL=%s", mTracks.size(), trackURL.c_str());

//Build the SETUP request
AString request = "SETUP ";
request.append(trackURL);
request.append(" RTSP/1.0\r\n");
if (mTryTCPInterleaving) {
size_t interleaveIndex = 2 * (mTracks.size() - 1);
info->mUsingInterleavedTCP = true;
info->mRTPSocket = interleaveIndex;
info->mRTCPSocket = interleaveIndex + 1;
request.append("Transport: RTP/AVP/TCP;interleaved=");
request.append(interleaveIndex);
request.append("-");
request.append(interleaveIndex + 1);
} else {
unsigned rtpPort;
ARTPConnection::MakePortPair(
&info->mRTPSocket, &info->mRTCPSocket, &rtpPort);
if (mUIDValid) {
HTTPBase::RegisterSocketUserTag(info->mRTPSocket, mUID,
(uint32_t)*(uint32_t*) "RTP_");
HTTPBase::RegisterSocketUserTag(info->mRTCPSocket, mUID,
(uint32_t)*(uint32_t*) "RTP_");
HTTPBase::RegisterSocketUserMark(info->mRTPSocket, mUID);
HTTPBase::RegisterSocketUserMark(info->mRTCPSocket, mUID);
}
request.append("Transport: RTP/AVP/UDP;unicast;client_port=");
request.append(rtpPort);
request.append("-");
request.append(rtpPort + 1);
}
request.append("\r\n");
if (index > 1) {
request.append("Session: ");
request.append(mSessionID);
request.append("\r\n");
}
request.append("\r\n");
sp<AMessage> reply = new AMessage('setu', this);
reply->setSize("index", index);
reply->setSize("track-index", mTracks.size() - 1);
mConn->sendRequest(request.c_str(), reply);
}

The logic here is also simple: store the information of the media track to be fetched into mTracks, then issue the SETUP request via sendRequest. We won't go through sendRequest again; let's look directly at the handling once 'setu' comes back:

case 'setu':
{
size_t index;
CHECK(msg->findSize("index", &index));
TrackInfo *track = NULL;
size_t trackIndex;
if (msg->findSize("track-index", &trackIndex)) {
track = &mTracks.editItemAt(trackIndex);
}

int32_t result;
CHECK(msg->findInt32("result", &result));
if (result == OK) {
CHECK(track != NULL);
sp<RefBase> obj;
CHECK(msg->findObject("response", &obj));
sp<ARTSPResponse> response =
static_cast<ARTSPResponse *>(obj.get());

if (response->mStatusCode != 200) {

} else {
ssize_t i = response->mHeaders.indexOfKey("session");
CHECK_GE(i, 0);
//Get the SessionID
mSessionID = response->mHeaders.valueAt(i);
mKeepAliveTimeoutUs = kDefaultKeepAliveTimeoutUs;
AString timeoutStr;
//........................
sp<AMessage> notify = new AMessage('accu', this);
notify->setSize("track-index", trackIndex);
i = response->mHeaders.indexOfKey("transport");
CHECK_GE(i, 0);
if (track->mRTPSocket != -1 && track->mRTCPSocket != -1) {
if (!track->mUsingInterleavedTCP) {
AString transport = response->mHeaders.valueAt(i);
// We are going to continue even if we were
// unable to poke a hole into the firewall...
pokeAHole(
track->mRTPSocket,
track->mRTCPSocket,
transport);
}
mRTPConn->addStream(
track->mRTPSocket, track->mRTCPSocket,
mSessionDesc, index,
notify, track->mUsingInterleavedTCP);
mSetupTracksSuccessful = true;
} else {
result = BAD_VALUE;
}
}
}

The most important things above are obtaining the SessionID and calling mRTPConn->addStream to register a stream with the ARTPConnection. Let's look at addStream:

void ARTPConnection::addStream(
int rtpSocket, int rtcpSocket,
const sp<ASessionDescription> &sessionDesc,
size_t index,
const sp<AMessage> &notify,
bool injected) {
sp<AMessage> msg = new AMessage(kWhatAddStream, this);
msg->setInt32("rtp-socket", rtpSocket);
msg->setInt32("rtcp-socket", rtcpSocket);
msg->setObject("session-desc", sessionDesc);
msg->setSize("index", index);
msg->setMessage("notify", notify);
msg->setInt32("injected", injected);
msg->post();
}
case kWhatAddStream:
{
onAddStream(msg);
break;
}
void ARTPConnection::onAddStream(const sp<AMessage> &msg) {
//Append the stream info to mStreams
mStreams.push_back(StreamInfo());
StreamInfo *info = &*--mStreams.end();
int32_t s;
//Get the rtp-socket
CHECK(msg->findInt32("rtp-socket", &s));
info->mRTPSocket = s;
//Get the rtcp-socket
CHECK(msg->findInt32("rtcp-socket", &s));
info->mRTCPSocket = s;

int32_t injected;
CHECK(msg->findInt32("injected", &injected));

info->mIsInjected = injected;
//Get the session-desc
sp<RefBase> obj;
CHECK(msg->findObject("session-desc", &obj));
info->mSessionDesc = static_cast<ASessionDescription *>(obj.get());

CHECK(msg->findSize("index", &info->mIndex));
CHECK(msg->findMessage("notify", &info->mNotifyMsg));

info->mNumRTCPPacketsReceived = 0;
info->mNumRTPPacketsReceived = 0;
memset(&info->mRemoteRTCPAddr, 0, sizeof(info->mRemoteRTCPAddr));

//Post the polling event
if (!injected) {
postPollEvent();
}
}

The part of the code above to focus on is postPollEvent:

void ARTPConnection::postPollEvent() {
if (mPollEventPending) {
return;
}
sp<AMessage> msg = new AMessage(kWhatPollStreams, this);
msg->post();
mPollEventPending = true;
}
case kWhatPollStreams:
{
onPollStreams();
break;
}
void ARTPConnection::onPollStreams() {
mPollEventPending = false;

if (mStreams.empty()) {
return;
}

struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = kSelectTimeoutUs;

fd_set rs;
FD_ZERO(&rs);

int maxSocket = -1;
for (List<StreamInfo>::iterator it = mStreams.begin();
it != mStreams.end(); ++it) {
if ((*it).mIsInjected) {
continue;
}
FD_SET(it->mRTPSocket, &rs);
FD_SET(it->mRTCPSocket, &rs);
if (it->mRTPSocket > maxSocket) {
maxSocket = it->mRTPSocket;
}
if (it->mRTCPSocket > maxSocket) {
maxSocket = it->mRTCPSocket;
}
}

if (maxSocket == -1) {
return;
}

//Wait for any of the sockets to become readable
int res = select(maxSocket + 1, &rs, NULL, NULL, &tv);

if (res > 0) {
//Receive the data the server sent
List<StreamInfo>::iterator it = mStreams.begin();
while (it != mStreams.end()) {
if ((*it).mIsInjected) {
++it;
continue;
}
status_t err = OK;
//Receive RTP data from the server
if (FD_ISSET(it->mRTPSocket, &rs)) {
//This calls status_t ARTPConnection::receive(StreamInfo *s, bool receiveRTP)
err = receive(&*it, true);
}
//Receive RTCP data from the server
if (err == OK && FD_ISSET(it->mRTCPSocket, &rs)) {
//This calls status_t ARTPConnection::receive(StreamInfo *s, bool receiveRTP)
err = receive(&*it, false);
}
++it;
}
}

int64_t nowUs = ALooper::GetNowUs();
if (mLastReceiverReportTimeUs <= 0|| mLastReceiverReportTimeUs + 5000000ll <= nowUs) {
//Allocate a buffer
sp<ABuffer> buffer = new ABuffer(kMaxUDPSize);
List<StreamInfo>::iterator it = mStreams.begin();
while (it != mStreams.end()) {
StreamInfo *s = &*it;
if (s->mIsInjected) {
++it;
continue;
}
if (s->mNumRTCPPacketsReceived == 0) {
// We have never received any RTCP packets on this stream,
// we don't even know where to send a report.
++it;
continue;
}

buffer->setRange(0, 0);
for (size_t i = 0; i < s->mSources.size(); ++i) {
sp<ARTPSource> source = s->mSources.valueAt(i);
//Fill the buffer with the receiver report
source->addReceiverReport(buffer);
if (mFlags & kRegularlyRequestFIR) {
source->addFIR(buffer);
}
}
if (buffer->size() > 0) {
ALOGV("Sending RR...");
ssize_t n;
do {
//Send it through the RTCP socket
n = sendto(s->mRTCPSocket, buffer->data(), buffer->size(), 0,(const struct sockaddr *)&s->mRemoteRTCPAddr, sizeof(s->mRemoteRTCPAddr));
} while (n < 0 && errno == EINTR);
CHECK_EQ(n, (ssize_t)buffer->size());
mLastReceiverReportTimeUs = nowUs;
}
++it;
}
}

if (!mStreams.empty()) {
postPollEvent();
}
}

onPollStreams polls (via select) for media data arriving from the server, calls receive to process it, and periodically sends an RTCP receiver report back to the server.

status_t ARTPConnection::receive(StreamInfo *s, bool receiveRTP) {

ALOGV("receiving %s", receiveRTP ? "RTP" : "RTCP");

CHECK(!s->mIsInjected);

sp<ABuffer> buffer = new ABuffer(65536);

socklen_t remoteAddrLen =
(!receiveRTP && s->mNumRTCPPacketsReceived == 0)
? sizeof(s->mRemoteRTCPAddr) : 0;

ssize_t nbytes;
do {
//Receive data from the server
nbytes = recvfrom(
receiveRTP ? s->mRTPSocket : s->mRTCPSocket,
buffer->data(),
buffer->capacity(),
0,
remoteAddrLen > 0 ? (struct sockaddr *)&s->mRemoteRTCPAddr : NULL,
remoteAddrLen > 0 ? &remoteAddrLen : NULL);
} while (nbytes < 0 && errno == EINTR);

if (nbytes <= 0) {
return -ECONNRESET;
}

buffer->setRange(0, nbytes);

// ALOGI("received %d bytes.", buffer->size());

status_t err;
//Parse as RTP or as RTCP
if (receiveRTP) {
err = parseRTP(s, buffer);
} else {
err = parseRTCP(s, buffer);
}

return err;
}

receive calls recvfrom to read the data from the server into a buffer, then hands the buffer to parseRTP or parseRTCP for processing.

status_t ARTPConnection::parseRTP(StreamInfo *s, const sp<ABuffer> &buffer) {

const uint8_t *data = buffer->data();
size_t size = buffer->size(); // note: used throughout below; this line was dropped from the original paste

if ((data[0] >> 6) != 2) {
// Unsupported version.
return -1;
}

if (data[0] & 0x20) {
// Padding present.

size_t paddingLength = data[size - 1];

if (paddingLength + 12 > size) {
// If we removed this much padding we'd end up with something
// that's too short to be a valid RTP header.
return -1;
}

size -= paddingLength;
}

int numCSRCs = data[0] & 0x0f;

size_t payloadOffset = 12 + 4 * numCSRCs;

if (size < payloadOffset) {
// Not enough data to fit the basic header and all the CSRC entries.
return -1;
}

if (data[0] & 0x10) {
// Header eXtension present.

if (size < payloadOffset + 4) {
// Not enough data to fit the basic header, all CSRC entries
// and the first 4 bytes of the extension header.

return -1;
}

const uint8_t *extensionData = &data[payloadOffset];

size_t extensionLength =
4 * (extensionData[2] << 8 | extensionData[3]);

if (size < payloadOffset + 4 + extensionLength) {
return -1;
}

payloadOffset += 4 + extensionLength;
}

uint32_t srcId = u32at(&data[8]);

sp<ARTPSource> source = findSource(s, srcId);

uint32_t rtpTime = u32at(&data[4]);

sp<AMessage> meta = buffer->meta();
meta->setInt32("ssrc", srcId);
meta->setInt32("rtp-time", rtpTime);
meta->setInt32("PT", data[1] & 0x7f);
meta->setInt32("M", data[1] >> 7);

buffer->setInt32Data(u16at(&data[2]));
buffer->setRange(payloadOffset, size - payloadOffset);
//This is the important part: void ARTPSource::processRTPPacket(const sp<ABuffer> &buffer)
source->processRTPPacket(buffer);
return OK;
}

parseRTP parses the buffered data according to the RTP packet format, and finally calls ARTPSource::processRTPPacket for the follow-up processing.

void ARTPSource::processRTPPacket(const sp<ABuffer> &buffer) {
if (queuePacket(buffer) && mAssembler != NULL) {
mAssembler->onPacketReceived(this);
}
}

processRTPPacket invokes the Assembler to reassemble the packets; the key method here is assembleMore.

void ARTPAssembler::onPacketReceived(const sp<ARTPSource> &source) {
AssemblyStatus status;
for (;;) {
//assembleMore
status = assembleMore(source);
if (status == WRONG_SEQUENCE_NUMBER) {
if (mFirstFailureTimeUs >= 0) {
if (ALooper::GetNowUs() - mFirstFailureTimeUs > 10000ll) {
mFirstFailureTimeUs = -1;
// LOG(VERBOSE) << "waited too long for packet.";
packetLost();
continue;
}
} else {
mFirstFailureTimeUs = ALooper::GetNowUs();
}
break;
} else {
mFirstFailureTimeUs = -1;
if (status == NOT_ENOUGH_DATA) {
break;
}
}
}
}
ARTPAssembler::AssemblyStatus AMPEG4AudioAssembler::assembleMore(
const sp<ARTPSource> &source) {
//Call addPacket
AssemblyStatus status = addPacket(source);
if (status == MALFORMED_PACKET) {
mAccessUnitDamaged = true;
}
return status;
}

What happens here is essentially ordering the out-of-sequence packets and calling submitAccessUnit to submit access-unit (AU) data.

ARTPAssembler::AssemblyStatus AMPEG4AudioAssembler::addPacket(
const sp<ARTPSource> &source) {

List<sp<ABuffer> > *queue = source->queue();
if (queue->empty()) {
return NOT_ENOUGH_DATA;
}
if (mNextExpectedSeqNoValid) {
List<sp<ABuffer> >::iterator it = queue->begin();
while (it != queue->end()) {
if ((uint32_t)(*it)->int32Data() >= mNextExpectedSeqNo) {
break;
}
it = queue->erase(it);
}
if (queue->empty()) {
return NOT_ENOUGH_DATA;
}
}

sp<ABuffer> buffer = *queue->begin();

if (!mNextExpectedSeqNoValid) {
mNextExpectedSeqNoValid = true;
mNextExpectedSeqNo = (uint32_t)buffer->int32Data();
} else if ((uint32_t)buffer->int32Data() != mNextExpectedSeqNo) {
#if VERBOSE
LOG(VERBOSE) << "Not the sequence number I expected";
#endif
return WRONG_SEQUENCE_NUMBER;
}

uint32_t rtpTime;
CHECK(buffer->meta()->findInt32("rtp-time", (int32_t *)&rtpTime));

//Submit the AccessUnit
if (mPackets.size() > 0 && rtpTime != mAccessUnitRTPTime) {
submitAccessUnit();
}
mAccessUnitRTPTime = rtpTime;
//Append the buffer to mPackets
mPackets.push_back(buffer);
queue->erase(queue->begin());
++mNextExpectedSeqNo;
return OK;
}

submitAccessUnit posts the 'accu' notification, handing the result to MyHandler:

void AMPEG4AudioAssembler::submitAccessUnit() {
CHECK(!mPackets.empty());

#if VERBOSE
LOG(VERBOSE) << "Access unit complete (" << mPackets.size() << " packets)";
#endif

sp<ABuffer> accessUnit = MakeCompoundFromPackets(mPackets);
accessUnit = removeLATMFraming(accessUnit);
CopyTimes(accessUnit, *mPackets.begin());

if (mAccessUnitDamaged) {
accessUnit->meta()->setInt32("damaged", true);
}

mPackets.clear();
mAccessUnitDamaged = false;
//Post the 'accu' notification
sp<AMessage> msg = mNotifyMsg->dup();
msg->setBuffer("access-unit", accessUnit);
msg->post();
}
case 'accu':
{
int32_t timeUpdate;
if (msg->findInt32("time-update", &timeUpdate) && timeUpdate) {
size_t trackIndex;
CHECK(msg->findSize("track-index", &trackIndex));

uint32_t rtpTime;
uint64_t ntpTime;
CHECK(msg->findInt32("rtp-time", (int32_t *)&rtpTime));
CHECK(msg->findInt64("ntp-time", (int64_t *)&ntpTime));

onTimeUpdate(trackIndex, rtpTime, ntpTime);
break;
}

int32_t first;
if (msg->findInt32("first-rtcp", &first)) {
mReceivedFirstRTCPPacket = true;
break;
}

if (msg->findInt32("first-rtp", &first)) {
mReceivedFirstRTPPacket = true;
break;
}

++mNumAccessUnitsReceived;
postAccessUnitTimeoutCheck();

size_t trackIndex;
CHECK(msg->findSize("track-index", &trackIndex));

if (trackIndex >= mTracks.size()) {
ALOGV("late packets ignored.");
break;
}

TrackInfo *track = &mTracks.editItemAt(trackIndex);

int32_t eos;
if (msg->findInt32("eos", &eos)) {
ALOGI("received BYE on track index %zu", trackIndex);
if (!mAllTracksHaveTime && dataReceivedOnAllChannels()) {
ALOGI("No time established => fake existing data");

track->mEOSReceived = true;
mTryFakeRTCP = true;
mReceivedFirstRTCPPacket = true;
fakeTimestamps();
} else {
postQueueEOS(trackIndex, ERROR_END_OF_STREAM);
}
return;
}

sp<ABuffer> accessUnit;
//Take out the accessUnit
CHECK(msg->findBuffer("access-unit", &accessUnit));

uint32_t seqNum = (uint32_t)accessUnit->int32Data();

if (mSeekPending) {
ALOGV("we're seeking, dropping stale packet.");
break;
}

if (seqNum < track->mFirstSeqNumInSegment) {
ALOGV("dropping stale access-unit (%d < %d)",
seqNum, track->mFirstSeqNumInSegment);
break;
}

if (track->mNewSegment) {
track->mNewSegment = false;
}
//Call onAccessUnitComplete
onAccessUnitComplete(trackIndex, accessUnit);
break;
}

After 'accu' extracts the AU data, onAccessUnitComplete takes over. Here is that logic:

void onAccessUnitComplete(
int32_t trackIndex, const sp<ABuffer> &accessUnit) {
ALOGV("onAccessUnitComplete track %d", trackIndex);

if(!mPlayResponseParsed){
ALOGI("play response is not parsed, storing accessunit");
TrackInfo *track = &mTracks.editItemAt(trackIndex);
track->mPackets.push_back(accessUnit);
return;
}

handleFirstAccessUnit();

TrackInfo *track = &mTracks.editItemAt(trackIndex);

if (!mAllTracksHaveTime) {
ALOGV("storing accessUnit, no time established yet");
track->mPackets.push_back(accessUnit);
return;
}

while (!track->mPackets.empty()) {
sp<ABuffer> accessUnit = *track->mPackets.begin();
track->mPackets.erase(track->mPackets.begin());

if (addMediaTimestamp(trackIndex, track, accessUnit)) {
//postQueueAccessUnit
postQueueAccessUnit(trackIndex, accessUnit);
}
}

if (addMediaTimestamp(trackIndex, track, accessUnit)) {
postQueueAccessUnit(trackIndex, accessUnit);
}

if (track->mEOSReceived) {
postQueueEOS(trackIndex, ERROR_END_OF_STREAM);
track->mEOSReceived = false;
}
}
void postQueueAccessUnit(
size_t trackIndex, const sp<ABuffer> &accessUnit) {
sp<AMessage> msg = mNotify->dup();
msg->setInt32("what", kWhatAccessUnit);
msg->setSize("trackIndex", trackIndex);
msg->setBuffer("accessUnit", accessUnit);
msg->post();
}

In RTSPSource the access unit is then handed to AnotherPacketSource::queueAccessUnit(accessUnit):

case MyHandler::kWhatAccessUnit:
{
size_t trackIndex;
CHECK(msg->findSize("trackIndex", &trackIndex));

if (mTSParser == NULL) {
CHECK_LT(trackIndex, mTracks.size());
} else {
CHECK_EQ(trackIndex, 0u);
}

sp<ABuffer> accessUnit;
CHECK(msg->findBuffer("accessUnit", &accessUnit));

int32_t damaged;
if (accessUnit->meta()->findInt32("damaged", &damaged)
&& damaged) {
ALOGI("dropping damaged access unit.");
break;
}

if (mTSParser != NULL) {
size_t offset = 0;
status_t err = OK;
while (offset + 188 <= accessUnit->size()) {
err = mTSParser->feedTSPacket(
accessUnit->data() + offset, 188);
if (err != OK) {
break;
}

offset += 188;
}

if (offset < accessUnit->size()) {
err = ERROR_MALFORMED;
}

if (err != OK) {
sp<AnotherPacketSource> source = getSource(false /* audio */);
if (source != NULL) {
source->signalEOS(err);
}

source = getSource(true /* audio */);
if (source != NULL) {
source->signalEOS(err);
}
}
break;
}

TrackInfo *info = &mTracks.editItemAt(trackIndex);

sp<AnotherPacketSource> source = info->mSource;
if (source != NULL) {
uint32_t rtpTime;
CHECK(accessUnit->meta()->findInt32("rtp-time", (int32_t *)&rtpTime));

if (!info->mNPTMappingValid) {
// This is a live stream, we didn't receive any normal
// playtime mapping. We won't map to npt time.
source->queueAccessUnit(accessUnit);
break;
}

int64_t nptUs =
((double)rtpTime - (double)info->mRTPTime)
/ info->mTimeScale
* 1000000ll
+ info->mNormalPlaytimeUs;

accessUnit->meta()->setInt64("timeUs", nptUs);
//...............
source->queueAccessUnit(accessUnit);
}
break;
}

queueAccessUnit(accessUnit) stores the AU data in AnotherPacketSource's mBuffers, from which the decoder pulls it for decoding and playback:

void AnotherPacketSource::queueAccessUnit(const sp<ABuffer> &buffer) {
int32_t damaged;
if (buffer->meta()->findInt32("damaged", &damaged) && damaged) {
// LOG(VERBOSE) << "discarding damaged AU";
return;
}

Mutex::Autolock autoLock(mLock);
mBuffers.push_back(buffer);
mCondition.signal();

int32_t discontinuity;
if (buffer->meta()->findInt32("discontinuity", &discontinuity)){
ALOGV("queueing a discontinuity with queueAccessUnit");

mLastQueuedTimeUs = 0ll;
mEOSResult = OK;
mLatestEnqueuedMeta = NULL;

mDiscontinuitySegments.push_back(DiscontinuitySegment());
return;
}

int64_t lastQueuedTimeUs;
CHECK(buffer->meta()->findInt64("timeUs", &lastQueuedTimeUs));
mLastQueuedTimeUs = lastQueuedTimeUs;
ALOGV("queueAccessUnit timeUs=%" PRIi64 " us (%.2f secs)",
mLastQueuedTimeUs, mLastQueuedTimeUs / 1E6);

// CHECK(!mDiscontinuitySegments.empty());
DiscontinuitySegment &tailSeg = *(--mDiscontinuitySegments.end());
if (lastQueuedTimeUs > tailSeg.mMaxEnqueTimeUs) {
tailSeg.mMaxEnqueTimeUs = lastQueuedTimeUs;
}
if (tailSeg.mMaxDequeTimeUs == -1) {
tailSeg.mMaxDequeTimeUs = lastQueuedTimeUs;
}

if (mLatestEnqueuedMeta == NULL) {
mLatestEnqueuedMeta = buffer->meta()->dup();
} else {
int64_t latestTimeUs = 0;
int64_t frameDeltaUs = 0;
CHECK(mLatestEnqueuedMeta->findInt64("timeUs", &latestTimeUs));
if (lastQueuedTimeUs > latestTimeUs) {
mLatestEnqueuedMeta = buffer->meta()->dup();
frameDeltaUs = lastQueuedTimeUs - latestTimeUs;
mLatestEnqueuedMeta->setInt64("durationUs", frameDeltaUs);
} else if (!mLatestEnqueuedMeta->findInt64("durationUs", &frameDeltaUs)) {
// For B frames
frameDeltaUs = latestTimeUs - lastQueuedTimeUs;
mLatestEnqueuedMeta->setInt64("durationUs", frameDeltaUs);
}
}
}
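
The decoder side drains this queue through dequeueAccessUnit, which waits on the same condition variable that queueAccessUnit signals. A simplified sketch of the consumer path (discontinuity and error handling elided):

//sketch: block until a buffer has been queued, then hand it to the caller
status_t AnotherPacketSource::dequeueAccessUnit(sp<ABuffer> *buffer) {
buffer->clear();
Mutex::Autolock autoLock(mLock);
while (mEOSResult == OK && mBuffers.empty()) {
mCondition.wait(mLock);
}
if (!mBuffers.empty()) {
*buffer = *mBuffers.begin();
mBuffers.erase(mBuffers.begin());
return OK;
}
return mEOSResult;
}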

Now the playback-start flow. This part overlaps with the HLS walkthrough; it is pasted here again for easy reference. Roughly, the job is to initialize the decoders and then start feeding data from the input buffers into them.

void NuPlayer::start() {
(new AMessage(kWhatStart, this))->post();
}
case kWhatStart:
{
ALOGV("kWhatStart");
if (mStarted) {
//...............
} else {
onStart();
}
mPausedByClient = false;
break;
}
void NuPlayer::onStart(int64_t startPositionUs) {
if (!mSourceStarted) {
mSourceStarted = true;
mSource->start();
}

mOffloadAudio = false;
mAudioEOS = false;
mVideoEOS = false;
mStarted = true;

uint32_t flags = 0;

sp<MetaData> audioMeta = mSource->getFormatMeta(true /* audio */);
audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
if (mAudioSink != NULL) {
streamType = mAudioSink->getAudioStreamType();
}

sp<AMessage> videoFormat = mSource->getFormat(false /* audio */);

sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
++mRendererGeneration;
notify->setInt32("generation", mRendererGeneration);
mRenderer = new Renderer(mAudioSink, notify, flags);
mRendererLooper = new ALooper;
mRendererLooper->setName("NuPlayerRenderer");
mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
mRendererLooper->registerHandler(mRenderer);

status_t err = mRenderer->setPlaybackSettings(mPlaybackSettings);

float rate = getFrameRate();
if (rate > 0) {
mRenderer->setVideoFrameRate(rate);
}

if (mVideoDecoder != NULL) {
mVideoDecoder->setRenderer(mRenderer);
}
if (mAudioDecoder != NULL) {
mAudioDecoder->setRenderer(mRenderer);
}

postScanSources();
}
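
One detail worth noting in onStart: each time a new Renderer is created, mRendererGeneration is incremented and stamped into the notify message. Handlers later compare a message's generation against the current value and silently drop events from a torn-down renderer. A sketch of the pattern (hypothetical handler name):

//sketch: ignore notifications that belong to a previous renderer instance
void onRendererNotify(const sp<AMessage> &msg) {
int32_t generation;
CHECK(msg->findInt32("generation", &generation));
if (generation != mRendererGeneration) {
return; //stale message from an old renderer, ignore it
}
//...handle the actual event...
}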

Right after that, let's look at the decoder initialization part:

void NuPlayer::postScanSources() {
if (mScanSourcesPending) {
return;
}
sp<AMessage> msg = new AMessage(kWhatScanSources, this);
msg->setInt32("generation", mScanSourcesGeneration);
msg->post();
mScanSourcesPending = true;
}
case kWhatScanSources:
{
int32_t generation;

mScanSourcesPending = false;

bool mHadAnySourcesBefore =
(mAudioDecoder != NULL) || (mVideoDecoder != NULL);

// initialize video before audio because successful initialization of
// video may change deep buffer mode of audio.
if (mSurface != NULL) {
instantiateDecoder(false, &mVideoDecoder);
}

// Don't try to re-open audio sink if there's an existing decoder.
if (mAudioSink != NULL && mAudioDecoder == NULL) {
instantiateDecoder(true, &mAudioDecoder);
}
}
status_t NuPlayer::instantiateDecoder(bool audio, sp<DecoderBase> *decoder) {

//fetch the track format
sp<AMessage> format = mSource->getFormat(audio);
format->setInt32("priority", 0 /* realtime */);

if (audio) {
sp<AMessage> notify = new AMessage(kWhatAudioNotify, this);
++mAudioDecoderGeneration;
notify->setInt32("generation", mAudioDecoderGeneration);
determineAudioModeChange();
if (mOffloadAudio) {
//....................
} else {
*decoder = new Decoder(notify, mSource, mPID, mRenderer);
}
} else {
sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
++mVideoDecoderGeneration;
notify->setInt32("generation", mVideoDecoderGeneration);
*decoder = new Decoder(notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);
//...........................
}
//initialize the decoder
(*decoder)->init();
//configure the decoder
(*decoder)->configure(format);
//.........
return OK;
}

Here the decoder is created and initialized.

void NuPlayer::DecoderBase::configure(const sp<AMessage> &format) {
sp<AMessage> msg = new AMessage(kWhatConfigure, this);
msg->setMessage("format", format);
msg->post();
}

void NuPlayer::DecoderBase::init() {
mDecoderLooper->registerHandler(this);
}

void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {

//create the MediaCodec
mCodec = MediaCodec::CreateByType(mCodecLooper, mime.c_str(), false /* encoder */, NULL /* err */, mPid);
//configure the MediaCodec
err = mCodec->configure(format, mSurface, NULL /* crypto */, 0 /* flags */);
//for video, record the width and height
if (!mIsAudio) {
int32_t width, height;
if (mOutputFormat->findInt32("width", &width)&& mOutputFormat->findInt32("height", &height)) {
mStats->setInt32("width", width);
mStats->setInt32("height", height);
}
}
//start the MediaCodec
err = mCodec->start();
}
sp<MediaCodec> MediaCodec::CreateByType(const sp<ALooper> &looper, const char *mime, bool encoder, status_t *err, pid_t pid) {
sp<MediaCodec> codec = new MediaCodec(looper, pid);
const status_t ret = codec->init(mime, true /* nameIsType */, encoder);
return ret == OK ? codec : NULL; // NULL deallocates codec.
}

The following shows that mCodec here is an ACodec instance:

status_t MediaCodec::init(const AString &name, bool nameIsType, bool encoder) {
mResourceManagerService->init();
if (nameIsType || !strncasecmp(name.c_str(), "omx.", 4)) {
//create the codec based on the name/type
mCodec = new ACodec;
} else if (!nameIsType && !strncasecmp(name.c_str(), "android.filter.", 15)) {
//...
} else {
//...
}
sp<AMessage> msg = new AMessage(kWhatInit, this);
msg->setString("name", name);
msg->setInt32("nameIsType", nameIsType);
if (nameIsType) {
msg->setInt32("encoder", encoder);
}
sp<AMessage> response;
status_t err = PostAndAwaitResponse(msg, &response);
return err;
}
case kWhatInit:
{
//....................
mCodec->initiateAllocateComponent(format);
break;
}
void ACodec::initiateAllocateComponent(const sp<AMessage> &msg) {
msg->setWhat(kWhatAllocateComponent);
msg->setTarget(this);
msg->post();
}
case ACodec::kWhatAllocateComponent:
{
onAllocateComponent(msg);
handled = true;
break;
}

Here the codec component is instantiated and its state machine set up:

bool ACodec::UninitializedState::onAllocateComponent(const sp<AMessage> &msg) {

Vector<OMXCodec::CodecNameAndQuirks> matchingCodecs;
AString mime;
AString componentName;
uint32_t quirks = 0;
int32_t encoder = false;
if (msg->findString("componentName", &componentName)) {
ssize_t index = matchingCodecs.add();
OMXCodec::CodecNameAndQuirks *entry = &matchingCodecs.editItemAt(index);
entry->mName = String8(componentName.c_str());

if (!OMXCodec::findCodecQuirks(componentName.c_str(), &entry->mQuirks)) {
entry->mQuirks = 0;
}
} else {
CHECK(msg->findString("mime", &mime));
if (!msg->findInt32("encoder", &encoder)) {
encoder = false;
}
OMXCodec::findMatchingCodecs(
mime.c_str(),
encoder, // createEncoder
NULL, // matchComponentName
0, // flags
&matchingCodecs);
}

sp<CodecObserver> observer = new CodecObserver;
IOMX::node_id node = 0;

status_t err = NAME_NOT_FOUND;
for (size_t matchIndex = 0; matchIndex < matchingCodecs.size();++matchIndex) {
componentName = matchingCodecs.itemAt(matchIndex).mName.string();
quirks = matchingCodecs.itemAt(matchIndex).mQuirks;

pid_t tid = gettid();
int prevPriority = androidGetThreadPriority(tid);
androidSetThreadPriority(tid, ANDROID_PRIORITY_FOREGROUND);
err = omx->allocateNode(componentName.c_str(), observer, &node);
androidSetThreadPriority(tid, prevPriority);
//keep the first component that allocates successfully
if (err == OK) {
break;
}
node = 0;
}

notify = new AMessage(kWhatOMXMessageList, mCodec);
observer->setNotificationMessage(notify);

mCodec->mComponentName = componentName;
mCodec->mRenderTracker.setComponentName(componentName);
mCodec->mFlags = 0;
mCodec->mQuirks = quirks;
mCodec->mOMX = omx;
mCodec->mNode = node;

{
sp<AMessage> notify = mCodec->mNotify->dup();
notify->setInt32("what", CodecBase::kWhatComponentAllocated);
notify->setString("componentName", mCodec->mComponentName.c_str());
notify->post();
}

mCodec->changeState(mCodec->mLoadedState);
return true;
}

Next, the decoder configuration:

status_t MediaCodec::configure(
const sp<AMessage> &format,
const sp<Surface> &surface,
const sp<ICrypto> &crypto,
uint32_t flags) {
sp<AMessage> msg = new AMessage(kWhatConfigure, this);

if (mIsVideo) {
format->findInt32("width", &mVideoWidth);
format->findInt32("height", &mVideoHeight);
if (!format->findInt32("rotation-degrees", &mRotationDegrees)) {
mRotationDegrees = 0;
}
}

msg->setMessage("format", format);
msg->setInt32("flags", flags);
msg->setObject("surface", surface);

//.....................
// save msg for reset
mConfigureMsg = msg;
//.....................
for (int i = 0; i <= kMaxRetry; ++i) {
if (i > 0) {
// Don't try to reclaim resource for the first time.
if (!mResourceManagerService->reclaimResource(resources)) {
break;
}
}
sp<AMessage> response;
err = PostAndAwaitResponse(msg, &response);
//.....................
}
return err;
}
case kWhatConfigure:
{
sp<AReplyToken> replyID;
CHECK(msg->senderAwaitsResponse(&replyID));


sp<RefBase> obj;
CHECK(msg->findObject("surface", &obj));

sp<AMessage> format;
CHECK(msg->findMessage("format", &format));

int32_t push;
if (msg->findInt32("push-blank-buffers-on-shutdown", &push) && push != 0) {
mFlags |= kFlagPushBlankBuffersOnShutdown;
}

if (obj != NULL) {
format->setObject("native-window", obj);
status_t err = handleSetSurface(static_cast<Surface *>(obj.get()));
if (err != OK) {
PostReplyWithError(replyID, err);
break;
}
} else {
handleSetSurface(NULL);
}

mReplyID = replyID;
setState(CONFIGURING);

void *crypto;

uint32_t flags;
CHECK(msg->findInt32("flags", (int32_t *)&flags));

if (flags & CONFIGURE_FLAG_ENCODE) {
format->setInt32("encoder", true);
mFlags |= kFlagIsEncoder;
}
//this is the key call
mCodec->initiateConfigureComponent(format);
break;
}
void ACodec::initiateConfigureComponent(const sp<AMessage> &msg) {
msg->setWhat(kWhatConfigureComponent);
msg->setTarget(this);
msg->post();
}
case ACodec::kWhatConfigureComponent:
{
onConfigureComponent(msg);
handled = true;
break;
}
bool ACodec::LoadedState::onConfigureComponent(
const sp<AMessage> &msg) {
ALOGV("onConfigureComponent");

CHECK(mCodec->mNode != 0);

status_t err = OK;
AString mime;
if (!msg->findString("mime", &mime)) {
err = BAD_VALUE;
} else {
err = mCodec->configureCodec(mime.c_str(), msg);
}
{
sp<AMessage> notify = mCodec->mNotify->dup();
notify->setInt32("what", CodecBase::kWhatComponentConfigured);
notify->setMessage("input-format", mCodec->mInputFormat);
notify->setMessage("output-format", mCodec->mOutputFormat);
notify->post();
}

return true;
}
case CodecBase::kWhatComponentConfigured:
{
if (mState == UNINITIALIZED || mState == INITIALIZED) {
// In case a kWhatError message came in and replied with error,
// we log a warning and ignore.
ALOGW("configure interrupted by error, current state %d", mState);
break;
}
CHECK_EQ(mState, CONFIGURING);

// reset input surface flag
mHaveInputSurface = false;

CHECK(msg->findMessage("input-format", &mInputFormat));
CHECK(msg->findMessage("output-format", &mOutputFormat));

int32_t usingSwRenderer;
if (mOutputFormat->findInt32("using-sw-renderer", &usingSwRenderer)
&& usingSwRenderer) {
mFlags |= kFlagUsesSoftwareRenderer;
}
setState(CONFIGURED);
(new AMessage)->postReply(mReplyID);
break;
}

This is where the truly detailed decoder configuration happens; it deserves a dedicated study when time allows. For now this post keeps to the overall flow:

status_t ACodec::configureCodec(
const char *mime, const sp<AMessage> &msg) {
int32_t encoder;
if (!msg->findInt32("encoder", &encoder)) {
encoder = false;
}

sp<AMessage> inputFormat = new AMessage();
sp<AMessage> outputFormat = mNotify->dup(); // will use this for kWhatOutputFormatChanged

mIsEncoder = encoder;

mInputMetadataType = kMetadataBufferTypeInvalid;
mOutputMetadataType = kMetadataBufferTypeInvalid;

status_t err = setComponentRole(encoder /* isEncoder */, mime);

if (err != OK) {
return err;
}

int32_t bitRate = 0;
// FLAC encoder doesn't need a bitrate, other encoders do
if (encoder && strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)
&& !msg->findInt32("bitrate", &bitRate)) {
return INVALID_OPERATION;
}

int32_t storeMeta;
if (encoder
&& msg->findInt32("store-metadata-in-buffers", &storeMeta)
&& storeMeta != 0) {
err = mOMX->storeMetaDataInBuffers(mNode, kPortIndexInput, OMX_TRUE, &mInputMetadataType);
if (err != OK) {
ALOGE("[%s] storeMetaDataInBuffers (input) failed w/ err %d",
mComponentName.c_str(), err);

return err;
}
// For this specific case we could be using camera source even if storeMetaDataInBuffers
// returns Gralloc source. Pretend that we are; this will force us to use nBufferSize.
if (mInputMetadataType == kMetadataBufferTypeGrallocSource) {
mInputMetadataType = kMetadataBufferTypeCameraSource;
}

uint32_t usageBits;
if (mOMX->getParameter(
mNode, (OMX_INDEXTYPE)OMX_IndexParamConsumerUsageBits,
&usageBits, sizeof(usageBits)) == OK) {
inputFormat->setInt32(
"using-sw-read-often", !!(usageBits & GRALLOC_USAGE_SW_READ_OFTEN));
}
}

int32_t prependSPSPPS = 0;
if (encoder
&& msg->findInt32("prepend-sps-pps-to-idr-frames", &prependSPSPPS)
&& prependSPSPPS != 0) {
OMX_INDEXTYPE index;
err = mOMX->getExtensionIndex(
mNode,
"OMX.google.android.index.prependSPSPPSToIDRFrames",
&index);

if (err == OK) {
PrependSPSPPSToIDRFramesParams params;
InitOMXParams(&params);
params.bEnable = OMX_TRUE;

err = mOMX->setParameter(
mNode, index, &params, sizeof(params));
}

if (err != OK) {
ALOGE("Encoder could not be configured to emit SPS/PPS before "
"IDR frames. (err %d)", err);

return err;
}
}

// Only enable metadata mode on encoder output if encoder can prepend
// sps/pps to idr frames, since in metadata mode the bitstream is in an
// opaque handle, to which we don't have access.
int32_t video = !strncasecmp(mime, "video/", 6);
mIsVideo = video;
if (encoder && video) {
OMX_BOOL enable = (OMX_BOOL) (prependSPSPPS
&& msg->findInt32("store-metadata-in-buffers-output", &storeMeta)
&& storeMeta != 0);

err = mOMX->storeMetaDataInBuffers(mNode, kPortIndexOutput, enable, &mOutputMetadataType);
if (err != OK) {
ALOGE("[%s] storeMetaDataInBuffers (output) failed w/ err %d",
mComponentName.c_str(), err);
}

if (!msg->findInt64(
"repeat-previous-frame-after",
&mRepeatFrameDelayUs)) {
mRepeatFrameDelayUs = -1ll;
}

if (!msg->findInt64("max-pts-gap-to-encoder", &mMaxPtsGapUs)) {
mMaxPtsGapUs = -1ll;
}

if (!msg->findFloat("max-fps-to-encoder", &mMaxFps)) {
mMaxFps = -1;
}

if (!msg->findInt64("time-lapse", &mTimePerCaptureUs)) {
mTimePerCaptureUs = -1ll;
}

if (!msg->findInt32(
"create-input-buffers-suspended",
(int32_t*)&mCreateInputBuffersSuspended)) {
mCreateInputBuffersSuspended = false;
}
}

// NOTE: we only use native window for video decoders
sp<RefBase> obj;
bool haveNativeWindow = msg->findObject("native-window", &obj)
&& obj != NULL && video && !encoder;
mLegacyAdaptiveExperiment = false;
if (video && !encoder) {
inputFormat->setInt32("adaptive-playback", false);

int32_t usageProtected;
if (msg->findInt32("protected", &usageProtected) && usageProtected) {
if (!haveNativeWindow) {
ALOGE("protected output buffers must be sent to an ANativeWindow");
return PERMISSION_DENIED;
}
mFlags |= kFlagIsGrallocUsageProtected;
mFlags |= kFlagPushBlankBuffersToNativeWindowOnShutdown;
}
}
if (haveNativeWindow) {
sp<ANativeWindow> nativeWindow =
static_cast<ANativeWindow *>(static_cast<Surface *>(obj.get()));

// START of temporary support for automatic FRC - THIS WILL BE REMOVED
int32_t autoFrc;
if (msg->findInt32("auto-frc", &autoFrc)) {
bool enabled = autoFrc;
OMX_CONFIG_BOOLEANTYPE config;
InitOMXParams(&config);
config.bEnabled = (OMX_BOOL)enabled;
status_t temp = mOMX->setConfig(
mNode, (OMX_INDEXTYPE)OMX_IndexConfigAutoFramerateConversion,
&config, sizeof(config));
if (temp == OK) {
outputFormat->setInt32("auto-frc", enabled);
} else if (enabled) {
ALOGI("codec does not support requested auto-frc (err %d)", temp);
}
}
// END of temporary support for automatic FRC

int32_t tunneled;
if (msg->findInt32("feature-tunneled-playback", &tunneled) &&
tunneled != 0) {
ALOGI("Configuring TUNNELED video playback.");
mTunneled = true;

int32_t audioHwSync = 0;
if (!msg->findInt32("audio-hw-sync", &audioHwSync)) {
ALOGW("No Audio HW Sync provided for video tunnel");
}
err = configureTunneledVideoPlayback(audioHwSync, nativeWindow);
if (err != OK) {
ALOGE("configureTunneledVideoPlayback(%d,%p) failed!",
audioHwSync, nativeWindow.get());
return err;
}

int32_t maxWidth = 0, maxHeight = 0;
if (msg->findInt32("max-width", &maxWidth) &&
msg->findInt32("max-height", &maxHeight)) {

err = mOMX->prepareForAdaptivePlayback(
mNode, kPortIndexOutput, OMX_TRUE, maxWidth, maxHeight);
if (err != OK) {
ALOGW("[%s] prepareForAdaptivePlayback failed w/ err %d",
mComponentName.c_str(), err);
// allow failure
err = OK;
} else {
inputFormat->setInt32("max-width", maxWidth);
inputFormat->setInt32("max-height", maxHeight);
inputFormat->setInt32("adaptive-playback", true);
}
}
} else {
ALOGV("Configuring CPU controlled video playback.");
mTunneled = false;

// Explicity reset the sideband handle of the window for
// non-tunneled video in case the window was previously used
// for a tunneled video playback.
err = native_window_set_sideband_stream(nativeWindow.get(), NULL);
if (err != OK) {
ALOGE("set_sideband_stream(NULL) failed! (err %d).", err);
return err;
}

// Always try to enable dynamic output buffers on native surface
err = mOMX->storeMetaDataInBuffers(
mNode, kPortIndexOutput, OMX_TRUE, &mOutputMetadataType);
if (err != OK) {
ALOGE("[%s] storeMetaDataInBuffers failed w/ err %d",
mComponentName.c_str(), err);

// if adaptive playback has been requested, try JB fallback
// NOTE: THIS FALLBACK MECHANISM WILL BE REMOVED DUE TO ITS
// LARGE MEMORY REQUIREMENT

// we will not do adaptive playback on software accessed
// surfaces as they never had to respond to changes in the
// crop window, and we don't trust that they will be able to.
int usageBits = 0;
bool canDoAdaptivePlayback;

if (nativeWindow->query(
nativeWindow.get(),
NATIVE_WINDOW_CONSUMER_USAGE_BITS,
&usageBits) != OK) {
canDoAdaptivePlayback = false;
} else {
canDoAdaptivePlayback =
(usageBits &
(GRALLOC_USAGE_SW_READ_MASK |
GRALLOC_USAGE_SW_WRITE_MASK)) == 0;
}

int32_t maxWidth = 0, maxHeight = 0;
if (canDoAdaptivePlayback &&
msg->findInt32("max-width", &maxWidth) &&
msg->findInt32("max-height", &maxHeight)) {
ALOGV("[%s] prepareForAdaptivePlayback(%dx%d)",
mComponentName.c_str(), maxWidth, maxHeight);

err = mOMX->prepareForAdaptivePlayback(
mNode, kPortIndexOutput, OMX_TRUE, maxWidth,
maxHeight);
ALOGW_IF(err != OK,
"[%s] prepareForAdaptivePlayback failed w/ err %d",
mComponentName.c_str(), err);

if (err == OK) {
inputFormat->setInt32("max-width", maxWidth);
inputFormat->setInt32("max-height", maxHeight);
inputFormat->setInt32("adaptive-playback", true);
}
}
// allow failure
err = OK;
} else {
ALOGV("[%s] storeMetaDataInBuffers succeeded",
mComponentName.c_str());
CHECK(storingMetadataInDecodedBuffers());
mLegacyAdaptiveExperiment = ADebug::isExperimentEnabled(
"legacy-adaptive", !msg->contains("no-experiments"));

inputFormat->setInt32("adaptive-playback", true);
}

int32_t push;
if (msg->findInt32("push-blank-buffers-on-shutdown", &push)
&& push != 0) {
mFlags |= kFlagPushBlankBuffersToNativeWindowOnShutdown;
}
}

int32_t rotationDegrees;
if (msg->findInt32("rotation-degrees", &rotationDegrees)) {
mRotationDegrees = rotationDegrees;
} else {
mRotationDegrees = 0;
}
}

if (video) {
// determine need for software renderer
bool usingSwRenderer = false;
if (haveNativeWindow && mComponentName.startsWith("OMX.google.")) {
usingSwRenderer = true;
haveNativeWindow = false;
}

if (encoder) {
err = setupVideoEncoder(mime, msg);
} else {
err = setupVideoDecoder(mime, msg, haveNativeWindow);
}

if (err != OK) {
return err;
}

if (haveNativeWindow) {
mNativeWindow = static_cast<Surface *>(obj.get());
}

// initialize native window now to get actual output format
// TODO: this is needed for some encoders even though they don't use native window
err = initNativeWindow();
if (err != OK) {
return err;
}

// fallback for devices that do not handle flex-YUV for native buffers
if (haveNativeWindow) {
int32_t requestedColorFormat = OMX_COLOR_FormatUnused;
if (msg->findInt32("color-format", &requestedColorFormat) &&
requestedColorFormat == OMX_COLOR_FormatYUV420Flexible) {
status_t err = getPortFormat(kPortIndexOutput, outputFormat);
if (err != OK) {
return err;
}
int32_t colorFormat = OMX_COLOR_FormatUnused;
OMX_U32 flexibleEquivalent = OMX_COLOR_FormatUnused;
if (!outputFormat->findInt32("color-format", &colorFormat)) {
ALOGE("ouptut port did not have a color format (wrong domain?)");
return BAD_VALUE;
}
ALOGD("[%s] Requested output format %#x and got %#x.",
mComponentName.c_str(), requestedColorFormat, colorFormat);
if (!isFlexibleColorFormat(
mOMX, mNode, colorFormat, haveNativeWindow, &flexibleEquivalent)
|| flexibleEquivalent != (OMX_U32)requestedColorFormat) {
// device did not handle flex-YUV request for native window, fall back
// to SW renderer
ALOGI("[%s] Falling back to software renderer", mComponentName.c_str());
mNativeWindow.clear();
mNativeWindowUsageBits = 0;
haveNativeWindow = false;
usingSwRenderer = true;
if (storingMetadataInDecodedBuffers()) {
err = mOMX->storeMetaDataInBuffers(
mNode, kPortIndexOutput, OMX_FALSE, &mOutputMetadataType);
mOutputMetadataType = kMetadataBufferTypeInvalid; // just in case
// TODO: implement adaptive-playback support for bytebuffer mode.
// This is done by SW codecs, but most HW codecs don't support it.
inputFormat->setInt32("adaptive-playback", false);
}
if (err == OK) {
err = mOMX->enableGraphicBuffers(mNode, kPortIndexOutput, OMX_FALSE);
}
if (mFlags & kFlagIsGrallocUsageProtected) {
// fallback is not supported for protected playback
err = PERMISSION_DENIED;
} else if (err == OK) {
err = setupVideoDecoder(mime, msg, false);
}
}
}
}

if (usingSwRenderer) {
outputFormat->setInt32("using-sw-renderer", 1);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
int32_t numChannels, sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
// Since we did not always check for these, leave them optional
// and have the decoder figure it all out.
err = OK;
} else {
err = setupRawAudioFormat(
encoder ? kPortIndexInput : kPortIndexOutput,
sampleRate,
numChannels);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC)) {
int32_t numChannels, sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
int32_t isADTS, aacProfile;
int32_t sbrMode;
int32_t maxOutputChannelCount;
int32_t pcmLimiterEnable;
drcParams_t drc;
if (!msg->findInt32("is-adts", &isADTS)) {
isADTS = 0;
}
if (!msg->findInt32("aac-profile", &aacProfile)) {
aacProfile = OMX_AUDIO_AACObjectNull;
}
if (!msg->findInt32("aac-sbr-mode", &sbrMode)) {
sbrMode = -1;
}

if (!msg->findInt32("aac-max-output-channel_count", &maxOutputChannelCount)) {
maxOutputChannelCount = -1;
}
if (!msg->findInt32("aac-pcm-limiter-enable", &pcmLimiterEnable)) {
// value is unknown
pcmLimiterEnable = -1;
}
if (!msg->findInt32("aac-encoded-target-level", &drc.encodedTargetLevel)) {
// value is unknown
drc.encodedTargetLevel = -1;
}
if (!msg->findInt32("aac-drc-cut-level", &drc.drcCut)) {
// value is unknown
drc.drcCut = -1;
}
if (!msg->findInt32("aac-drc-boost-level", &drc.drcBoost)) {
// value is unknown
drc.drcBoost = -1;
}
if (!msg->findInt32("aac-drc-heavy-compression", &drc.heavyCompression)) {
// value is unknown
drc.heavyCompression = -1;
}
if (!msg->findInt32("aac-target-ref-level", &drc.targetRefLevel)) {
// value is unknown
drc.targetRefLevel = -1;
}

err = setupAACCodec(
encoder, numChannels, sampleRate, bitRate, aacProfile,
isADTS != 0, sbrMode, maxOutputChannelCount, drc,
pcmLimiterEnable);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)) {
err = setupAMRCodec(encoder, false /* isWAMR */, bitRate);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_WB)) {
err = setupAMRCodec(encoder, true /* isWAMR */, bitRate);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_G711_ALAW)
|| !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_G711_MLAW)) {
// These are PCM-like formats with a fixed sample rate but
// a variable number of channels.

int32_t numChannels;
if (!msg->findInt32("channel-count", &numChannels)) {
err = INVALID_OPERATION;
} else {
int32_t sampleRate;
if (!msg->findInt32("sample-rate", &sampleRate)) {
sampleRate = 8000;
}
err = setupG711Codec(encoder, sampleRate, numChannels);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)) {
int32_t numChannels = 0, sampleRate = 0, compressionLevel = -1;
if (encoder &&
(!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate))) {
ALOGE("missing channel count or sample rate for FLAC encoder");
err = INVALID_OPERATION;
} else {
if (encoder) {
if (!msg->findInt32(
"complexity", &compressionLevel) &&
!msg->findInt32(
"flac-compression-level", &compressionLevel)) {
compressionLevel = 5; // default FLAC compression level
} else if (compressionLevel < 0) {
ALOGW("compression level %d outside [0..8] range, "
"using 0",
compressionLevel);
compressionLevel = 0;
} else if (compressionLevel > 8) {
ALOGW("compression level %d outside [0..8] range, "
"using 8",
compressionLevel);
compressionLevel = 8;
}
}
err = setupFlacCodec(
encoder, numChannels, sampleRate, compressionLevel);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
int32_t numChannels, sampleRate;
if (encoder
|| !msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
err = setupRawAudioFormat(kPortIndexInput, sampleRate, numChannels);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AC3)) {
int32_t numChannels;
int32_t sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
err = setupAC3Codec(encoder, numChannels, sampleRate);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_EAC3)) {
int32_t numChannels;
int32_t sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
err = setupEAC3Codec(encoder, numChannels, sampleRate);
}
}

if (err != OK) {
return err;
}

if (!msg->findInt32("encoder-delay", &mEncoderDelay)) {
mEncoderDelay = 0;
}

if (!msg->findInt32("encoder-padding", &mEncoderPadding)) {
mEncoderPadding = 0;
}

if (msg->findInt32("channel-mask", &mChannelMask)) {
mChannelMaskPresent = true;
} else {
mChannelMaskPresent = false;
}

int32_t maxInputSize;
if (msg->findInt32("max-input-size", &maxInputSize)) {
err = setMinBufferSize(kPortIndexInput, (size_t)maxInputSize);
} else if (!strcmp("OMX.Nvidia.aac.decoder", mComponentName.c_str())) {
err = setMinBufferSize(kPortIndexInput, 8192); // XXX
}

int32_t priority;
if (msg->findInt32("priority", &priority)) {
err = setPriority(priority);
}

int32_t rateInt = -1;
float rateFloat = -1;
if (!msg->findFloat("operating-rate", &rateFloat)) {
msg->findInt32("operating-rate", &rateInt);
rateFloat = (float)rateInt; // 16MHz (FLINTMAX) is OK for upper bound.
}
if (rateFloat > 0) {
err = setOperatingRate(rateFloat, video);
}

mBaseOutputFormat = outputFormat;

err = getPortFormat(kPortIndexInput, inputFormat);
if (err == OK) {
err = getPortFormat(kPortIndexOutput, outputFormat);
if (err == OK) {
mInputFormat = inputFormat;
mOutputFormat = outputFormat;
}
}
return err;
}
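
One recurring idiom in configureCodec: every OMX parameter struct handed to mOMX->getParameter/setParameter is first initialized with InitOMXParams, which zeroes the struct and stamps in its size and the OMX IL version. A sketch of that helper (simplified):

//sketch: every OMX struct carries its own size and IL version
template<class T>
static void InitOMXParams(T *params) {
memset(params, 0, sizeof(T));
params->nSize = sizeof(T);
params->nVersion.s.nVersionMajor = 1;
params->nVersion.s.nVersionMinor = 0;
params->nVersion.s.nRevision = 0;
params->nVersion.s.nStep = 0;
}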

At this point the decoder's initialization and configuration are complete. Let's look at the decoder's start phase:

status_t MediaCodec::start() {
sp<AMessage> msg = new AMessage(kWhatStart, this);

status_t err;
Vector<MediaResource> resources;
const char *type = (mFlags & kFlagIsSecure) ?
kResourceSecureCodec : kResourceNonSecureCodec;
const char *subtype = mIsVideo ? kResourceVideoCodec : kResourceAudioCodec;
resources.push_back(MediaResource(String8(type), String8(subtype), 1));
// Don't know the buffer size at this point, but it's fine to use 1 because
// the reclaimResource call doesn't consider the requester's buffer size for now.
resources.push_back(MediaResource(String8(kResourceGraphicMemory), 1));
for (int i = 0; i <= kMaxRetry; ++i) {
if (i > 0) {
// Don't try to reclaim resource for the first time.
if (!mResourceManagerService->reclaimResource(resources)) {
break;
}
// Recover codec from previous error before retry start.
err = reset();
if (err != OK) {
ALOGE("retrying start: failed to reset codec");
break;
}
sp<AMessage> response;
err = PostAndAwaitResponse(mConfigureMsg, &response);
if (err != OK) {
ALOGE("retrying start: failed to configure codec");
break;
}
}
sp<AMessage> response;
err = PostAndAwaitResponse(msg, &response);
if (!isResourceError(err)) {
break;
}
}
return err;
}
case kWhatStart:
{
sp<AReplyToken> replyID;
CHECK(msg->senderAwaitsResponse(&replyID));

if (mState == FLUSHED) {
setState(STARTED);
if (mHavePendingInputBuffers) {
onInputBufferAvailable();
mHavePendingInputBuffers = false;
}
//this is the part we care about
mCodec->signalResume();
//..................
PostReplyWithError(replyID, OK);
break;
} else if (mState != CONFIGURED) {
PostReplyWithError(replyID, INVALID_OPERATION);
break;
}

mReplyID = replyID;
setState(STARTING);

mCodec->initiateStart();
break;
}

First, initiateStart is called to initialize the decoder's state:

void ACodec::initiateStart() {
(new AMessage(kWhatStart, this))->post();
}
case ACodec::kWhatStart:
{
onStart();
handled = true;
break;
}
void ACodec::LoadedState::onStart() {
ALOGV("onStart");

status_t err = mCodec->mOMX->sendCommand(mCodec->mNode, OMX_CommandStateSet, OMX_StateIdle);
if (err != OK) {
mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
} else {
mCodec->changeState(mCodec->mLoadedToIdleState);
}
}

Then the codec starts pulling in data to decode:

void ACodec::signalResume() {
(new AMessage(kWhatResume, this))->post();
}
case kWhatResume:
{
resume();
handled = true;
break;
}
void ACodec::ExecutingState::resume() {

submitOutputBuffers();
// Post all available input buffers
if (mCodec->mBuffers[kPortIndexInput].size() == 0u) {
ALOGW("[%s] we don't have any input buffers to resume", mCodec->mComponentName.c_str());
}

for (size_t i = 0; i < mCodec->mBuffers[kPortIndexInput].size(); i++) {
BufferInfo *info = &mCodec->mBuffers[kPortIndexInput].editItemAt(i);
if (info->mStatus == BufferInfo::OWNED_BY_US) {
postFillThisBuffer(info);
}
}
mActive = true;
}
void ACodec::BaseState::postFillThisBuffer(BufferInfo *info) {
if (mCodec->mPortEOS[kPortIndexInput]) {
return;
}

CHECK_EQ((int)info->mStatus, (int)BufferInfo::OWNED_BY_US);
sp<AMessage> notify = mCodec->mNotify->dup();
notify->setInt32("what", CodecBase::kWhatFillThisBuffer);
notify->setInt32("buffer-id", info->mBufferID);
info->mData->meta()->clear();
notify->setBuffer("buffer", info->mData);
sp<AMessage> reply = new AMessage(kWhatInputBufferFilled, mCodec);
reply->setInt32("buffer-id", info->mBufferID);
notify->setMessage("reply", reply);
notify->post();
info->mStatus = BufferInfo::OWNED_BY_UPSTREAM;
}
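
postFillThisBuffer flips the buffer's status from OWNED_BY_US to OWNED_BY_UPSTREAM. ACodec tracks every buffer with a small ownership state machine along these lines (simplified; the real BufferInfo carries more fields and states):

//sketch of ACodec's per-buffer ownership tracking
struct BufferInfo {
enum Status {
OWNED_BY_US, //ACodec is free to hand it out
OWNED_BY_COMPONENT, //the OMX component is filling or draining it
OWNED_BY_UPSTREAM, //the source side is writing input into it
OWNED_BY_DOWNSTREAM, //the consumer is reading output from it
OWNED_BY_NATIVE_WINDOW, //queued to the surface for display
};
IOMX::buffer_id mBufferID;
Status mStatus;
sp<ABuffer> mData;
};
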
case CodecBase::kWhatFillThisBuffer:
{

//..........
if (mFlags & kFlagIsAsync) {
if (!mHaveInputSurface) {
if (mState == FLUSHED) {
mHavePendingInputBuffers = true;
} else {
onInputBufferAvailable();
}
}
} else if (mFlags & kFlagDequeueInputPending) {
CHECK(handleDequeueInputBuffer(mDequeueInputReplyID));
++mDequeueInputTimeoutGeneration;
mFlags &= ~kFlagDequeueInputPending;
mDequeueInputReplyID = 0;
} else {
postActivityNotificationIfPossible();
}
break;
}
void MediaCodec::onInputBufferAvailable() {
int32_t index;
while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
sp<AMessage> msg = mCallback->dup();
msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
msg->setInt32("index", index);
msg->post();
}
}

Remember where this mCallback came from?

void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {

//.................
sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
mCodec->setCallback(reply);
//..................
}
status_t MediaCodec::setCallback(const sp<AMessage> &callback) {
sp<AMessage> msg = new AMessage(kWhatSetCallback, this);
msg->setMessage("callback", callback);

sp<AMessage> response;
return PostAndAwaitResponse(msg, &response);
}
case kWhatSetCallback:
{
sp<AReplyToken> replyID;
CHECK(msg->senderAwaitsResponse(&replyID));
sp<AMessage> callback;
CHECK(msg->findMessage("callback", &callback));

mCallback = callback;

if (mCallback != NULL) {
mFlags |= kFlagIsAsync;
} else {
mFlags &= ~kFlagIsAsync;
}

sp<AMessage> response = new AMessage;
response->postReply(replyID);
break;
}

So from the above we know that what runs next is the CB_INPUT_AVAILABLE branch under kWhatCodecNotify:

case MediaCodec::CB_INPUT_AVAILABLE:
{
int32_t index;
CHECK(msg->findInt32("index", &index));

handleAnInputBuffer(index);
break;
}
bool NuPlayer::Decoder::handleAnInputBuffer(size_t index) {
if (isDiscontinuityPending()) {
return false;
}

sp<ABuffer> buffer;
mCodec->getInputBuffer(index, &buffer);

if (buffer == NULL) {
handleError(UNKNOWN_ERROR);
return false;
}

if (index >= mInputBuffers.size()) {
for (size_t i = mInputBuffers.size(); i <= index; ++i) {
mInputBuffers.add();
mMediaBuffers.add();
mInputBufferIsDequeued.add();
mMediaBuffers.editItemAt(i) = NULL;
mInputBufferIsDequeued.editItemAt(i) = false;
}
}
mInputBuffers.editItemAt(index) = buffer;

//CHECK_LT(bufferIx, mInputBuffers.size());

if (mMediaBuffers[index] != NULL) {
mMediaBuffers[index]->release();
mMediaBuffers.editItemAt(index) = NULL;
}
mInputBufferIsDequeued.editItemAt(index) = true;

if (!mCSDsToSubmit.isEmpty()) {
sp<AMessage> msg = new AMessage();
msg->setSize("buffer-ix", index);

sp<ABuffer> buffer = mCSDsToSubmit.itemAt(0);
ALOGI("[%s] resubmitting CSD", mComponentName.c_str());
msg->setBuffer("buffer", buffer);
mCSDsToSubmit.removeAt(0);
CHECK(onInputBufferFetched(msg));
return true;
}

while (!mPendingInputMessages.empty()) {
sp<AMessage> msg = *mPendingInputMessages.begin();
if (!onInputBufferFetched(msg)) {
break;
}
mPendingInputMessages.erase(mPendingInputMessages.begin());
}

if (!mInputBufferIsDequeued.editItemAt(index)) {
return true;
}

mDequeuedInputBuffers.push_back(index);

onRequestInputBuffers();
return true;
}
void NuPlayer::DecoderBase::onRequestInputBuffers() {
if (mRequestInputBuffersPending) {
return;
}

// doRequestBuffers() return true if we should request more data
if (doRequestBuffers()) {
mRequestInputBuffersPending = true;

sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
msg->post(10 * 1000ll);
}
}
bool NuPlayer::Decoder::doRequestBuffers() {
// mRenderer is only NULL if we have a legacy widevine source that
// is not yet ready. In this case we must not fetch input.
if (isDiscontinuityPending() || mRenderer == NULL) {
return false;
}
status_t err = OK;
while (err == OK && !mDequeuedInputBuffers.empty()) {
size_t bufferIx = *mDequeuedInputBuffers.begin();
sp<AMessage> msg = new AMessage();
msg->setSize("buffer-ix", bufferIx);
err = fetchInputData(msg);
if (err != OK && err != ERROR_END_OF_STREAM) {
// if EOS, need to queue EOS buffer
break;
}
mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());

if (!mPendingInputMessages.empty()
|| !onInputBufferFetched(msg)) {
mPendingInputMessages.push_back(msg);
}
}

return err == -EWOULDBLOCK
&& mSource->feedMoreTSData() == OK;
}
status_t NuPlayer::Decoder::fetchInputData(sp<AMessage> &reply) {
sp<ABuffer> accessUnit;
bool dropAccessUnit;
do {
status_t err = mSource->dequeueAccessUnit(mIsAudio, &accessUnit);

if (err == -EWOULDBLOCK) {
return err;
} else if (err != OK) {
if (err == INFO_DISCONTINUITY) {
int32_t type;
CHECK(accessUnit->meta()->findInt32("discontinuity", &type));

bool formatChange =
(mIsAudio &&
(type & ATSParser::DISCONTINUITY_AUDIO_FORMAT))
|| (!mIsAudio &&
(type & ATSParser::DISCONTINUITY_VIDEO_FORMAT));

bool timeChange = (type & ATSParser::DISCONTINUITY_TIME) != 0;

ALOGI("%s discontinuity (format=%d, time=%d)",
mIsAudio ? "audio" : "video", formatChange, timeChange);

bool seamlessFormatChange = false;
sp<AMessage> newFormat = mSource->getFormat(mIsAudio);
if (formatChange) {
seamlessFormatChange =
supportsSeamlessFormatChange(newFormat);
// treat seamless format change separately
formatChange = !seamlessFormatChange;
}

// For format or time change, return EOS to queue EOS input,
// then wait for EOS on output.
if (formatChange /* not seamless */) {
mFormatChangePending = true;
err = ERROR_END_OF_STREAM;
} else if (timeChange) {
rememberCodecSpecificData(newFormat);
mTimeChangePending = true;
err = ERROR_END_OF_STREAM;
} else if (seamlessFormatChange) {
// reuse existing decoder and don't flush
rememberCodecSpecificData(newFormat);
continue;
} else {
// This stream is unaffected by the discontinuity
return -EWOULDBLOCK;
}
}

// reply should only be returned without a buffer set
// when there is an error (including EOS)
CHECK(err != OK);

reply->setInt32("err", err);
return ERROR_END_OF_STREAM;
}

dropAccessUnit = false;
if (!mIsAudio
&& !mIsSecure
&& mRenderer->getVideoLateByUs() > 100000ll
&& mIsVideoAVC
&& !IsAVCReferenceFrame(accessUnit)) {
dropAccessUnit = true;
++mNumInputFramesDropped;
}
} while (dropAccessUnit);

// ALOGV("returned a valid buffer of %s data", mIsAudio ? "mIsAudio" : "video");
#if 0
int64_t mediaTimeUs;
CHECK(accessUnit->meta()->findInt64("timeUs", &mediaTimeUs));
ALOGV("[%s] feeding input buffer at media time %.3f",
mIsAudio ? "audio" : "video",
mediaTimeUs / 1E6);
#endif

if (mCCDecoder != NULL) {
mCCDecoder->decode(accessUnit);
}

reply->setBuffer("buffer", accessUnit);

return OK;
}

HLS Overview

HTTP Live Streaming (HLS) is Apple's HTTP-based protocol for live and on-demand streaming, used mainly on iOS. Compared with conventional streaming protocols such as RTMP, RTSP, or MMS, HLS's biggest advantage is that it switches among streams of different bitrates according to network conditions: when the network is good it moves up to a higher-bitrate stream, and when it deteriorates it gradually falls back to lower bitrates. We will see this in the code below.

The HLS Framework

Next, the overall structure of an HLS system:

The video to be broadcast is first sent into an encoder, which encodes the video and audio and outputs them into an MPEG-2 transport stream. A segmenter then cuts the transport stream into a series of media segments of roughly equal duration; these segments are small files saved with a .ts suffix, and alongside them an index file pointing at them is generated, the familiar .m3u8 file. Once segmented, the index file and media files are uploaded to a web server. The client reads the index file and requests the listed media files in order; each download is a .ts file that must be unpacked into media data and decoded for playback. During a live broadcast the server keeps turning the latest data into new segment files and uploading them, so as long as the client keeps downloading and playing the files in order, the overall effect is a live stream. And because the segments are short, the client can switch to a source of a different bitrate according to its actual bandwidth, achieving multi-bitrate adaptation.
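
As a concrete (made-up) example, the first file the client fetches is a master playlist like the following, listing one variant stream per bandwidth:

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=640x360,CODECS="avc1.4d401e,mp4a.40.2"
low/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1200000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
hd/index.m3u8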

M3U8 Tags:

For details on the M3U8 tags, see this post:
http://blog.csdn.net/jwzhangjie/article/details/9744027

HLS Playback Flow

  1. Fetch the file that lists, for each bandwidth, the resource URI plus audio/video codec and resolution info
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=899152,RESOLUTION=480x270,CODECS="avc1.4d4015,mp4a.40.5"
    http://hls.ftdp.com/video1_widld/m3u8/01.m3u8
  2. Initialize the matching codecs based on that information
  3. Fetch the segment index list (the index file) for the chosen resource
    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:6532
    #EXT-X-KEY:METHOD=AES-128,URI="18319965201.key"
    #EXTINF:10,
    20125484T125708-01-6533.ts
    #EXT-X-KEY:METHOD=AES-128,URI="14319965205.key"
    #EXTINF:10,
    20125484T125708-01-6534.ts
    ....
    #EXTINF:8,
    20140804T125708-01-6593.ts
  4. Fetch the decryption key for a given segment
  5. Request and download a given segment
  6. Decide from the current bandwidth whether to switch to a different variant (see the sketch after this list)
  7. Decrypt the downloaded segment and feed it to the decoder

The creation of NuPlayerDriver and the setDataSource flow are largely the same as in StagefrightPlayer. The difference is that, depending on the URL, setDataSource creates one of three DataSources: HttpLiveSource, RTSPSource, or GenericSource. I won't spend much space on it; the diagram below sums it up:


Let's start the analysis from prepare, with the HLS basics above in mind:

status_t NuPlayerDriver::prepare() {
ALOGV("prepare(%p)", this);
Mutex::Autolock autoLock(mLock);
return prepare_l();
}

status_t NuPlayerDriver::prepare_l() {
switch (mState) {
case STATE_UNPREPARED:
mState = STATE_PREPARING;
// Make sure we're not posting any notifications, success or
// failure information is only communicated through our result
// code.
mIsAsyncPrepare = false;
mPlayer->prepareAsync();
while (mState == STATE_PREPARING) {
mCondition.wait(mLock);
}
return (mState == STATE_PREPARED) ? OK : UNKNOWN_ERROR;
case STATE_STOPPED:
//......
default:
return INVALID_OPERATION;
};
}

After the setDataSource phase the state variable mState is STATE_UNPREPARED, so NuPlayerDriver::prepare_l() actually calls mPlayer->prepareAsync(), i.e. NuPlayer's prepareAsync method.

void NuPlayer::prepareAsync() {
//post a kWhatPrepare message
(new AMessage(kWhatPrepare, this))->post();
}

NuPlayer::prepareAsync merely posts a kWhatPrepare message. The corresponding handler processes it as follows:

void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
//other cases omitted
case kWhatPrepare:
{
//call the Source's prepareAsync; here we follow HTTPLiveSource
mSource->prepareAsync();
break;
}
//other cases omitted
}

Here the Source's prepareAsync is invoked directly. mSource was set during the setDataSource phase; since we only analyze the HLS case, we look at HTTPLiveSource's prepareAsync.

void NuPlayer::HTTPLiveSource::prepareAsync() {
//create and start a looper
if (mLiveLooper == NULL) {
mLiveLooper = new ALooper;
mLiveLooper->setName("http live");
mLiveLooper->start();
mLiveLooper->registerHandler(this);
}
//create a kWhatSessionNotify message handed to LiveSession for notifications
sp<AMessage> notify = new AMessage(kWhatSessionNotify, this);
//create a LiveSession
mLiveSession = new LiveSession(
notify,
(mFlags & kFlagIncognito) ? LiveSession::kFlagIncognito : 0,
mHTTPService);
mLiveLooper->registerHandler(mLiveSession);
//connect asynchronously through the LiveSession
mLiveSession->connectAsync(mURL.c_str(), mExtraHeaders.isEmpty() ? NULL : &mExtraHeaders);
}
void LiveSession::connectAsync(const char *url, const KeyedVector<String8, String8> *headers) {
//create a kWhatConnect message carrying the url
sp<AMessage> msg = new AMessage(kWhatConnect, this);
msg->setString("url", url);
if (headers != NULL) {
msg->setPointer("headers",new KeyedVector<String8, String8>(*headers));
}
msg->post();
}
void LiveSession::onMessageReceived(const sp<AMessage> &msg) {
case kWhatConnect:
{
//dispatch to onConnect
onConnect(msg);
break;
}
}
void LiveSession::onConnect(const sp<AMessage> &msg) {
//read the URL that was passed in
CHECK(msg->findString("url", &mMasterURL));
KeyedVector<String8, String8> *headers = NULL;
if (!msg->findPointer("headers", (void **)&headers)) {
mExtraHeaders.clear();
} else {
mExtraHeaders = *headers;
delete headers;
headers = NULL;
}
//create the fetcher looper
if (mFetcherLooper == NULL) {
mFetcherLooper = new ALooper();
mFetcherLooper->setName("Fetcher");
mFetcherLooper->start(false, false);
}
//fetch the master playlist: per-bandwidth resource URIs plus codec info
addFetcher(mMasterURL.c_str())->fetchPlaylistAsync();
}

This is where we start fetching the per-bandwidth resource URIs and the audio/video codec information:

sp<PlaylistFetcher> LiveSession::addFetcher(const char *uri) {

ssize_t index = mFetcherInfos.indexOfKey(uri);
sp<AMessage> notify = new AMessage(kWhatFetcherNotify, this);
notify->setString("uri", uri);
notify->setInt32("switchGeneration", mSwitchGeneration);
FetcherInfo info;
//create a PlaylistFetcher and return it
info.mFetcher = new PlaylistFetcher(notify, this, uri, mCurBandwidthIndex, mSubtitleGeneration);
info.mDurationUs = -1ll;
info.mToBeRemoved = false;
info.mToBeResumed = false;
mFetcherLooper->registerHandler(info.mFetcher);
mFetcherInfos.add(uri, info);
//info.mFetcher is the PlaylistFetcher newed above
return info.mFetcher;
}

Through the PlaylistFetcher returned here we call fetchPlaylistAsync to fetch the playlist:

void PlaylistFetcher::fetchPlaylistAsync() {
(new AMessage(kWhatFetchPlaylist, this))->post();
}

void PlaylistFetcher::onMessageReceived(const sp<AMessage> &msg) {
case kWhatFetchPlaylist:
{
bool unchanged;
//download and parse the playlist into an M3UParser
sp<M3UParser> playlist = mHTTPDownloader->fetchPlaylist(mURI.c_str(), NULL /* curPlaylistHash */, &unchanged);
sp<AMessage> notify = mNotify->dup();
notify->setInt32("what", kWhatPlaylistFetched);
//hand the playlist back
notify->setObject("playlist", playlist);
notify->post();
break;
}
}

Next, the fetchFile path: fetchFile downloads the m3u8 playlist content from the server into a buffer, and the buffered data is then wrapped into an M3UParser:

sp<M3UParser> HTTPDownloader::fetchPlaylist(
const char *url, uint8_t *curPlaylistHash, bool *unchanged) {

*unchanged = false;
sp<ABuffer> buffer;
String8 actualUrl;
//download the playlist
ssize_t err = fetchFile(url, &buffer, &actualUrl);
//disconnect
mHTTPDataSource->disconnect();
//wrap the downloaded data in an M3UParser
sp<M3UParser> playlist = new M3UParser(actualUrl.string(), buffer->data(), buffer->size());
return playlist;
}
ssize_t HTTPDownloader::fetchFile(
const char *url, sp<ABuffer> *out, String8 *actualUrl) {
ssize_t err = fetchBlock(url, out, 0, -1, 0, actualUrl, true /* reconnect */);
// close off the connection after use
mHTTPDataSource->disconnect();
return err;
}

Here is the M3UParser constructor:

M3UParser::M3UParser(
const char *baseURI, const void *data, size_t size)
: mInitCheck(NO_INIT),
mBaseURI(baseURI),
mIsExtM3U(false),
mIsVariantPlaylist(false),
mIsComplete(false),
mIsEvent(false),
mFirstSeqNumber(-1),
mLastSeqNumber(-1),
mTargetDurationUs(-1ll),
mDiscontinuitySeq(0),
mDiscontinuityCount(0),
mSelectedIndex(-1) {
mInitCheck = parse(data, size);
}

At the end, parse is called to parse the buffered data:

status_t M3UParser::parse(const void *_data, size_t size) {
int32_t lineNo = 0;
sp<AMessage> itemMeta;
const char *data = (const char *)_data;
size_t offset = 0;
uint64_t segmentRangeOffset = 0;
while (offset < size) {
size_t offsetLF = offset;
while (offsetLF < size && data[offsetLF] != '\n') {
++offsetLF;
}
AString line;
if (offsetLF > offset && data[offsetLF - 1] == '\r') {
line.setTo(&data[offset], offsetLF - offset - 1);
} else {
line.setTo(&data[offset], offsetLF - offset);
}

if (line.empty()) {
offset = offsetLF + 1;
continue;
}
if (lineNo == 0 && line == "#EXTM3U") {
mIsExtM3U = true;
}
if (mIsExtM3U) {
status_t err = OK;
if (line.startsWith("#EXT-X-TARGETDURATION")) {
if (mIsVariantPlaylist) {
return ERROR_MALFORMED;
}
err = parseMetaData(line, &mMeta, "target-duration");
} else if (line.startsWith("#EXT-X-MEDIA-SEQUENCE")) {
if (mIsVariantPlaylist) {
return ERROR_MALFORMED;
}
err = parseMetaData(line, &mMeta, "media-sequence");
} else if (line.startsWith("#EXT-X-KEY")) {
if (mIsVariantPlaylist) {
return ERROR_MALFORMED;
}
err = parseCipherInfo(line, &itemMeta, mBaseURI);
} else if (line.startsWith("#EXT-X-ENDLIST")) {
mIsComplete = true;
} else if (line.startsWith("#EXT-X-PLAYLIST-TYPE:EVENT")) {
mIsEvent = true;
} else if (line.startsWith("#EXTINF")) {
if (mIsVariantPlaylist) {
return ERROR_MALFORMED;
}
err = parseMetaDataDuration(line, &itemMeta, "durationUs");
} else if (line.startsWith("#EXT-X-DISCONTINUITY")) {
if (mIsVariantPlaylist) {
return ERROR_MALFORMED;
}
if (itemMeta == NULL) {
itemMeta = new AMessage;
}
itemMeta->setInt32("discontinuity", true);
++mDiscontinuityCount;
} else if (line.startsWith("#EXT-X-STREAM-INF")) {
if (mMeta != NULL) {
return ERROR_MALFORMED;
}
mIsVariantPlaylist = true;
err = parseStreamInf(line, &itemMeta);
} else if (line.startsWith("#EXT-X-BYTERANGE")) {
if (mIsVariantPlaylist) {
return ERROR_MALFORMED;
}
uint64_t length, offset;
err = parseByteRange(line, segmentRangeOffset, &length, &offset);
if (err == OK) {
if (itemMeta == NULL) {
itemMeta = new AMessage;
}
itemMeta->setInt64("range-offset", offset);
itemMeta->setInt64("range-length", length);
segmentRangeOffset = offset + length;
}
} else if (line.startsWith("#EXT-X-MEDIA")) {
err = parseMedia(line);
} else if (line.startsWith("#EXT-X-DISCONTINUITY-SEQUENCE")) {
if (mIsVariantPlaylist) {
return ERROR_MALFORMED;
}
size_t seq;
err = parseDiscontinuitySequence(line, &seq);
if (err == OK) {
mDiscontinuitySeq = seq;
}
}
if (err != OK) {
return err;
}
}
if (!line.startsWith("#")) {
if (!mIsVariantPlaylist) {
int64_t durationUs;
if (itemMeta == NULL
|| !itemMeta->findInt64("durationUs", &durationUs)) {
return ERROR_MALFORMED;
}
itemMeta->setInt32("discontinuity-sequence",
mDiscontinuitySeq + mDiscontinuityCount);
}
mItems.push();
Item *item = &mItems.editItemAt(mItems.size() - 1);
CHECK(MakeURL(mBaseURI.c_str(), line.c_str(), &item->mURI));
item->mMeta = itemMeta;
itemMeta.clear();
}
offset = offsetLF + 1;
++lineNo;
}

if (!mIsVariantPlaylist) {
int32_t targetDurationSecs;
if (mMeta == NULL || !mMeta->findInt32(
"target-duration", &targetDurationSecs)) {
ALOGE("Media playlist missing #EXT-X-TARGETDURATION");
return ERROR_MALFORMED;
}
mTargetDurationUs = targetDurationSecs * 1000000ll;
mFirstSeqNumber = 0;
if (mMeta != NULL) {
mMeta->findInt32("media-sequence", &mFirstSeqNumber);
}
mLastSeqNumber = mFirstSeqNumber + mItems.size() - 1;
}
return OK;
}

Good, we now hold the playlist as an M3UParser, and a kWhatPlaylistFetched message is posted. Where is it handled? In LiveSession, of course:

case PlaylistFetcher::kWhatPlaylistFetched:
{
onMasterPlaylistFetched(msg);
break;
}

What do we do once the master playlist is fetched? Read on:

void LiveSession::onMasterPlaylistFetched(const sp<AMessage> &msg) {

AString uri;
CHECK(msg->findString("uri", &uri));
ssize_t index = mFetcherInfos.indexOfKey(uri);
// no longer useful, remove
mFetcherLooper->unregisterHandler(mFetcherInfos[index].mFetcher->id());
mFetcherInfos.removeItemsAt(index);
//take out the fetched playlist
CHECK(msg->findObject("playlist", (sp<RefBase> *)&mPlaylist));

// We trust the content provider to make a reasonable choice of preferred
// initial bandwidth by listing it first in the variant playlist.
// At startup we really don't have a good estimate on the available
// network bandwidth since we haven't transferred any data yet. Once
// we have we can make a better informed choice.
size_t initialBandwidth = 0;
size_t initialBandwidthIndex = 0;
int32_t maxWidth = 0;
int32_t maxHeight = 0;
//check whether the fetched playlist is a variant (master) playlist; we assume here that it is
if (mPlaylist->isVariantPlaylist()) {
Vector<BandwidthItem> itemsWithVideo;
for (size_t i = 0; i < mPlaylist->size(); ++i) {
BandwidthItem item;
item.mPlaylistIndex = i;
item.mLastFailureUs = -1ll;
sp<AMessage> meta;
AString uri;
mPlaylist->itemAt(i, &uri, &meta);
//read the bandwidth
CHECK(meta->findInt32("bandwidth", (int32_t *)&item.mBandwidth));
//track the maximum resolution
int32_t width, height;
if (meta->findInt32("width", &width)) {
maxWidth = max(maxWidth, width);
}
if (meta->findInt32("height", &height)) {
maxHeight = max(maxHeight, height);
}
mBandwidthItems.push(item);
if (mPlaylist->hasType(i, "video")) {
itemsWithVideo.push(item);
}
}
//if some variants have video, drop the audio-only ones
if (!itemsWithVideo.empty()&& itemsWithVideo.size() < mBandwidthItems.size()) {
mBandwidthItems.clear();
for (size_t i = 0; i < itemsWithVideo.size(); ++i) {
mBandwidthItems.push(itemsWithVideo[i]);
}
}
CHECK_GT(mBandwidthItems.size(), 0u);
initialBandwidth = mBandwidthItems[0].mBandwidth;
//sort by bandwidth
mBandwidthItems.sort(SortByBandwidth);
for (size_t i = 0; i < mBandwidthItems.size(); ++i) {
if (mBandwidthItems.itemAt(i).mBandwidth == initialBandwidth) {
initialBandwidthIndex = i;
break;
}
}
} else {
//......
}
//record the maximum resolution
mMaxWidth = maxWidth > 0 ? maxWidth : mMaxWidth;
mMaxHeight = maxHeight > 0 ? maxHeight : mMaxHeight;
mPlaylist->pickRandomMediaItems();
changeConfiguration(0ll /* timeUs */, initialBandwidthIndex, false /* pickTrack */);
}
void LiveSession::changeConfiguration(int64_t timeUs, ssize_t bandwidthIndex, bool pickTrack) {

//cancel any in-flight bandwidth switch
cancelBandwidthSwitch();
mReconfigurationInProgress = true;
//switch from mOrigBandwidthIndex to mCurBandwidthIndex
if (bandwidthIndex >= 0) {
//remember the index we are switching away from
mOrigBandwidthIndex = mCurBandwidthIndex;
mCurBandwidthIndex = bandwidthIndex;
if (mOrigBandwidthIndex != mCurBandwidthIndex) {
//start the bandwidth switch
ALOGI("#### Starting Bandwidth Switch: %zd => %zd",mOrigBandwidthIndex, mCurBandwidthIndex);
}
}
CHECK_LT(mCurBandwidthIndex, mBandwidthItems.size());
//the BandwidthItem being switched to
const BandwidthItem &item = mBandwidthItems.itemAt(mCurBandwidthIndex);
uint32_t streamMask = 0; // streams that should be fetched by the new fetcher
uint32_t resumeMask = 0; // streams that should be fetched by the original fetcher
AString URIs[kMaxStreams];
for (size_t i = 0; i < kMaxStreams; ++i) {
if (mPlaylist->getTypeURI(item.mPlaylistIndex, mStreams[i].mType, &URIs[i])) {
streamMask |= indexToType(i);
}
}

// stop the fetchers we no longer need and pause the ones we'll reuse; on the first pass there are none, so this is skipped
for (size_t i = 0; i < mFetcherInfos.size(); ++i) {
//.........................
}

sp<AMessage> msg;
if (timeUs < 0ll) {
// skip onChangeConfiguration2 (decoder destruction) if not seeking.
msg = new AMessage(kWhatChangeConfiguration3, this);
} else {
msg = new AMessage(kWhatChangeConfiguration2, this);
}
msg->setInt32("streamMask", streamMask);
msg->setInt32("resumeMask", resumeMask);
msg->setInt32("pickTrack", pickTrack);
msg->setInt64("timeUs", timeUs);
for (size_t i = 0; i < kMaxStreams; ++i) {
if ((streamMask | resumeMask) & indexToType(i)) {
msg->setString(mStreams[i].uriKey().c_str(), URIs[i].c_str());
}
}

// Every time a fetcher acknowledges the stopAsync or pauseAsync request
// we'll decrement mContinuationCounter, once it reaches zero, i.e. all
// fetchers have completed their asynchronous operation, we'll post
// mContinuation, which then is handled below in onChangeConfiguration2.
mContinuationCounter = mFetcherInfos.size();
mContinuation = msg;
if (mContinuationCounter == 0) {
msg->post();
}
}
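
The countdown at the end is a recurring LiveSession pattern: the follow-up message is stashed in mContinuation, every fetcher's stop/pause acknowledgment decrements mContinuationCounter, and only the last acknowledgment posts the stored message. A sketch of the idea (hypothetical handler name):

//hypothetical sketch of the continuation-countdown pattern
void onFetcherAcknowledged() {
if (mContinuationCounter > 0 && --mContinuationCounter == 0) {
//all outstanding async stops/pauses have finished; resume the switch
mContinuation->post();
}
}
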
void LiveSession::onChangeConfiguration2(const sp<AMessage> &msg) {


int64_t timeUs;
CHECK(msg->findInt64("timeUs", &timeUs));

if (timeUs >= 0) {
mLastSeekTimeUs = timeUs;
mLastDequeuedTimeUs = timeUs;
for (size_t i = 0; i < mPacketSources.size(); i++) {
sp<AnotherPacketSource> packetSource = mPacketSources.editValueAt(i);
sp<MetaData> format = packetSource->getFormat();
packetSource->clear();
packetSource->setFormat(format);
}
for (size_t i = 0; i < kMaxStreams; ++i) {
mStreams[i].reset();
}
mDiscontinuityOffsetTimesUs.clear();
mDiscontinuityAbsStartTimesUs.clear();

if (mSeekReplyID != NULL) {
CHECK(mSeekReply != NULL);
mSeekReply->setInt32("err", OK);
mSeekReply->postReply(mSeekReplyID);
mSeekReplyID.clear();
mSeekReply.clear();
}
restartPollBuffering();
}

uint32_t streamMask, resumeMask;
CHECK(msg->findInt32("streamMask", (int32_t *)&streamMask));
CHECK(msg->findInt32("resumeMask", (int32_t *)&resumeMask));

streamMask |= resumeMask;

AString URIs[kMaxStreams];
for (size_t i = 0; i < kMaxStreams; ++i) {
if (streamMask & indexToType(i)) {
const AString &uriKey = mStreams[i].uriKey();
CHECK(msg->findString(uriKey.c_str(), &URIs[i]));
ALOGV("%s = '%s'", uriKey.c_str(), URIs[i].c_str());
}
}

uint32_t changedMask = 0;
for (size_t i = 0; i < kMaxStreams && i != kSubtitleIndex; ++i) {
// stream URI could change even if onChangeConfiguration2 is only
// used for seek. Seek could happen during a bw switch, in this
// case bw switch will be cancelled, but the seekTo position will
// fetch from the new URI.
if ((mStreamMask & streamMask & indexToType(i))
&& !mStreams[i].mUri.empty()
&& !(URIs[i] == mStreams[i].mUri)) {
ALOGV("stream %zu changed: oldURI %s, newURI %s", i,
mStreams[i].mUri.c_str(), URIs[i].c_str());
sp<AnotherPacketSource> source = mPacketSources.valueFor(indexToType(i));
if (source->getLatestDequeuedMeta() != NULL) {
source->queueDiscontinuity(ATSParser::DISCONTINUITY_FORMATCHANGE, NULL, true);
}
}
// Determine which decoders to shutdown on the player side,
// a decoder has to be shutdown if its streamtype was active
// before but no longer is.
if ((mStreamMask & ~streamMask & indexToType(i))) {
changedMask |= indexToType(i);
}
}

// This triggers kWhatStreamsChanged on the player side
sp<AMessage> notify = mNotify->dup();
notify->setInt32("what", kWhatStreamsChanged);
notify->setInt32("changedMask", changedMask);
// Set kWhatChangeConfiguration3 as the reply message
msg->setWhat(kWhatChangeConfiguration3);
msg->setTarget(this);
notify->setMessage("reply", msg);
notify->post();
}
case LiveSession::kWhatStreamsChanged:
{
uint32_t changedMask;
CHECK(msg->findInt32("changedMask", (int32_t *)&changedMask));
// Determine which streams changed
bool audio = changedMask & LiveSession::STREAMTYPE_AUDIO;
bool video = changedMask & LiveSession::STREAMTYPE_VIDEO;
sp<AMessage> reply;
CHECK(msg->findMessage("reply", &reply));
sp<AMessage> notify = dupNotify();
notify->setInt32("what", kWhatQueueDecoderShutdown);
notify->setInt32("audio", audio);
notify->setInt32("video", video);
notify->setMessage("reply", reply);
notify->post();
break;
}
case Source::kWhatQueueDecoderShutdown:
{
int32_t audio, video;
CHECK(msg->findInt32("audio", &audio));
CHECK(msg->findInt32("video", &video));
sp<AMessage> reply;
CHECK(msg->findMessage("reply", &reply));
queueDecoderShutdown(audio, video, reply);
break;
}
void NuPlayer::queueDecoderShutdown(
bool audio, bool video, const sp<AMessage> &reply) {
ALOGI("queueDecoderShutdown audio=%d, video=%d", audio, video);
mDeferredActions.push_back(new FlushDecoderAction(audio ? FLUSH_CMD_SHUTDOWN : FLUSH_CMD_NONE,video ? FLUSH_CMD_SHUTDOWN : FLUSH_CMD_NONE));
mDeferredActions.push_back(new SimpleAction(&NuPlayer::performScanSources));
mDeferredActions.push_back(new PostMessageAction(reply));
processDeferredActions();
}
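queueDecoderShutdown shows NuPlayer's deferred-action mechanism: work items are wrapped in Action objects, queued in order, and processDeferredActions drains them whenever the player is in a state where it is safe to proceed. A reduced sketch of that command-queue pattern (hypothetical types, not NuPlayer's actual classes):

// Command-queue sketch of the deferred-action pattern (hypothetical, simplified).
#include <deque>
#include <memory>

struct Player;
struct Action {
    virtual ~Action() = default;
    virtual void execute(Player *player) = 0;
};

struct Player {
    std::deque<std::unique_ptr<Action>> deferred;
    bool busy = false;  // e.g. a flush is still in flight

    void processDeferredActions() {
        while (!busy && !deferred.empty()) {  // stop as soon as an action blocks
            std::unique_ptr<Action> action = std::move(deferred.front());
            deferred.pop_front();
            action->execute(this);
        }
    }
};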

The FlushDecoderAction pushed above in turn calls performDecoderFlush:

struct NuPlayer::FlushDecoderAction : public Action {
FlushDecoderAction(FlushCommand audio, FlushCommand video)
: mAudio(audio),
mVideo(video) {
}
virtual void execute(NuPlayer *player) {
player->performDecoderFlush(mAudio, mVideo);
}
private:
FlushCommand mAudio;
FlushCommand mVideo;
DISALLOW_EVIL_CONSTRUCTORS(FlushDecoderAction);
};
void NuPlayer::performDecoderFlush(FlushCommand audio, FlushCommand video) {
ALOGV("performDecoderFlush audio=%d, video=%d", audio, video);
if ((audio == FLUSH_CMD_NONE || mAudioDecoder == NULL)&& (video == FLUSH_CMD_NONE || mVideoDecoder == NULL)) {
return;
}
if (audio != FLUSH_CMD_NONE && mAudioDecoder != NULL) {
flushDecoder(true /* audio */, (audio == FLUSH_CMD_SHUTDOWN));
}
if (video != FLUSH_CMD_NONE && mVideoDecoder != NULL) {
flushDecoder(false /* audio */, (video == FLUSH_CMD_SHUTDOWN));
}
}
void NuPlayer::flushDecoder(bool audio, bool needShutdown) {
ALOGV("[%s] flushDecoder needShutdown=%d",
audio ? "audio" : "video", needShutdown);

const sp<DecoderBase> &decoder = getDecoder(audio);
if (decoder == NULL) {
ALOGI("flushDecoder %s without decoder present",audio ? "audio" : "video");
return;
}
//...........
}

Next, let's look at the decoder initialization part:

void NuPlayer::postScanSources() {
if (mScanSourcesPending) {
return;
}
sp<AMessage> msg = new AMessage(kWhatScanSources, this);
msg->setInt32("generation", mScanSourcesGeneration);
msg->post();
mScanSourcesPending = true;
}
case kWhatScanSources:
{
int32_t generation;

mScanSourcesPending = false;

bool mHadAnySourcesBefore =
(mAudioDecoder != NULL) || (mVideoDecoder != NULL);

// initialize video before audio because successful initialization of
// video may change deep buffer mode of audio.
if (mSurface != NULL) {
instantiateDecoder(false, &mVideoDecoder);
}

// Don't try to re-open audio sink if there's an existing decoder.
if (mAudioSink != NULL && mAudioDecoder == NULL) {
instantiateDecoder(true, &mAudioDecoder);
}
}
status_t NuPlayer::instantiateDecoder(bool audio, sp<DecoderBase> *decoder) {

// Get the track format from the source
sp<AMessage> format = mSource->getFormat(audio);
format->setInt32("priority", 0 /* realtime */);

if (audio) {
sp<AMessage> notify = new AMessage(kWhatAudioNotify, this);
++mAudioDecoderGeneration;
notify->setInt32("generation", mAudioDecoderGeneration);
determineAudioModeChange();
if (mOffloadAudio) {
//....................
} else {
*decoder = new Decoder(notify, mSource, mPID, mRenderer);
}
} else {
sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
++mVideoDecoderGeneration;
notify->setInt32("generation", mVideoDecoderGeneration);
*decoder = new Decoder(notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);
//...........................
}
// Initialize the decoder
(*decoder)->init();
// Configure the decoder
(*decoder)->configure(format);
//.........
return OK;
}

This is where the decoder is created and initialized.

void NuPlayer::DecoderBase::configure(const sp<AMessage> &format) {
sp<AMessage> msg = new AMessage(kWhatConfigure, this);
msg->setMessage("format", format);
msg->post();
}

void NuPlayer::DecoderBase::init() {
mDecoderLooper->registerHandler(this);
}
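Notice the shape of both functions: configure does no work itself, it only packages the format into an AMessage and posts it, and the registered handler later processes it on the decoder's looper thread. Stripped of the stagefright ALooper/AHandler/AMessage classes, the underlying post-and-handle pattern is just a message queue drained by one thread, roughly:

// Post-and-handle sketch in plain C++ (the real code uses ALooper/AHandler).
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class Looper {
    std::queue<std::function<void()>> queue_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stopped_ = false;
    std::thread thread_;  // started last, so the members above are ready
public:
    Looper() : thread_([this] { loop(); }) {}
    ~Looper() {
        { std::lock_guard<std::mutex> l(m_); stopped_ = true; }
        cv_.notify_one();
        thread_.join();
    }
    void post(std::function<void()> msg) {  // like AMessage::post()
        { std::lock_guard<std::mutex> l(m_); queue_.push(std::move(msg)); }
        cv_.notify_one();
    }
private:
    void loop() {  // handler thread, like onMessageReceived()
        for (;;) {
            std::function<void()> msg;
            {
                std::unique_lock<std::mutex> l(m_);
                cv_.wait(l, [this] { return stopped_ || !queue_.empty(); });
                if (stopped_ && queue_.empty()) return;
                msg = std::move(queue_.front());
                queue_.pop();
            }
            msg();  // all handler work runs on this one thread
        }
    }
};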

void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {

// Create the MediaCodec
mCodec = MediaCodec::CreateByType(mCodecLooper, mime.c_str(), false /* encoder */, NULL /* err */, mPid);
// Configure the MediaCodec
err = mCodec->configure(format, mSurface, NULL /* crypto */, 0 /* flags */);
// For video, record the width and height
if (!mIsAudio) {
int32_t width, height;
if (mOutputFormat->findInt32("width", &width)&& mOutputFormat->findInt32("height", &height)) {
mStats->setInt32("width", width);
mStats->setInt32("height", height);
}
}
// Start the MediaCodec
err = mCodec->start();
}
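For reference, this create → configure → start triple is exactly what an application drives through the public codec API. A minimal NDK sketch (error handling is mostly omitted, and the 1280x720 AVC format is just an example):

// App-side equivalent of the create/configure/start calls above, via the NDK.
#include <android/native_window.h>
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>

AMediaCodec *createVideoDecoder(ANativeWindow *surface) {
    AMediaCodec *codec = AMediaCodec_createDecoderByType("video/avc");
    if (codec == NULL) return NULL;

    AMediaFormat *format = AMediaFormat_new();
    AMediaFormat_setString(format, AMEDIAFORMAT_KEY_MIME, "video/avc");
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_WIDTH, 1280);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_HEIGHT, 720);

    // configure() and start() ultimately reach the MediaCodec paths analyzed here.
    media_status_t err = AMediaCodec_configure(codec, format, surface,
                                               NULL /* crypto */, 0 /* flags */);
    AMediaFormat_delete(format);
    if (err != AMEDIA_OK || AMediaCodec_start(codec) != AMEDIA_OK) {
        AMediaCodec_delete(codec);
        return NULL;
    }
    return codec;
}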
sp<MediaCodec> MediaCodec::CreateByType(const sp<ALooper> &looper, const char *mime, bool encoder, status_t *err, pid_t pid) {
sp<MediaCodec> codec = new MediaCodec(looper, pid);
const status_t ret = codec->init(mime, true /* nameIsType */, encoder);
return ret == OK ? codec : NULL; // NULL deallocates codec.
}

Looking inside MediaCodec::init, we can see that mCodec is an ACodec object:

status_t MediaCodec::init(const AString &name, bool nameIsType, bool encoder) {
mResourceManagerService->init();
if (nameIsType || !strncasecmp(name.c_str(), "omx.", 4)) {
// Create the codec component based on the name/type
mCodec = new ACodec;
} else if (!nameIsType&& !strncasecmp(name.c_str(), "android.filter.", 15)) {
} else {
}
sp<AMessage> msg = new AMessage(kWhatInit, this);
msg->setString("name", name);
msg->setInt32("nameIsType", nameIsType);
if (nameIsType) {
msg->setInt32("encoder", encoder);
}
return err;
}
case kWhatInit:
{
//....................
mCodec->initiateAllocateComponent(format);
break;
}
void ACodec::initiateAllocateComponent(const sp<AMessage> &msg) {
msg->setWhat(kWhatAllocateComponent);
msg->setTarget(this);
msg->post();
}
case ACodec::kWhatAllocateComponent:
{
onAllocateComponent(msg);
handled = true;
break;
}

This is where the codec component is instantiated and its state set:

bool ACodec::UninitializedState::onAllocateComponent(const sp<AMessage> &msg) {

Vector<OMXCodec::CodecNameAndQuirks> matchingCodecs;
AString mime;
AString componentName;
uint32_t quirks = 0;
int32_t encoder = false;
if (msg->findString("componentName", &componentName)) {
ssize_t index = matchingCodecs.add();
OMXCodec::CodecNameAndQuirks *entry = &matchingCodecs.editItemAt(index);
entry->mName = String8(componentName.c_str());

if (!OMXCodec::findCodecQuirks(componentName.c_str(), &entry->mQuirks)) {
entry->mQuirks = 0;
}
} else {
CHECK(msg->findString("mime", &mime));
if (!msg->findInt32("encoder", &encoder)) {
encoder = false;
}
OMXCodec::findMatchingCodecs(
mime.c_str(),
encoder, // createEncoder
NULL, // matchComponentName
0, // flags
&matchingCodecs);
}

sp<CodecObserver> observer = new CodecObserver;
IOMX::node_id node = 0;

status_t err = NAME_NOT_FOUND;
for (size_t matchIndex = 0; matchIndex < matchingCodecs.size();++matchIndex) {
componentName = matchingCodecs.itemAt(matchIndex).mName.string();
quirks = matchingCodecs.itemAt(matchIndex).mQuirks;

pid_t tid = gettid();
int prevPriority = androidGetThreadPriority(tid);
androidSetThreadPriority(tid, ANDROID_PRIORITY_FOREGROUND);
err = omx->allocateNode(componentName.c_str(), observer, &node);
androidSetThreadPriority(tid, prevPriority);
if (err == OK) {
break; // component allocated successfully, keep this node
}
node = 0;
}

sp<AMessage> notify = new AMessage(kWhatOMXMessageList, mCodec);
observer->setNotificationMessage(notify);

mCodec->mComponentName = componentName;
mCodec->mRenderTracker.setComponentName(componentName);
mCodec->mFlags = 0;
mCodec->mQuirks = quirks;
mCodec->mOMX = omx;
mCodec->mNode = node;

{
sp<AMessage> notify = mCodec->mNotify->dup();
notify->setInt32("what", CodecBase::kWhatComponentAllocated);
notify->setString("componentName", mCodec->mComponentName.c_str());
notify->post();
}

mCodec->changeState(mCodec->mLoadedState);
return true;
}
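onAllocateComponent ends with changeState(mCodec->mLoadedState): ACodec is a message-driven state machine in which each state object decides which messages it accepts, so kWhatAllocateComponent is handled by UninitializedState while kWhatConfigureComponent (below) is handled by LoadedState. The skeleton of that pattern, reduced to standalone C++ (all names here are illustrative):

// State-machine skeleton in the ACodec style (hypothetical, heavily reduced).
#include <cstdio>

struct Codec;

struct State {
    virtual ~State() = default;
    // returns true if this state consumed the message
    virtual bool onMessageReceived(Codec *codec, int what) = 0;
};

struct Codec {
    State *current = nullptr;
    void changeState(State *next) { current = next; }
    void deliver(int what) {
        if (current == nullptr || !current->onMessageReceived(this, what)) {
            std::printf("message %d not handled in this state\n", what);
        }
    }
};

enum { kWhatAllocateComponent = 1, kWhatConfigureComponent = 2 };

struct LoadedState : State {
    bool onMessageReceived(Codec *, int what) override {
        return what == kWhatConfigureComponent;  // configure is legal when loaded
    }
};

struct UninitializedState : State {
    LoadedState loaded;
    bool onMessageReceived(Codec *codec, int what) override {
        if (what != kWhatAllocateComponent) return false;
        codec->changeState(&loaded);  // mirrors changeState(mLoadedState)
        return true;
    }
};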

Next, the decoder configuration:

status_t MediaCodec::configure(
const sp<AMessage> &format,
const sp<Surface> &surface,
const sp<ICrypto> &crypto,
uint32_t flags) {
sp<AMessage> msg = new AMessage(kWhatConfigure, this);

if (mIsVideo) {
format->findInt32("width", &mVideoWidth);
format->findInt32("height", &mVideoHeight);
if (!format->findInt32("rotation-degrees", &mRotationDegrees)) {
mRotationDegrees = 0;
}
}

msg->setMessage("format", format);
msg->setInt32("flags", flags);
msg->setObject("surface", surface);

//.....................
// save msg for reset
mConfigureMsg = msg;
//.....................
for (int i = 0; i <= kMaxRetry; ++i) {
if (i > 0) {
// Don't try to reclaim resource for the first time.
if (!mResourceManagerService->reclaimResource(resources)) {
break;
}
}
sp<AMessage> response;
err = PostAndAwaitResponse(msg, &response);
//.....................
}
return err;
}
case kWhatConfigure:
{
sp<AReplyToken> replyID;
CHECK(msg->senderAwaitsResponse(&replyID));


sp<RefBase> obj;
CHECK(msg->findObject("surface", &obj));

sp<AMessage> format;
CHECK(msg->findMessage("format", &format));

int32_t push;
if (msg->findInt32("push-blank-buffers-on-shutdown", &push) && push != 0) {
mFlags |= kFlagPushBlankBuffersOnShutdown;
}

if (obj != NULL) {
format->setObject("native-window", obj);
status_t err = handleSetSurface(static_cast<Surface *>(obj.get()));
if (err != OK) {
PostReplyWithError(replyID, err);
break;
}
} else {
handleSetSurface(NULL);
}

mReplyID = replyID;
setState(CONFIGURING);

void *crypto;

uint32_t flags;
CHECK(msg->findInt32("flags", (int32_t *)&flags));

if (flags & CONFIGURE_FLAG_ENCODE) {
format->setInt32("encoder", true);
mFlags |= kFlagIsEncoder;
}
// This is the key call
mCodec->initiateConfigureComponent(format);
break;
}
void ACodec::initiateConfigureComponent(const sp<AMessage> &msg) {
msg->setWhat(kWhatConfigureComponent);
msg->setTarget(this);
msg->post();
}
case ACodec::kWhatConfigureComponent:
{
onConfigureComponent(msg);
handled = true;
break;
}
bool ACodec::LoadedState::onConfigureComponent(
const sp<AMessage> &msg) {
ALOGV("onConfigureComponent");

CHECK(mCodec->mNode != 0);

status_t err = OK;
AString mime;
if (!msg->findString("mime", &mime)) {
err = BAD_VALUE;
} else {
err = mCodec->configureCodec(mime.c_str(), msg);
}
{
sp<AMessage> notify = mCodec->mNotify->dup();
notify->setInt32("what", CodecBase::kWhatComponentConfigured);
notify->setMessage("input-format", mCodec->mInputFormat);
notify->setMessage("output-format", mCodec->mOutputFormat);
notify->post();
}

return true;
}
case CodecBase::kWhatComponentConfigured:
{
if (mState == UNINITIALIZED || mState == INITIALIZED) {
// In case a kWhatError message came in and replied with error,
// we log a warning and ignore.
ALOGW("configure interrupted by error, current state %d", mState);
break;
}
CHECK_EQ(mState, CONFIGURING);

// reset input surface flag
mHaveInputSurface = false;

CHECK(msg->findMessage("input-format", &mInputFormat));
CHECK(msg->findMessage("output-format", &mOutputFormat));

int32_t usingSwRenderer;
if (mOutputFormat->findInt32("using-sw-renderer", &usingSwRenderer)
&& usingSwRenderer) {
mFlags |= kFlagUsesSoftwareRenderer;
}
setState(CONFIGURED);
(new AMessage)->postReply(mReplyID);
break;
}

configureCodec is where the most detailed decoder configuration happens. It deserves a dedicated deep dive when time permits; this post focuses on the overall flow:

status_t ACodec::configureCodec(
const char *mime, const sp<AMessage> &msg) {
int32_t encoder;
if (!msg->findInt32("encoder", &encoder)) {
encoder = false;
}

sp<AMessage> inputFormat = new AMessage();
sp<AMessage> outputFormat = mNotify->dup(); // will use this for kWhatOutputFormatChanged

mIsEncoder = encoder;

mInputMetadataType = kMetadataBufferTypeInvalid;
mOutputMetadataType = kMetadataBufferTypeInvalid;

status_t err = setComponentRole(encoder /* isEncoder */, mime);

if (err != OK) {
return err;
}

int32_t bitRate = 0;
// FLAC encoder doesn't need a bitrate, other encoders do
if (encoder && strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)
&& !msg->findInt32("bitrate", &bitRate)) {
return INVALID_OPERATION;
}

int32_t storeMeta;
if (encoder
&& msg->findInt32("store-metadata-in-buffers", &storeMeta)
&& storeMeta != 0) {
err = mOMX->storeMetaDataInBuffers(mNode, kPortIndexInput, OMX_TRUE, &mInputMetadataType);
if (err != OK) {
ALOGE("[%s] storeMetaDataInBuffers (input) failed w/ err %d",
mComponentName.c_str(), err);

return err;
}
// For this specific case we could be using camera source even if storeMetaDataInBuffers
// returns Gralloc source. Pretend that we are; this will force us to use nBufferSize.
if (mInputMetadataType == kMetadataBufferTypeGrallocSource) {
mInputMetadataType = kMetadataBufferTypeCameraSource;
}

uint32_t usageBits;
if (mOMX->getParameter(
mNode, (OMX_INDEXTYPE)OMX_IndexParamConsumerUsageBits,
&usageBits, sizeof(usageBits)) == OK) {
inputFormat->setInt32(
"using-sw-read-often", !!(usageBits & GRALLOC_USAGE_SW_READ_OFTEN));
}
}

int32_t prependSPSPPS = 0;
if (encoder
&& msg->findInt32("prepend-sps-pps-to-idr-frames", &prependSPSPPS)
&& prependSPSPPS != 0) {
OMX_INDEXTYPE index;
err = mOMX->getExtensionIndex(
mNode,
"OMX.google.android.index.prependSPSPPSToIDRFrames",
&index);

if (err == OK) {
PrependSPSPPSToIDRFramesParams params;
InitOMXParams(&params);
params.bEnable = OMX_TRUE;

err = mOMX->setParameter(
mNode, index, &params, sizeof(params));
}

if (err != OK) {
ALOGE("Encoder could not be configured to emit SPS/PPS before "
"IDR frames. (err %d)", err);

return err;
}
}

// Only enable metadata mode on encoder output if encoder can prepend
// sps/pps to idr frames, since in metadata mode the bitstream is in an
// opaque handle, to which we don't have access.
int32_t video = !strncasecmp(mime, "video/", 6);
mIsVideo = video;
if (encoder && video) {
OMX_BOOL enable = (OMX_BOOL) (prependSPSPPS
&& msg->findInt32("store-metadata-in-buffers-output", &storeMeta)
&& storeMeta != 0);

err = mOMX->storeMetaDataInBuffers(mNode, kPortIndexOutput, enable, &mOutputMetadataType);
if (err != OK) {
ALOGE("[%s] storeMetaDataInBuffers (output) failed w/ err %d",
mComponentName.c_str(), err);
}

if (!msg->findInt64(
"repeat-previous-frame-after",
&mRepeatFrameDelayUs)) {
mRepeatFrameDelayUs = -1ll;
}

if (!msg->findInt64("max-pts-gap-to-encoder", &mMaxPtsGapUs)) {
mMaxPtsGapUs = -1ll;
}

if (!msg->findFloat("max-fps-to-encoder", &mMaxFps)) {
mMaxFps = -1;
}

if (!msg->findInt64("time-lapse", &mTimePerCaptureUs)) {
mTimePerCaptureUs = -1ll;
}

if (!msg->findInt32(
"create-input-buffers-suspended",
(int32_t*)&mCreateInputBuffersSuspended)) {
mCreateInputBuffersSuspended = false;
}
}

// NOTE: we only use native window for video decoders
sp<RefBase> obj;
bool haveNativeWindow = msg->findObject("native-window", &obj)
&& obj != NULL && video && !encoder;
mLegacyAdaptiveExperiment = false;
if (video && !encoder) {
inputFormat->setInt32("adaptive-playback", false);

int32_t usageProtected;
if (msg->findInt32("protected", &usageProtected) && usageProtected) {
if (!haveNativeWindow) {
ALOGE("protected output buffers must be sent to an ANativeWindow");
return PERMISSION_DENIED;
}
mFlags |= kFlagIsGrallocUsageProtected;
mFlags |= kFlagPushBlankBuffersToNativeWindowOnShutdown;
}
}
if (haveNativeWindow) {
sp<ANativeWindow> nativeWindow =
static_cast<ANativeWindow *>(static_cast<Surface *>(obj.get()));

// START of temporary support for automatic FRC - THIS WILL BE REMOVED
int32_t autoFrc;
if (msg->findInt32("auto-frc", &autoFrc)) {
bool enabled = autoFrc;
OMX_CONFIG_BOOLEANTYPE config;
InitOMXParams(&config);
config.bEnabled = (OMX_BOOL)enabled;
status_t temp = mOMX->setConfig(
mNode, (OMX_INDEXTYPE)OMX_IndexConfigAutoFramerateConversion,
&config, sizeof(config));
if (temp == OK) {
outputFormat->setInt32("auto-frc", enabled);
} else if (enabled) {
ALOGI("codec does not support requested auto-frc (err %d)", temp);
}
}
// END of temporary support for automatic FRC

int32_t tunneled;
if (msg->findInt32("feature-tunneled-playback", &tunneled) &&
tunneled != 0) {
ALOGI("Configuring TUNNELED video playback.");
mTunneled = true;

int32_t audioHwSync = 0;
if (!msg->findInt32("audio-hw-sync", &audioHwSync)) {
ALOGW("No Audio HW Sync provided for video tunnel");
}
err = configureTunneledVideoPlayback(audioHwSync, nativeWindow);
if (err != OK) {
ALOGE("configureTunneledVideoPlayback(%d,%p) failed!",
audioHwSync, nativeWindow.get());
return err;
}

int32_t maxWidth = 0, maxHeight = 0;
if (msg->findInt32("max-width", &maxWidth) &&
msg->findInt32("max-height", &maxHeight)) {

err = mOMX->prepareForAdaptivePlayback(
mNode, kPortIndexOutput, OMX_TRUE, maxWidth, maxHeight);
if (err != OK) {
ALOGW("[%s] prepareForAdaptivePlayback failed w/ err %d",
mComponentName.c_str(), err);
// allow failure
err = OK;
} else {
inputFormat->setInt32("max-width", maxWidth);
inputFormat->setInt32("max-height", maxHeight);
inputFormat->setInt32("adaptive-playback", true);
}
}
} else {
ALOGV("Configuring CPU controlled video playback.");
mTunneled = false;

// Explicity reset the sideband handle of the window for
// non-tunneled video in case the window was previously used
// for a tunneled video playback.
err = native_window_set_sideband_stream(nativeWindow.get(), NULL);
if (err != OK) {
ALOGE("set_sideband_stream(NULL) failed! (err %d).", err);
return err;
}

// Always try to enable dynamic output buffers on native surface
err = mOMX->storeMetaDataInBuffers(
mNode, kPortIndexOutput, OMX_TRUE, &mOutputMetadataType);
if (err != OK) {
ALOGE("[%s] storeMetaDataInBuffers failed w/ err %d",
mComponentName.c_str(), err);

// if adaptive playback has been requested, try JB fallback
// NOTE: THIS FALLBACK MECHANISM WILL BE REMOVED DUE TO ITS
// LARGE MEMORY REQUIREMENT

// we will not do adaptive playback on software accessed
// surfaces as they never had to respond to changes in the
// crop window, and we don't trust that they will be able to.
int usageBits = 0;
bool canDoAdaptivePlayback;

if (nativeWindow->query(
nativeWindow.get(),
NATIVE_WINDOW_CONSUMER_USAGE_BITS,
&usageBits) != OK) {
canDoAdaptivePlayback = false;
} else {
canDoAdaptivePlayback =
(usageBits &
(GRALLOC_USAGE_SW_READ_MASK |
GRALLOC_USAGE_SW_WRITE_MASK)) == 0;
}

int32_t maxWidth = 0, maxHeight = 0;
if (canDoAdaptivePlayback &&
msg->findInt32("max-width", &maxWidth) &&
msg->findInt32("max-height", &maxHeight)) {
ALOGV("[%s] prepareForAdaptivePlayback(%dx%d)",
mComponentName.c_str(), maxWidth, maxHeight);

err = mOMX->prepareForAdaptivePlayback(
mNode, kPortIndexOutput, OMX_TRUE, maxWidth,
maxHeight);
ALOGW_IF(err != OK,
"[%s] prepareForAdaptivePlayback failed w/ err %d",
mComponentName.c_str(), err);

if (err == OK) {
inputFormat->setInt32("max-width", maxWidth);
inputFormat->setInt32("max-height", maxHeight);
inputFormat->setInt32("adaptive-playback", true);
}
}
// allow failure
err = OK;
} else {
ALOGV("[%s] storeMetaDataInBuffers succeeded",
mComponentName.c_str());
CHECK(storingMetadataInDecodedBuffers());
mLegacyAdaptiveExperiment = ADebug::isExperimentEnabled(
"legacy-adaptive", !msg->contains("no-experiments"));

inputFormat->setInt32("adaptive-playback", true);
}

int32_t push;
if (msg->findInt32("push-blank-buffers-on-shutdown", &push)
&& push != 0) {
mFlags |= kFlagPushBlankBuffersToNativeWindowOnShutdown;
}
}

int32_t rotationDegrees;
if (msg->findInt32("rotation-degrees", &rotationDegrees)) {
mRotationDegrees = rotationDegrees;
} else {
mRotationDegrees = 0;
}
}

if (video) {
// determine need for software renderer
bool usingSwRenderer = false;
if (haveNativeWindow && mComponentName.startsWith("OMX.google.")) {
usingSwRenderer = true;
haveNativeWindow = false;
}

if (encoder) {
err = setupVideoEncoder(mime, msg);
} else {
err = setupVideoDecoder(mime, msg, haveNativeWindow);
}

if (err != OK) {
return err;
}

if (haveNativeWindow) {
mNativeWindow = static_cast<Surface *>(obj.get());
}

// initialize native window now to get actual output format
// TODO: this is needed for some encoders even though they don't use native window
err = initNativeWindow();
if (err != OK) {
return err;
}

// fallback for devices that do not handle flex-YUV for native buffers
if (haveNativeWindow) {
int32_t requestedColorFormat = OMX_COLOR_FormatUnused;
if (msg->findInt32("color-format", &requestedColorFormat) &&
requestedColorFormat == OMX_COLOR_FormatYUV420Flexible) {
status_t err = getPortFormat(kPortIndexOutput, outputFormat);
if (err != OK) {
return err;
}
int32_t colorFormat = OMX_COLOR_FormatUnused;
OMX_U32 flexibleEquivalent = OMX_COLOR_FormatUnused;
if (!outputFormat->findInt32("color-format", &colorFormat)) {
ALOGE("ouptut port did not have a color format (wrong domain?)");
return BAD_VALUE;
}
ALOGD("[%s] Requested output format %#x and got %#x.",
mComponentName.c_str(), requestedColorFormat, colorFormat);
if (!isFlexibleColorFormat(
mOMX, mNode, colorFormat, haveNativeWindow, &flexibleEquivalent)
|| flexibleEquivalent != (OMX_U32)requestedColorFormat) {
// device did not handle flex-YUV request for native window, fall back
// to SW renderer
ALOGI("[%s] Falling back to software renderer", mComponentName.c_str());
mNativeWindow.clear();
mNativeWindowUsageBits = 0;
haveNativeWindow = false;
usingSwRenderer = true;
if (storingMetadataInDecodedBuffers()) {
err = mOMX->storeMetaDataInBuffers(
mNode, kPortIndexOutput, OMX_FALSE, &mOutputMetadataType);
mOutputMetadataType = kMetadataBufferTypeInvalid; // just in case
// TODO: implement adaptive-playback support for bytebuffer mode.
// This is done by SW codecs, but most HW codecs don't support it.
inputFormat->setInt32("adaptive-playback", false);
}
if (err == OK) {
err = mOMX->enableGraphicBuffers(mNode, kPortIndexOutput, OMX_FALSE);
}
if (mFlags & kFlagIsGrallocUsageProtected) {
// fallback is not supported for protected playback
err = PERMISSION_DENIED;
} else if (err == OK) {
err = setupVideoDecoder(mime, msg, false);
}
}
}
}

if (usingSwRenderer) {
outputFormat->setInt32("using-sw-renderer", 1);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
int32_t numChannels, sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
// Since we did not always check for these, leave them optional
// and have the decoder figure it all out.
err = OK;
} else {
err = setupRawAudioFormat(
encoder ? kPortIndexInput : kPortIndexOutput,
sampleRate,
numChannels);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC)) {
int32_t numChannels, sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
int32_t isADTS, aacProfile;
int32_t sbrMode;
int32_t maxOutputChannelCount;
int32_t pcmLimiterEnable;
drcParams_t drc;
if (!msg->findInt32("is-adts", &isADTS)) {
isADTS = 0;
}
if (!msg->findInt32("aac-profile", &aacProfile)) {
aacProfile = OMX_AUDIO_AACObjectNull;
}
if (!msg->findInt32("aac-sbr-mode", &sbrMode)) {
sbrMode = -1;
}

if (!msg->findInt32("aac-max-output-channel_count", &maxOutputChannelCount)) {
maxOutputChannelCount = -1;
}
if (!msg->findInt32("aac-pcm-limiter-enable", &pcmLimiterEnable)) {
// value is unknown
pcmLimiterEnable = -1;
}
if (!msg->findInt32("aac-encoded-target-level", &drc.encodedTargetLevel)) {
// value is unknown
drc.encodedTargetLevel = -1;
}
if (!msg->findInt32("aac-drc-cut-level", &drc.drcCut)) {
// value is unknown
drc.drcCut = -1;
}
if (!msg->findInt32("aac-drc-boost-level", &drc.drcBoost)) {
// value is unknown
drc.drcBoost = -1;
}
if (!msg->findInt32("aac-drc-heavy-compression", &drc.heavyCompression)) {
// value is unknown
drc.heavyCompression = -1;
}
if (!msg->findInt32("aac-target-ref-level", &drc.targetRefLevel)) {
// value is unknown
drc.targetRefLevel = -1;
}

err = setupAACCodec(
encoder, numChannels, sampleRate, bitRate, aacProfile,
isADTS != 0, sbrMode, maxOutputChannelCount, drc,
pcmLimiterEnable);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)) {
err = setupAMRCodec(encoder, false /* isWAMR */, bitRate);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_WB)) {
err = setupAMRCodec(encoder, true /* isWAMR */, bitRate);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_G711_ALAW)
|| !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_G711_MLAW)) {
// These are PCM-like formats with a fixed sample rate but
// a variable number of channels.

int32_t numChannels;
if (!msg->findInt32("channel-count", &numChannels)) {
err = INVALID_OPERATION;
} else {
int32_t sampleRate;
if (!msg->findInt32("sample-rate", &sampleRate)) {
sampleRate = 8000;
}
err = setupG711Codec(encoder, sampleRate, numChannels);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)) {
int32_t numChannels = 0, sampleRate = 0, compressionLevel = -1;
if (encoder &&
(!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate))) {
ALOGE("missing channel count or sample rate for FLAC encoder");
err = INVALID_OPERATION;
} else {
if (encoder) {
if (!msg->findInt32(
"complexity", &compressionLevel) &&
!msg->findInt32(
"flac-compression-level", &compressionLevel)) {
compressionLevel = 5; // default FLAC compression level
} else if (compressionLevel < 0) {
ALOGW("compression level %d outside [0..8] range, "
"using 0",
compressionLevel);
compressionLevel = 0;
} else if (compressionLevel > 8) {
ALOGW("compression level %d outside [0..8] range, "
"using 8",
compressionLevel);
compressionLevel = 8;
}
}
err = setupFlacCodec(
encoder, numChannels, sampleRate, compressionLevel);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
int32_t numChannels, sampleRate;
if (encoder
|| !msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
err = setupRawAudioFormat(kPortIndexInput, sampleRate, numChannels);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AC3)) {
int32_t numChannels;
int32_t sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
err = setupAC3Codec(encoder, numChannels, sampleRate);
}
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_EAC3)) {
int32_t numChannels;
int32_t sampleRate;
if (!msg->findInt32("channel-count", &numChannels)
|| !msg->findInt32("sample-rate", &sampleRate)) {
err = INVALID_OPERATION;
} else {
err = setupEAC3Codec(encoder, numChannels, sampleRate);
}
}

if (err != OK) {
return err;
}

if (!msg->findInt32("encoder-delay", &mEncoderDelay)) {
mEncoderDelay = 0;
}

if (!msg->findInt32("encoder-padding", &mEncoderPadding)) {
mEncoderPadding = 0;
}

if (msg->findInt32("channel-mask", &mChannelMask)) {
mChannelMaskPresent = true;
} else {
mChannelMaskPresent = false;
}

int32_t maxInputSize;
if (msg->findInt32("max-input-size", &maxInputSize)) {
err = setMinBufferSize(kPortIndexInput, (size_t)maxInputSize);
} else if (!strcmp("OMX.Nvidia.aac.decoder", mComponentName.c_str())) {
err = setMinBufferSize(kPortIndexInput, 8192); // XXX
}

int32_t priority;
if (msg->findInt32("priority", &priority)) {
err = setPriority(priority);
}

int32_t rateInt = -1;
float rateFloat = -1;
if (!msg->findFloat("operating-rate", &rateFloat)) {
msg->findInt32("operating-rate", &rateInt);
rateFloat = (float)rateInt; // 16MHz (FLINTMAX) is OK for upper bound.
}
if (rateFloat > 0) {
err = setOperatingRate(rateFloat, video);
}

mBaseOutputFormat = outputFormat;

err = getPortFormat(kPortIndexInput, inputFormat);
if (err == OK) {
err = getPortFormat(kPortIndexOutput, outputFormat);
if (err == OK) {
mInputFormat = inputFormat;
mOutputFormat = outputFormat;
}
}
return err;
}
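All of the msg->findInt32("channel-count", ...) style lookups above read keys that the caller placed into the format. From the NDK these correspond to the AMEDIAFORMAT_KEY_* constants; a hypothetical audio format that would exercise the AAC branch of configureCodec might be built like this:

// Hypothetical AAC decoder format; the keys correspond to the strings that
// configureCodec() queries above ("channel-count", "sample-rate", "is-adts").
#include <media/NdkMediaFormat.h>

AMediaFormat *makeAacFormat() {
    AMediaFormat *format = AMediaFormat_new();
    AMediaFormat_setString(format, AMEDIAFORMAT_KEY_MIME, "audio/mp4a-latm");
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_CHANNEL_COUNT, 2);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_SAMPLE_RATE, 44100);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_IS_ADTS, 1);
    return format;
}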

At this point the decoder's initialization and configuration are complete. Let's look at the decoder's start phase:

status_t MediaCodec::start() {
sp<AMessage> msg = new AMessage(kWhatStart, this);

status_t err;
Vector<MediaResource> resources;
const char *type = (mFlags & kFlagIsSecure) ?
kResourceSecureCodec : kResourceNonSecureCodec;
const char *subtype = mIsVideo ? kResourceVideoCodec : kResourceAudioCodec;
resources.push_back(MediaResource(String8(type), String8(subtype), 1));
// Don't know the buffer size at this point, but it's fine to use 1 because
// the reclaimResource call doesn't consider the requester's buffer size for now.
resources.push_back(MediaResource(String8(kResourceGraphicMemory), 1));
for (int i = 0; i <= kMaxRetry; ++i) {
if (i > 0) {
// Don't try to reclaim resource for the first time.
if (!mResourceManagerService->reclaimResource(resources)) {
break;
}
// Recover codec from previous error before retry start.
err = reset();
if (err != OK) {
ALOGE("retrying start: failed to reset codec");
break;
}
sp<AMessage> response;
err = PostAndAwaitResponse(mConfigureMsg, &response);
if (err != OK) {
ALOGE("retrying start: failed to configure codec");
break;
}
}
sp<AMessage> response;
err = PostAndAwaitResponse(msg, &response);
if (!isResourceError(err)) {
break;
}
}
return err;
}
case kWhatStart:
{
sp<AReplyToken> replyID;
CHECK(msg->senderAwaitsResponse(&replyID));

if (mState == FLUSHED) {
setState(STARTED);
if (mHavePendingInputBuffers) {
onInputBufferAvailable();
mHavePendingInputBuffers = false;
}
// This is the part we care about
mCodec->signalResume();
//..................
PostReplyWithError(replyID, OK);
break;
} else if (mState != CONFIGURED) {
PostReplyWithError(replyID, INVALID_OPERATION);
break;
}

mReplyID = replyID;
setState(STARTING);

mCodec->initiateStart();
break;
}

First, initiateStart is called to initialize the decoder state:

void ACodec::initiateStart() {
(new AMessage(kWhatStart, this))->post();
}
case ACodec::kWhatStart:
{
onStart();
handled = true;
break;
}
void ACodec::LoadedState::onStart() {
ALOGV("onStart");

status_t err = mCodec->mOMX->sendCommand(mCodec->mNode, OMX_CommandStateSet, OMX_StateIdle);
if (err != OK) {
mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
} else {
mCodec->changeState(mCodec->mLoadedToIdleState);
}
}

Then the codec starts fetching data for decoding:

void ACodec::signalResume() {
(new AMessage(kWhatResume, this))->post();
}
case kWhatResume:
{
resume();
handled = true;
break;
}
void ACodec::ExecutingState::resume() {

submitOutputBuffers();
// Post all available input buffers
if (mCodec->mBuffers[kPortIndexInput].size() == 0u) {
ALOGW("[%s] we don't have any input buffers to resume", mCodec->mComponentName.c_str());
}

for (size_t i = 0; i < mCodec->mBuffers[kPortIndexInput].size(); i++) {
BufferInfo *info = &mCodec->mBuffers[kPortIndexInput].editItemAt(i);
if (info->mStatus == BufferInfo::OWNED_BY_US) {
postFillThisBuffer(info);
}
}
mActive = true;
}
void ACodec::BaseState::postFillThisBuffer(BufferInfo *info) {
if (mCodec->mPortEOS[kPortIndexInput]) {
return;
}

CHECK_EQ((int)info->mStatus, (int)BufferInfo::OWNED_BY_US);
sp<AMessage> notify = mCodec->mNotify->dup();
notify->setInt32("what", CodecBase::kWhatFillThisBuffer);
notify->setInt32("buffer-id", info->mBufferID);
info->mData->meta()->clear();
notify->setBuffer("buffer", info->mData);
sp<AMessage> reply = new AMessage(kWhatInputBufferFilled, mCodec);
reply->setInt32("buffer-id", info->mBufferID);
notify->setMessage("reply", reply);
notify->post();
info->mStatus = BufferInfo::OWNED_BY_UPSTREAM;
}
case CodecBase::kWhatFillThisBuffer:
{

//..........
if (mFlags & kFlagIsAsync) {
if (!mHaveInputSurface) {
if (mState == FLUSHED) {
mHavePendingInputBuffers = true;
} else {
onInputBufferAvailable();
}
}
} else if (mFlags & kFlagDequeueInputPending) {
CHECK(handleDequeueInputBuffer(mDequeueInputReplyID));
++mDequeueInputTimeoutGeneration;
mFlags &= ~kFlagDequeueInputPending;
mDequeueInputReplyID = 0;
} else {
postActivityNotificationIfPossible();
}
break;
}
void MediaCodec::onInputBufferAvailable() {
int32_t index;
while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
sp<AMessage> msg = mCallback->dup();
msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
msg->setInt32("index", index);
msg->post();
}
}
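On the application side this same input-buffer handshake surfaces as dequeueInputBuffer/getInputBuffer/queueInputBuffer. A minimal synchronous feeding step via the NDK, where readSample is a hypothetical placeholder for whatever supplies compressed samples:

// App-side analogue of the fill-this-buffer cycle (synchronous NDK API).
// readSample() is a hypothetical placeholder that fills `dst` and returns
// the sample size and presentation time, or a negative size at end of stream.
#include <cstdint>
#include <sys/types.h>
#include <media/NdkMediaCodec.h>

extern ssize_t readSample(uint8_t *dst, size_t capacity, int64_t *ptsUs);

bool feedOneInputBuffer(AMediaCodec *codec) {
    ssize_t index = AMediaCodec_dequeueInputBuffer(codec, 10000 /* us */);
    if (index < 0) return true;  // no buffer available yet, try again later

    size_t capacity = 0;
    uint8_t *buf = AMediaCodec_getInputBuffer(codec, index, &capacity);

    int64_t ptsUs = 0;
    ssize_t size = readSample(buf, capacity, &ptsUs);
    if (size < 0) {  // end of stream: queue an empty EOS buffer
        AMediaCodec_queueInputBuffer(codec, index, 0, 0, 0,
                                     AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
        return false;
    }
    AMediaCodec_queueInputBuffer(codec, index, 0, size, ptsUs, 0);
    return true;
}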

Remember where this mCallback came from?

void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {

//.................
sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
mCodec->setCallback(reply);
//..................
}
status_t MediaCodec::setCallback(const sp<AMessage> &callback) {
sp<AMessage> msg = new AMessage(kWhatSetCallback, this);
msg->setMessage("callback", callback);

sp<AMessage> response;
return PostAndAwaitResponse(msg, &response);
}
case kWhatSetCallback:
{
sp<AReplyToken> replyID;
CHECK(msg->senderAwaitsResponse(&replyID));
sp<AMessage> callback;
CHECK(msg->findMessage("callback", &callback));

mCallback = callback;

if (mCallback != NULL) {
mFlags |= kFlagIsAsync;
} else {
mFlags &= ~kFlagIsAsync;
}

sp<AMessage> response = new AMessage;
response->postReply(replyID);
break;
}

So from the above we know that what runs next is the CB_INPUT_AVAILABLE branch under kWhatCodecNotify:

case MediaCodec::CB_INPUT_AVAILABLE:
{
int32_t index;
CHECK(msg->findInt32("index", &index));

handleAnInputBuffer(index);
break;
}
bool NuPlayer::Decoder::handleAnInputBuffer(size_t index) {
if (isDiscontinuityPending()) {
return false;
}

sp<ABuffer> buffer;
mCodec->getInputBuffer(index, &buffer);

if (buffer == NULL) {
handleError(UNKNOWN_ERROR);
return false;
}

if (index >= mInputBuffers.size()) {
for (size_t i = mInputBuffers.size(); i <= index; ++i) {
mInputBuffers.add();
mMediaBuffers.add();
mInputBufferIsDequeued.add();
mMediaBuffers.editItemAt(i) = NULL;
mInputBufferIsDequeued.editItemAt(i) = false;
}
}
mInputBuffers.editItemAt(index) = buffer;

//CHECK_LT(bufferIx, mInputBuffers.size());

if (mMediaBuffers[index] != NULL) {
mMediaBuffers[index]->release();
mMediaBuffers.editItemAt(index) = NULL;
}
mInputBufferIsDequeued.editItemAt(index) = true;

if (!mCSDsToSubmit.isEmpty()) {
sp<AMessage> msg = new AMessage();
msg->setSize("buffer-ix", index);

sp<ABuffer> buffer = mCSDsToSubmit.itemAt(0);
ALOGI("[%s] resubmitting CSD", mComponentName.c_str());
msg->setBuffer("buffer", buffer);
mCSDsToSubmit.removeAt(0);
CHECK(onInputBufferFetched(msg));
return true;
}

while (!mPendingInputMessages.empty()) {
sp<AMessage> msg = *mPendingInputMessages.begin();
if (!onInputBufferFetched(msg)) {
break;
}
mPendingInputMessages.erase(mPendingInputMessages.begin());
}

if (!mInputBufferIsDequeued.editItemAt(index)) {
return true;
}

mDequeuedInputBuffers.push_back(index);

onRequestInputBuffers();
return true;
}
void NuPlayer::DecoderBase::onRequestInputBuffers() {
if (mRequestInputBuffersPending) {
return;
}

// doRequestBuffers() return true if we should request more data
if (doRequestBuffers()) {
mRequestInputBuffersPending = true;

sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
msg->post(10 * 1000ll);
}
}
bool NuPlayer::Decoder::doRequestBuffers() {
// mRenderer is only NULL if we have a legacy widevine source that
// is not yet ready. In this case we must not fetch input.
if (isDiscontinuityPending() || mRenderer == NULL) {
return false;
}
status_t err = OK;
while (err == OK && !mDequeuedInputBuffers.empty()) {
size_t bufferIx = *mDequeuedInputBuffers.begin();
sp<AMessage> msg = new AMessage();
msg->setSize("buffer-ix", bufferIx);
err = fetchInputData(msg);
if (err != OK && err != ERROR_END_OF_STREAM) {
// if EOS, need to queue EOS buffer
break;
}
mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());

if (!mPendingInputMessages.empty()
|| !onInputBufferFetched(msg)) {
mPendingInputMessages.push_back(msg);
}
}

return err == -EWOULDBLOCK
&& mSource->feedMoreTSData() == OK;
}
status_t NuPlayer::Decoder::fetchInputData(sp<AMessage> &reply) {
sp<ABuffer> accessUnit;
bool dropAccessUnit;
do {
status_t err = mSource->dequeueAccessUnit(mIsAudio, &accessUnit);

if (err == -EWOULDBLOCK) {
return err;
} else if (err != OK) {
if (err == INFO_DISCONTINUITY) {
int32_t type;
CHECK(accessUnit->meta()->findInt32("discontinuity", &type));

bool formatChange =
(mIsAudio &&
(type & ATSParser::DISCONTINUITY_AUDIO_FORMAT))
|| (!mIsAudio &&
(type & ATSParser::DISCONTINUITY_VIDEO_FORMAT));

bool timeChange = (type & ATSParser::DISCONTINUITY_TIME) != 0;

ALOGI("%s discontinuity (format=%d, time=%d)",
mIsAudio ? "audio" : "video", formatChange, timeChange);

bool seamlessFormatChange = false;
sp<AMessage> newFormat = mSource->getFormat(mIsAudio);
if (formatChange) {
seamlessFormatChange =
supportsSeamlessFormatChange(newFormat);
// treat seamless format change separately
formatChange = !seamlessFormatChange;
}

// For format or time change, return EOS to queue EOS input,
// then wait for EOS on output.
if (formatChange /* not seamless */) {
mFormatChangePending = true;
err = ERROR_END_OF_STREAM;
} else if (timeChange) {
rememberCodecSpecificData(newFormat);
mTimeChangePending = true;
err = ERROR_END_OF_STREAM;
} else if (seamlessFormatChange) {
// reuse existing decoder and don't flush
rememberCodecSpecificData(newFormat);
continue;
} else {
// This stream is unaffected by the discontinuity
return -EWOULDBLOCK;
}
}

// reply should only be returned without a buffer set
// when there is an error (including EOS)
CHECK(err != OK);

reply->setInt32("err", err);
return ERROR_END_OF_STREAM;
}

dropAccessUnit = false;
if (!mIsAudio
&& !mIsSecure
&& mRenderer->getVideoLateByUs() > 100000ll
&& mIsVideoAVC
&& !IsAVCReferenceFrame(accessUnit)) {
dropAccessUnit = true;
++mNumInputFramesDropped;
}
} while (dropAccessUnit);

// ALOGV("returned a valid buffer of %s data", mIsAudio ? "mIsAudio" : "video");
#if 0
int64_t mediaTimeUs;
CHECK(accessUnit->meta()->findInt64("timeUs", &mediaTimeUs));
ALOGV("[%s] feeding input buffer at media time %.3f",
mIsAudio ? "audio" : "video",
mediaTimeUs / 1E6);
#endif

if (mCCDecoder != NULL) {
mCCDecoder->decode(accessUnit);
}

reply->setBuffer("buffer", accessUnit);

return OK;
}

Next let's look at how the segment index list is obtained, starting with onChangeConfiguration3. The code here is quite long, so read through it if you're interested; its main tasks are the following:

  1. Determine whether the audio and video streams have changed
  2. Update resumeMask based on the current mFetcherInfos
  3. Create a new FetcherInfo for any newly required fetcher
  4. Start the corresponding fetchers
  5. Check the current bandwidth and switch variants based on it

The most critical code, however, is the call to fetcher->startAsync:

void LiveSession::onChangeConfiguration3(const sp<AMessage> &msg) {
//........
fetcher->startAsync(
sources[kAudioIndex],
sources[kVideoIndex],
sources[kSubtitleIndex],
getMetadataSource(sources, mNewStreamMask, switching),
startTime.mTimeUs < 0 ? mLastSeekTimeUs : startTime.mTimeUs,
startTime.getSegmentTimeUs(),
startTime.mSeq,
seekMode);
//.......
}

void PlaylistFetcher::startAsync(
const sp<AnotherPacketSource> &audioSource,
const sp<AnotherPacketSource> &videoSource,
const sp<AnotherPacketSource> &subtitleSource,
const sp<AnotherPacketSource> &metadataSource,
int64_t startTimeUs,
int64_t segmentStartTimeUs,
int32_t startDiscontinuitySeq,
LiveSession::SeekMode seekMode) {


sp<AMessage> msg = new AMessage(kWhatStart, this);
//.................
msg->post();
}
case kWhatStart:
{
status_t err = onStart(msg);

sp<AMessage> notify = mNotify->dup();
notify->setInt32("what", kWhatStarted);
notify->setInt32("err", err);
notify->post();
break;
}
status_t PlaylistFetcher::onStart(const sp<AMessage> &msg) {

//..........
if (streamTypeMask & LiveSession::STREAMTYPE_AUDIO) {
void *ptr;
CHECK(msg->findPointer("audioSource", &ptr));
mPacketSources.add(LiveSession::STREAMTYPE_AUDIO,static_cast<AnotherPacketSource *>(ptr));
}

if (streamTypeMask & LiveSession::STREAMTYPE_VIDEO) {
void *ptr;
CHECK(msg->findPointer("videoSource", &ptr));

mPacketSources.add(LiveSession::STREAMTYPE_VIDEO,static_cast<AnotherPacketSource *>(ptr));
}

if (streamTypeMask & LiveSession::STREAMTYPE_SUBTITLES) {
void *ptr;
CHECK(msg->findPointer("subtitleSource", &ptr));
mPacketSources.add(LiveSession::STREAMTYPE_SUBTITLES,static_cast<AnotherPacketSource *>(ptr));
}

void *ptr;
// metadataSource is not part of streamTypeMask
if ((streamTypeMask & (LiveSession::STREAMTYPE_AUDIO | LiveSession::STREAMTYPE_VIDEO))
&& msg->findPointer("metadataSource", &ptr)) {
mPacketSources.add(LiveSession::STREAMTYPE_METADATA,static_cast<AnotherPacketSource *>(ptr));
}

//...............

postMonitorQueue();

return OK;
}

void PlaylistFetcher::postMonitorQueue(int64_t delayUs, int64_t minDelayUs) {
int64_t maxDelayUs = delayUsToRefreshPlaylist();
if (maxDelayUs < minDelayUs) {
maxDelayUs = minDelayUs;
}
if (delayUs > maxDelayUs) {
FLOGV("Need to refresh playlist in %lld", (long long)maxDelayUs);
delayUs = maxDelayUs;
}
sp<AMessage> msg = new AMessage(kWhatMonitorQueue, this);
msg->setInt32("generation", mMonitorQueueGeneration);
msg->post(delayUs);
}
case kWhatMonitorQueue:
case kWhatDownloadNext:
{
int32_t generation;
CHECK(msg->findInt32("generation", &generation));

if (generation != mMonitorQueueGeneration) {
// Stale event
break;
}

if (msg->what() == kWhatMonitorQueue) {
onMonitorQueue();
} else {
onDownloadNext();
}
break;
}
void PlaylistFetcher::onMonitorQueue() {

//.......................
if (finalResult == OK && bufferedDurationUs < kMinBufferedDurationUs) {
FLOGV("monitoring, buffered=%lld < %lld",
(long long)bufferedDurationUs, (long long)kMinBufferedDurationUs);

// delay the next download slightly; hopefully this gives other concurrent fetchers
// a better chance to run.
// onDownloadNext();
sp<AMessage> msg = new AMessage(kWhatDownloadNext, this);
msg->setInt32("generation", mMonitorQueueGeneration);
msg->post(1000l);
} else {
// We'd like to maintain buffering above durationToBufferUs, so try
// again when buffer just about to go below durationToBufferUs
// (or after targetDurationUs / 2, whichever is smaller).
int64_t delayUs = bufferedDurationUs - kMinBufferedDurationUs + 1000000ll;
if (delayUs > targetDurationUs / 2) {
delayUs = targetDurationUs / 2;
}

FLOGV("pausing for %lld, buffered=%lld > %lld",
(long long)delayUs,
(long long)bufferedDurationUs,
(long long)kMinBufferedDurationUs);

postMonitorQueue(delayUs);
}
}
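The back-off arithmetic in the else branch deserves a worked example. Treat the numbers as hypothetical, since the constants vary across versions: with a 10 s minimum buffer, a 6 s target segment duration and 14 s currently buffered, delayUs = 14 - 10 + 1 = 5 s, capped at targetDurationUs / 2 = 3 s, so the fetcher re-checks in 3 s:

// Worked sketch of the onMonitorQueue() back-off (hypothetical constants).
#include <algorithm>
#include <cstdint>

int64_t nextMonitorDelayUs(int64_t bufferedDurationUs,
                           int64_t minBufferedDurationUs,
                           int64_t targetDurationUs) {
    // try again just before the buffer would drop below the minimum...
    int64_t delayUs = bufferedDurationUs - minBufferedDurationUs + 1000000ll;
    // ...but never sleep longer than half a segment duration
    return std::min(delayUs, targetDurationUs / 2);
}
// nextMonitorDelayUs(14000000, 10000000, 6000000) == 3000000, i.e. 3 seconds.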

initDownloadState is used to obtain the corresponding URI (and item metadata) before fetching the TS segments:

bool PlaylistFetcher::initDownloadState(
AString &uri,
sp<AMessage> &itemMeta,
int32_t &firstSeqNumberInPlaylist,
int32_t &lastSeqNumberInPlaylist) {
status_t err = refreshPlaylist();
firstSeqNumberInPlaylist = 0;
lastSeqNumberInPlaylist = 0;
bool discontinuity = false;

if (mPlaylist != NULL) {
mPlaylist->getSeqNumberRange(
&firstSeqNumberInPlaylist, &lastSeqNumberInPlaylist);

if (mDiscontinuitySeq < 0) {
mDiscontinuitySeq = mPlaylist->getDiscontinuitySeq();
}
}

mSegmentFirstPTS = -1ll;

if (mPlaylist != NULL && mSeqNumber < 0) {
CHECK_GE(mStartTimeUs, 0ll);

if (mSegmentStartTimeUs < 0) {
if (!mPlaylist->isComplete() && !mPlaylist->isEvent()) {
// If this is a live session, start 3 segments from the end on connect
mSeqNumber = lastSeqNumberInPlaylist - 3;
if (mSeqNumber < firstSeqNumberInPlaylist) {
mSeqNumber = firstSeqNumberInPlaylist;
}
} else {
// When seeking mSegmentStartTimeUs is unavailable (< 0), we
// use mStartTimeUs (client supplied timestamp) to determine both start segment
// and relative position inside a segment
mSeqNumber = getSeqNumberForTime(mStartTimeUs);
mStartTimeUs -= getSegmentStartTimeUs(mSeqNumber);
}
mStartTimeUsRelative = true;
FLOGV("Initial sequence number for time %lld is %d from (%d .. %d)",
(long long)mStartTimeUs, mSeqNumber, firstSeqNumberInPlaylist,
lastSeqNumberInPlaylist);
} else {
// When adapting or track switching, mSegmentStartTimeUs (relative
// to media time 0) is used to determine the start segment; mStartTimeUs (absolute
// timestamps coming from the media container) is used to determine the position
// inside a segment.
if (mStreamTypeMask != LiveSession::STREAMTYPE_SUBTITLES
&& mSeekMode != LiveSession::kSeekModeNextSample) {
// avoid double fetch/decode
// Use (mSegmentStartTimeUs + 1/2 * targetDurationUs) to search
// for the starting segment in new variant.
// If the two variants' segments are aligned, this gives the
// next segment. If they're not aligned, this gives the segment
// that overlaps no more than 1/2 * targetDurationUs.
mSeqNumber = getSeqNumberForTime(mSegmentStartTimeUs
+ mPlaylist->getTargetDuration() / 2);
} else {
mSeqNumber = getSeqNumberForTime(mSegmentStartTimeUs);
}
ssize_t minSeq = getSeqNumberForDiscontinuity(mDiscontinuitySeq);
if (mSeqNumber < minSeq) {
mSeqNumber = minSeq;
}

if (mSeqNumber < firstSeqNumberInPlaylist) {
mSeqNumber = firstSeqNumberInPlaylist;
}

if (mSeqNumber > lastSeqNumberInPlaylist) {
mSeqNumber = lastSeqNumberInPlaylist;
}
FLOGV("Initial sequence number is %d from (%d .. %d)",
mSeqNumber, firstSeqNumberInPlaylist,
lastSeqNumberInPlaylist);
}
}

// if mPlaylist is NULL then err must be non-OK; but the other way around might not be true
if (mSeqNumber < firstSeqNumberInPlaylist
|| mSeqNumber > lastSeqNumberInPlaylist
|| err != OK) {
if ((err != OK || !mPlaylist->isComplete()) && mNumRetries < kMaxNumRetries) {
++mNumRetries;

if (mSeqNumber > lastSeqNumberInPlaylist || err != OK) {
// make sure we reach this retry logic on refresh failures
// by adding an err != OK clause to all enclosing if's.

// refresh in increasing fraction (1/2, 1/3, ...) of the
// playlist's target duration or 3 seconds, whichever is less
int64_t delayUs = kMaxMonitorDelayUs;
if (mPlaylist != NULL) {
delayUs = mPlaylist->size() * mPlaylist->getTargetDuration()
/ (1 + mNumRetries);
}
if (delayUs > kMaxMonitorDelayUs) {
delayUs = kMaxMonitorDelayUs;
}
FLOGV("sequence number high: %d from (%d .. %d), "
"monitor in %lld (retry=%d)",
mSeqNumber, firstSeqNumberInPlaylist,
lastSeqNumberInPlaylist, (long long)delayUs, mNumRetries);
postMonitorQueue(delayUs);
return false;
}

if (err != OK) {
notifyError(err);
return false;
}

// we've missed the boat, let's start 3 segments prior to the latest sequence
// number available and signal a discontinuity.

ALOGI("We've missed the boat, restarting playback."
" mStartup=%d, was looking for %d in %d-%d",
mStartup, mSeqNumber, firstSeqNumberInPlaylist,
lastSeqNumberInPlaylist);
if (mStopParams != NULL) {
// we should have kept on fetching until we hit the boundaries in mStopParams,
// but since the segments we are supposed to fetch have already rolled off
// the playlist, i.e. we have already missed the boat, we inevitably have to
// skip.
notifyStopReached();
return false;
}
mSeqNumber = lastSeqNumberInPlaylist - 3;
if (mSeqNumber < firstSeqNumberInPlaylist) {
mSeqNumber = firstSeqNumberInPlaylist;
}
discontinuity = true;

// fall through
} else {
if (mPlaylist != NULL) {
ALOGE("Cannot find sequence number %d in playlist "
"(contains %d - %d)",
mSeqNumber, firstSeqNumberInPlaylist,
firstSeqNumberInPlaylist + (int32_t)mPlaylist->size() - 1);

if (mTSParser != NULL) {
mTSParser->signalEOS(ERROR_END_OF_STREAM);
// Use an empty buffer; we don't have any new data, just want to extract
// potential new access units after flush. Reset mSeqNumber to
// lastSeqNumberInPlaylist such that we set the correct access unit
// properties in extractAndQueueAccessUnitsFromTs.
sp<ABuffer> buffer = new ABuffer(0);
mSeqNumber = lastSeqNumberInPlaylist;
extractAndQueueAccessUnitsFromTs(buffer);
}
notifyError(ERROR_END_OF_STREAM);
} else {
// It's possible that we were never able to download the playlist.
// In this case we should notify error, instead of EOS, as EOS during
// prepare means we succeeded in downloading everything.
ALOGE("Failed to download playlist!");
notifyError(ERROR_IO);
}

return false;
}
}

mNumRetries = 0;

CHECK(mPlaylist->itemAt(
mSeqNumber - firstSeqNumberInPlaylist,
&uri,
&itemMeta));

CHECK(itemMeta->findInt32("discontinuity-sequence", &mDiscontinuitySeq));

int32_t val;
if (itemMeta->findInt32("discontinuity", &val) && val != 0) {
discontinuity = true;
} else if (mLastDiscontinuitySeq >= 0
&& mDiscontinuitySeq != mLastDiscontinuitySeq) {
// Seek jumped to a new discontinuity sequence. We need to signal
// a format change to decoder. Decoder needs to shutdown and be
// created again if seamless format change is unsupported.
FLOGV("saw discontinuity: mStartup %d, mLastDiscontinuitySeq %d, "
"mDiscontinuitySeq %d, mStartTimeUs %lld",
mStartup, mLastDiscontinuitySeq, mDiscontinuitySeq, (long long)mStartTimeUs);
discontinuity = true;
}
mLastDiscontinuitySeq = -1;

// decrypt a junk buffer to prefetch key; since a session uses only one http connection,
// this avoids interleaved connections to the key and segment file.
{
sp<ABuffer> junk = new ABuffer(16);
junk->setRange(0, 16);
status_t err = decryptBuffer(mSeqNumber - firstSeqNumberInPlaylist, junk,
true /* first */);
if (err == ERROR_NOT_CONNECTED) {
return false;
} else if (err != OK) {
notifyError(err);
return false;
}
}

if ((mStartup && !mTimeChangeSignaled) || discontinuity) {
// We need to signal a time discontinuity to ATSParser on the
// first segment after start, or on a discontinuity segment.
// Setting mNextPTSTimeUs informs extractAndQueueAccessUnitsXX()
// to send the time discontinuity.
if (mPlaylist->isComplete() || mPlaylist->isEvent()) {
// If this were a live event this would make no sense, since
// we don't have access to all the segments before the current
// one.
mNextPTSTimeUs = getSegmentStartTimeUs(mSeqNumber);
}

// Setting mTimeChangeSignaled to true, so that if start time
// searching goes into 2nd segment (without a discontinuity),
// we don't reset time again. It causes corruption when pending
// data in ATSParser is cleared.
mTimeChangeSignaled = true;
}

if (discontinuity) {
ALOGI("queueing discontinuity (explicit=%d)", discontinuity);

// Signal a format discontinuity to ATSParser to clear partial data
// from previous streams. Not doing this causes bitstream corruption.
if (mTSParser != NULL) {
mTSParser->signalDiscontinuity(
ATSParser::DISCONTINUITY_FORMATCHANGE, NULL /* extra */);
}

queueDiscontinuity(
ATSParser::DISCONTINUITY_FORMAT_ONLY,
NULL /* extra */);

if (mStartup && mStartTimeUsRelative && mFirstPTSValid) {
// This means we guessed mStartTimeUs to be in the previous
// segment (likely very close to the end), but either video or
// audio has not found start by the end of that segment.
//
// If this new segment is not a discontinuity, keep searching.
//
// If this new segment even got a discontinuity marker, just
// set mStartTimeUs=0, and take all samples from now on.
mStartTimeUs = 0;
mFirstPTSValid = false;
mIDRFound = false;
mVideoBuffer->clear();
}
}

FLOGV("fetching segment %d from (%d .. %d)",
mSeqNumber, firstSeqNumberInPlaylist, lastSeqNumberInPlaylist);
return true;
}
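The live start position picked above is worth a concrete example: for a live playlist spanning sequence numbers 100-120 the fetcher starts at 117, keeping roughly three target durations of margin behind the live edge (in line with the HLS spec's recommendation). The clamping logic, isolated:

// Start position for a live stream: three segments before the live edge,
// clamped to the playlist range (mirrors initDownloadState above).
#include <cstdint>

int32_t initialLiveSeqNumber(int32_t firstSeq, int32_t lastSeq) {
    int32_t seq = lastSeq - 3;
    return seq < firstSeq ? firstSeq : seq;  // short playlists start at the front
}
// initialLiveSeqNumber(100, 120) == 117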
void PlaylistFetcher::onDownloadNext() {
AString uri;
sp<AMessage> itemMeta;
sp<ABuffer> buffer;
sp<ABuffer> tsBuffer;
int32_t firstSeqNumberInPlaylist = 0;
int32_t lastSeqNumberInPlaylist = 0;
bool connectHTTP = true;

if (mDownloadState->hasSavedState()) {
mDownloadState->restoreState(
uri,
itemMeta,
buffer,
tsBuffer,
firstSeqNumberInPlaylist,
lastSeqNumberInPlaylist);
connectHTTP = false;
FLOGV("resuming: '%s'", uri.c_str());
} else {
if (!initDownloadState(
uri,
itemMeta,
firstSeqNumberInPlaylist,
lastSeqNumberInPlaylist)) {
return;
}
FLOGV("fetching: '%s'", uri.c_str());
}

int64_t range_offset, range_length;
if (!itemMeta->findInt64("range-offset", &range_offset)
|| !itemMeta->findInt64("range-length", &range_length)) {
range_offset = 0;
range_length = -1;
}

// block-wise download
bool shouldPause = false;
ssize_t bytesRead;
do {
int64_t startUs = ALooper::GetNowUs();
// Download one block of the segment
bytesRead = mHTTPDownloader->fetchBlock(
uri.c_str(), &buffer, range_offset, range_length, kDownloadBlockSize,
NULL /* actualURL */, connectHTTP);
int64_t delayUs = ALooper::GetNowUs() - startUs;

if (bytesRead == ERROR_NOT_CONNECTED) {
return;
}
if (bytesRead < 0) {
status_t err = bytesRead;
ALOGE("failed to fetch .ts segment at url '%s'", uri.c_str());
notifyError(err);
return;
}

// add sample for bandwidth estimation, excluding samples from subtitles (as
// its too small), or during startup/resumeUntil (when we could have more than
// one connection open which affects bandwidth)
if (!mStartup && mStopParams == NULL && bytesRead > 0
&& (mStreamTypeMask
& (LiveSession::STREAMTYPE_AUDIO
| LiveSession::STREAMTYPE_VIDEO))) {
mSession->addBandwidthMeasurement(bytesRead, delayUs);
if (delayUs > 2000000ll) {
FLOGV("bytesRead %zd took %.2f seconds - abnormal bandwidth dip",
bytesRead, (double)delayUs / 1.0e6);
}
}

connectHTTP = false;

CHECK(buffer != NULL);

size_t size = buffer->size();
// Set decryption range.
buffer->setRange(size - bytesRead, bytesRead);
// Decrypt the buffer with the key we fetched
status_t err = decryptBuffer(mSeqNumber - firstSeqNumberInPlaylist, buffer,
buffer->offset() == 0 /* first */);
// Unset decryption range.
buffer->setRange(0, size);

if (err != OK) {
ALOGE("decryptBuffer failed w/ error %d", err);

notifyError(err);
return;
}

bool startUp = mStartup; // save current start up state

err = OK;
if (bufferStartsWithTsSyncByte(buffer)) {
// Incremental extraction is only supported for MPEG2 transport streams.
if (tsBuffer == NULL) {
tsBuffer = new ABuffer(buffer->data(), buffer->capacity());
tsBuffer->setRange(0, 0);
} else if (tsBuffer->capacity() != buffer->capacity()) {
size_t tsOff = tsBuffer->offset(), tsSize = tsBuffer->size();
tsBuffer = new ABuffer(buffer->data(), buffer->capacity());
tsBuffer->setRange(tsOff, tsSize);
}
tsBuffer->setRange(tsBuffer->offset(), tsBuffer->size() + bytesRead);
// Extract access units from the decrypted buffer and queue them for the decoder
err = extractAndQueueAccessUnitsFromTs(tsBuffer);
}

if (err == -EAGAIN) {
// starting sequence number too low/high
mTSParser.clear();
for (size_t i = 0; i < mPacketSources.size(); i++) {
sp<AnotherPacketSource> packetSource = mPacketSources.valueAt(i);
packetSource->clear();
}
postMonitorQueue();
return;
} else if (err == ERROR_OUT_OF_RANGE) {
// reached stopping point
notifyStopReached();
return;
} else if (err != OK) {
notifyError(err);
return;
}
// If we're switching, post start notification
// this should only be posted when the last chunk is full processed by TSParser
if (mSeekMode != LiveSession::kSeekModeExactPosition && startUp != mStartup) {
CHECK(mStartTimeUsNotify != NULL);
mStartTimeUsNotify->post();
mStartTimeUsNotify.clear();
shouldPause = true;
}
if (shouldPause || shouldPauseDownload()) {
// save state and return if this is not the last chunk,
// leaving the fetcher in paused state.
if (bytesRead != 0) {
mDownloadState->saveState(
uri,
itemMeta,
buffer,
tsBuffer,
firstSeqNumberInPlaylist,
lastSeqNumberInPlaylist);
return;
}
shouldPause = true;
}
} while (bytesRead != 0);

if (bufferStartsWithTsSyncByte(buffer)) {
// If we don't see a stream in the program table after fetching a full ts segment
// mark it as nonexistent.
ATSParser::SourceType srcTypes[] =
{ ATSParser::VIDEO, ATSParser::AUDIO };
LiveSession::StreamType streamTypes[] =
{ LiveSession::STREAMTYPE_VIDEO, LiveSession::STREAMTYPE_AUDIO };
const size_t kNumTypes = NELEM(srcTypes);

for (size_t i = 0; i < kNumTypes; i++) {
ATSParser::SourceType srcType = srcTypes[i];
LiveSession::StreamType streamType = streamTypes[i];

sp<AnotherPacketSource> source =
static_cast<AnotherPacketSource *>(
mTSParser->getSource(srcType).get());

if (!mTSParser->hasSource(srcType)) {
ALOGW("MPEG2 Transport stream does not contain %s data.",
srcType == ATSParser::VIDEO ? "video" : "audio");

mStreamTypeMask &= ~streamType;
mPacketSources.removeItem(streamType);
}
}

}

if (checkDecryptPadding(buffer) != OK) {
ALOGE("Incorrect padding bytes after decryption.");
notifyError(ERROR_MALFORMED);
return;
}

if (tsBuffer != NULL) {
AString method;
CHECK(buffer->meta()->findString("cipher-method", &method));
if ((tsBuffer->size() > 0 && method == "NONE")
|| tsBuffer->size() > 16) {
ALOGE("MPEG2 transport stream is not an even multiple of 188 "
"bytes in length.");
notifyError(ERROR_MALFORMED);
return;
}
}

// bulk extract non-ts files
bool startUp = mStartup;
if (tsBuffer == NULL) {
status_t err = extractAndQueueAccessUnits(buffer, itemMeta);
if (err == -EAGAIN) {
// starting sequence number too low/high
postMonitorQueue();
return;
} else if (err == ERROR_OUT_OF_RANGE) {
// reached stopping point
notifyStopReached();
return;
} else if (err != OK) {
notifyError(err);
return;
}
}

++mSeqNumber;

// if adapting, pause after found the next starting point
if (mSeekMode != LiveSession::kSeekModeExactPosition && startUp != mStartup) {
CHECK(mStartTimeUsNotify != NULL);
mStartTimeUsNotify->post();
mStartTimeUsNotify.clear();
shouldPause = true;
}

if (!shouldPause) {
postMonitorQueue();
}
}

Finally, deciding whether a bandwidth switch is needed:

bool LiveSession::switchBandwidthIfNeeded(bool bufferHigh, bool bufferLow) {
// no need to check bandwidth if we only have one bandwidth setting

int32_t bandwidthBps, shortTermBps;
bool isStable;
// Call estimateBandwidth to estimate the current bandwidth
if (mBandwidthEstimator->estimateBandwidth(&bandwidthBps, &isStable, &shortTermBps)) {
ALOGV("bandwidth estimated at %.2f kbps, stable %d, shortTermBps %.2f kbps", bandwidthBps / 1024.0f, isStable, shortTermBps / 1024.0f);
mLastBandwidthBps = bandwidthBps;
mLastBandwidthStable = isStable;
} else {
ALOGV("no bandwidth estimate.");
return false;
}

int32_t curBandwidth = mBandwidthItems.itemAt(mCurBandwidthIndex).mBandwidth;
// canSwitchDown and canSwitchUp can't both be true.
// we only want to switch up when measured bw is 120% higher than current variant,
// and we only want to switch down when measured bw is below current variant.
bool canSwitchDown = bufferLow && (bandwidthBps < (int32_t)curBandwidth);
bool canSwitchUp = bufferHigh && (bandwidthBps > (int32_t)curBandwidth * 12 / 10);

if (canSwitchDown || canSwitchUp) {
// bandwidth estimating has some delay, if we have to downswitch when
// it hasn't stabilized, use the short term to guess real bandwidth,
// since it may be dropping too fast.
// (note this doesn't apply to upswitch, always use longer average there)
if (!isStable && canSwitchDown) {
if (shortTermBps < bandwidthBps) {
bandwidthBps = shortTermBps;
}
}

// look up the index of the bandwidth variant to switch to
ssize_t bandwidthIndex = getBandwidthIndex(bandwidthBps);

// it's possible that we're checking for canSwitchUp case, but the returned
// bandwidthIndex is < mCurBandwidthIndex, as getBandwidthIndex() only uses 70%
// of measured bw. In that case we don't want to do anything, since we have
// both enough buffer and enough bw.
if ((canSwitchUp && bandwidthIndex > mCurBandwidthIndex)
|| (canSwitchDown && bandwidthIndex < mCurBandwidthIndex)) {
// if not yet prepared, just restart again with new bw index.
// this is faster and playback experience is cleaner.
// change the configuration, which restarts the relevant resources
changeConfiguration(mInPreparationPhase ? 0 : -1ll, bandwidthIndex);
return true;
}
}
return false;
}
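To make these thresholds concrete, here is a small standalone illustration (my own sketch with invented numbers, not framework code; in the real method bufferHigh/bufferLow come from buffer-depth checks):

#include <cstdint>
#include <cstdio>

int main() {
    int32_t curBandwidth = 1000000;          // current variant: 1 Mbps
    bool bufferHigh = true, bufferLow = false;

    // Mirror the gating logic above for three hypothetical estimates.
    int32_t estimates[] = {950000, 1150000, 1300000};
    for (int32_t bw : estimates) {
        bool canSwitchDown = bufferLow && (bw < curBandwidth);
        bool canSwitchUp = bufferHigh && (bw > curBandwidth * 12 / 10);
        printf("estimate %d bps: up=%d down=%d\n", bw, canSwitchUp, canSwitchDown);
    }
    // Only the 1.3 Mbps estimate clears the 120% bar for an up-switch;
    // a down-switch is blocked here because the buffer is not low.
    return 0;
}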

size_t LiveSession::getBandwidthIndex(int32_t bandwidthBps) {
if (mBandwidthItems.size() < 2) {
// shouldn't be here if we only have 1 bandwidth, check
// logic to get rid of redundant bandwidth polling
ALOGW("getBandwidthIndex() called for single bandwidth playlist!");
return 0;
}

#if 1
char value[PROPERTY_VALUE_MAX];
ssize_t index = -1;
if (property_get("media.httplive.bw-index", value, NULL)) {
char *end;
index = strtol(value, &end, 10);
CHECK(end > value && *end == '\0');

if (index >= 0 && (size_t)index >= mBandwidthItems.size()) {
index = mBandwidthItems.size() - 1;
}
}

if (index < 0) {
char value[PROPERTY_VALUE_MAX];
if (property_get("media.httplive.max-bw", value, NULL)) {
char *end;
long maxBw = strtoul(value, &end, 10);
if (end > value && *end == '\0') {
if (maxBw > 0 && bandwidthBps > maxBw) {
ALOGV("bandwidth capped to %ld bps", maxBw);
bandwidthBps = maxBw;
}
}
}

// Pick the highest bandwidth stream that's not currently blacklisted
// below or equal to estimated bandwidth.

index = mBandwidthItems.size() - 1;
ssize_t lowestBandwidth = getLowestValidBandwidthIndex();
while (index > lowestBandwidth) {
// be conservative (70%) to avoid overestimating and immediately
// switching down again.
size_t adjustedBandwidthBps = bandwidthBps * 7 / 10;
const BandwidthItem &item = mBandwidthItems[index];
if (item.mBandwidth <= adjustedBandwidthBps
&& isBandwidthValid(item)) {
break;
}
--index;
}
}
#elif 0
// Change bandwidth at random()
size_t index = uniformRand() * mBandwidthItems.size();
#elif 0
// There's a 50% chance to stay on the current bandwidth and
// a 50% chance to switch to the next higher bandwidth (wrapping around
// to lowest)
const size_t kMinIndex = 0;

static ssize_t mCurBandwidthIndex = -1;

size_t index;
if (mCurBandwidthIndex < 0) {
index = kMinIndex;
} else if (uniformRand() < 0.5) {
index = (size_t)mCurBandwidthIndex;
} else {
index = mCurBandwidthIndex + 1;
if (index == mBandwidthItems.size()) {
index = kMinIndex;
}
}
mCurBandwidthIndex = index;
#elif 0
// Pick the highest bandwidth stream below or equal to 1.2 Mbit/sec

size_t index = mBandwidthItems.size() - 1;
while (index > 0 && mBandwidthItems.itemAt(index).mBandwidth > 1200000) {
--index;
}
#elif 1
char value[PROPERTY_VALUE_MAX];
size_t index;
if (property_get("media.httplive.bw-index", value, NULL)) {
char *end;
index = strtoul(value, &end, 10);
CHECK(end > value && *end == '\0');

if (index >= mBandwidthItems.size()) {
index = mBandwidthItems.size() - 1;
}
} else {
index = 0;
}
#else
size_t index = mBandwidthItems.size() - 1; // Highest bandwidth stream
#endif

CHECK_GE(index, 0);

return index;
}
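To see the conservative 70% rule in action, here is a minimal self-contained sketch (BandwidthItem and the playlist values are made up for illustration; this is not the framework code):

#include <cstdint>
#include <cstdio>
#include <vector>

struct BandwidthItem { int32_t mBandwidth; };  // hypothetical stand-in

// Pick the highest variant whose declared bandwidth fits within 70%
// of the measured bandwidth, mirroring the loop in getBandwidthIndex().
static size_t pickIndex(const std::vector<BandwidthItem> &items, int32_t measuredBps) {
    size_t index = items.size() - 1;
    int32_t adjusted = measuredBps * 7 / 10;  // be conservative (70%)
    while (index > 0 && items[index].mBandwidth > adjusted) {
        --index;
    }
    return index;
}

int main() {
    std::vector<BandwidthItem> items = {{200000}, {800000}, {1500000}, {3000000}};
    // Measured 2 Mbps -> adjusted 1.4 Mbps -> picks the 800 kbps variant,
    // not the 1.5 Mbps one, to avoid immediately switching back down.
    printf("picked index %zu\n", pickIndex(items, 2000000));
    return 0;
}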

A diagram first, so the wall of code that follows is easier to digest.


Once prepare has finished, start can be called to begin playback. For simplicity we will not analyze the call chain leading up to start; we only list the implementations of the methods involved.

public void start() throws IllegalStateException {
if (isRestricted()) {
_setVolume(0, 0);
}
stayAwake(true);
_start();
}
static void
android_media_MediaPlayer_start(JNIEnv *env, jobject thiz)
{
ALOGV("start");
sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
process_media_player_call( env, thiz, mp->start(), NULL, NULL );
}
status_t MediaPlayer::start()
{
status_t ret = NO_ERROR;
Mutex::Autolock _l(mLock);
mLockThreadId = getThreadId();
if (mCurrentState & MEDIA_PLAYER_STARTED) {
ret = NO_ERROR;
} else if ( (mPlayer != 0) && ( mCurrentState & ( MEDIA_PLAYER_PREPARED |
MEDIA_PLAYER_PLAYBACK_COMPLETE | MEDIA_PLAYER_PAUSED ) ) ) {
mPlayer->setLooping(mLoop);
mPlayer->setVolume(mLeftVolume, mRightVolume);
mPlayer->setAuxEffectSendLevel(mSendLevel);
mCurrentState = MEDIA_PLAYER_STARTED;
ret = mPlayer->start();
if (ret != NO_ERROR) {
mCurrentState = MEDIA_PLAYER_STATE_ERROR;
} else {
if (mCurrentState == MEDIA_PLAYER_PLAYBACK_COMPLETE) {
ALOGV("playback completed immediately following start()");
}
}
} else {
ALOGE("start called in state %d", mCurrentState);
ret = INVALID_OPERATION;
}
mLockThreadId = 0;
return ret;
}
status_t StagefrightPlayer::start() {
return mPlayer->play();
}
status_t AwesomePlayer::play() {
ATRACE_CALL();
Mutex::Autolock autoLock(mLock);
modifyFlags(CACHE_UNDERRUN, CLEAR);
return play_l();
}

The real work of start begins in AwesomePlayer::play_l. There, createAudioPlayer_l creates the audio player and startAudioPlayer_l then starts it playing. The two methods are analyzed below:

status_t AwesomePlayer::play_l() {
modifyFlags(SEEK_PREVIEW, CLEAR);
mMediaRenderingStartGeneration = ++mStartGeneration;
if (!(mFlags & PREPARED)) {
status_t err = prepare_l();
}
modifyFlags(PLAYING, SET);
modifyFlags(FIRST_FRAME, SET);
if (mAudioSource != NULL) {
if (mAudioPlayer == NULL) {
createAudioPlayer_l();
}
CHECK(!(mFlags & AUDIO_RUNNING));
if (mVideoSource == NULL) {
// We don't want to post an error notification at this point,
// the error returned from MediaPlayer::start() will suffice.
status_t err = startAudioPlayer_l(
false /* sendErrorNotification */);
}
}
if (mFlags & AT_EOS) {
// Legacy behaviour, if a stream finishes playing and then
// is started again, we play from the start...
seekTo_l(0);
}
return OK;
}

createAudioPlayer_l is relatively simple: it constructs an AudioPlayer and hands it mAudioSource, i.e. the OMXCodec wrapping the decoder; AudioPlayer::setSource stores it in the mSource member, so the AudioPlayer's input is the decoder output.
Also note mAudioSink here. You may remember it: it is the AudioOutput created back in the setDataSource phase. So two important things happen in this method: the decoder output mAudioSource becomes the AudioPlayer's input, and the hardware-facing mAudioSink is handed to the AudioPlayer as well.

void AwesomePlayer::createAudioPlayer_l()
{
mAudioPlayer = new AudioPlayer(mAudioSink, flags, this);
mAudioPlayer->setSource(mAudioSource);
// If there was a seek request before we ever started,
// honor the request now.
// Make sure to do this before starting the audio player
// to avoid a race condition.
seekAudioIfNecessary_l();
}
AudioPlayer::AudioPlayer(
const sp<MediaPlayerBase::AudioSink> &audioSink,
uint32_t flags,
AwesomePlayer *observer)
: mInputBuffer(NULL),
mSampleRate(0),
mLatencyUs(0),
mFrameSize(0),
mNumFramesPlayed(0),
mNumFramesPlayedSysTimeUs(ALooper::GetNowUs()),
mPositionTimeMediaUs(-1),
mPositionTimeRealUs(-1),
mSeeking(false),
mReachedEOS(false),
mFinalStatus(OK),
mSeekTimeUs(0),
mStarted(false),
mIsFirstBuffer(false),
mFirstBufferResult(OK),
mFirstBuffer(NULL),
mAudioSink(audioSink),
mObserver(observer),
mPinnedTimeUs(-1ll),
mPlaying(false),
mStartPosUs(0),
mCreateFlags(flags) {
}
void AudioPlayer::setSource(const sp<MediaSource> &source) {
CHECK(mSource == NULL);
mSource = source;
}

Once the AudioPlayer has been created, startAudioPlayer_l calls its start method to begin playback.

status_t AwesomePlayer::startAudioPlayer_l(bool sendErrorNotification) {
status_t err = OK;
if (!(mFlags & AUDIOPLAYER_STARTED)) {
bool wasSeeking = mAudioPlayer->isSeeking();
// We've already started the MediaSource in order to enable
// the prefetcher to read its data.
err = mAudioPlayer->start(true /* sourceAlreadyStarted */);
if (err != OK) {
return err;
}
modifyFlags(AUDIOPLAYER_STARTED, SET);
if (wasSeeking) {
CHECK(!mAudioPlayer->isSeeking());
// We will have finished the seek while starting the audio player.
postAudioSeekComplete();
} else {
notifyIfMediaStarted_l();
}
}
return err;
}

AudioPlayer::start first reads the first chunk of decoded data via mSource->read(&mFirstBuffer, &options); decoding that first frame effectively kicks off the decode loop. It then calls mAudioSink->open and mAudioSink->start() to begin output.

status_t AudioPlayer::start(bool sourceAlreadyStarted) {
mFirstBufferResult = mSource->read(&mFirstBuffer, &options);
if (mFirstBufferResult == INFO_FORMAT_CHANGED) {
mFirstBufferResult = OK;
mIsFirstBuffer = false;
} else {
mIsFirstBuffer = true;
}
audio_format_t audioFormat = AUDIO_FORMAT_PCM_16_BIT;
if (mAudioSink.get() != NULL) {
status_t err = mAudioSink->open(
mSampleRate, numChannels, channelMask, audioFormat,
DEFAULT_AUDIOSINK_BUFFERCOUNT,
&AudioPlayer::AudioSinkCallback,
this,
(audio_output_flags_t)flags,
useOffload() ? &offloadInfo : NULL);
if (err == OK) {
mLatencyUs = (int64_t)mAudioSink->latency() * 1000;
mFrameSize = mAudioSink->frameSize();
err = mAudioSink->start();
// do not alter behavior for non offloaded tracks: ignore start status.
if (!useOffload()) {
err = OK;
}
}
} else {

}
return OK;
}

mSource here is the OMXCodec created during prepare, so the mSource->read() above lands in OMXCodec::read, which drives the decode loop. (AudioPlayer::AudioSinkCallback, the function pointer handed to mAudioSink->open, is analyzed further below.)
status_t OMXCodec::read(MediaBuffer **buffer, const ReadOptions *options) {
if (mInitialBufferSubmit) {
mInitialBufferSubmit = false;
// submit all input buffers to the decoder
drainInputBuffers();
if (mState == EXECUTING) {
// Otherwise mState == RECONFIGURING and this code will trigger
// after the output port is reenabled.
// ask the component to start filling output buffers
fillOutputBuffers();
}
}
// wait until some output buffer has been filled
while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
if ((err = waitForBufferFilled_l()) != OK) {
return err;
}
}
// if the queue is still empty here, decoding has ended (or failed)
if (mFilledBuffers.empty()) {
return mSignalledEOS ? mFinalStatus : ERROR_END_OF_STREAM;
}
// take the first filled buffer off the queue
size_t index = *mFilledBuffers.begin();
mFilledBuffers.erase(mFilledBuffers.begin());

BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
info->mStatus = OWNED_BY_CLIENT;

info->mMediaBuffer->add_ref();
if (mSkipCutBuffer != NULL) {
mSkipCutBuffer->submit(info->mMediaBuffer);
}
*buffer = info->mMediaBuffer;
return OK;
}
void OMXCodec::drainInputBuffers() {
CHECK(mState == EXECUTING || mState == RECONFIGURING);

if (mFlags & kUseSecureInputBuffers) {
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
for (size_t i = 0; i < buffers->size(); ++i) {
if (!drainAnyInputBuffer()|| (mFlags & kOnlySubmitOneInputBufferAtOneTime)) {
break;
}
}
} else {
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
for (size_t i = 0; i < buffers->size(); ++i) {
BufferInfo *info = &buffers->editItemAt(i);
if (info->mStatus != OWNED_BY_US) {
continue;
}
if (!drainInputBuffer(info)) {
break;
}
if (mFlags & kOnlySubmitOneInputBufferAtOneTime) {
break;
}
}
}
}
bool OMXCodec::drainAnyInputBuffer() {
return drainInputBuffer((BufferInfo *)NULL);
}
bool OMXCodec::drainInputBuffer(BufferInfo *info) {
for (;;) {
MediaBuffer *srcBuffer;
if (mSeekTimeUs >= 0) {
// ... (seek handling elided)
} else if (mLeftOverBuffer) {
// ... (left-over buffer handling elided)
} else {
err = mSource->read(&srcBuffer);
}
// ... (srcBuffer is then copied into the OMX input buffer and submitted; elided)
return true;
}
}
status_t MP3Source::read(MediaBuffer **out, const ReadOptions *options) {

MediaBuffer *buffer;
status_t err = mGroup->acquire_buffer(&buffer);
size_t frame_size;
int bitrate;
int num_samples;
int sample_rate;
// scan for the next frame sync word (resync if necessary)
for (;;) {
ssize_t n = mDataSource->readAt(mCurrentPos, buffer->data(), 4);
uint32_t header = U32_AT((const uint8_t *)buffer->data());
if ((header & kMask) == (mFixedHeader & kMask)
&& GetMPEGAudioFrameSize(
header, &frame_size, &sample_rate, NULL,
&bitrate, &num_samples)) {
// re-calculate mCurrentTimeUs because we might have called Resync()
if (seekCBR) {
mCurrentTimeUs = (mCurrentPos - mFirstFramePos) * 8000 / bitrate;
mBasisTimeUs = mCurrentTimeUs;
}
break;
}
}
CHECK(frame_size <= buffer->size());
// read the whole frame now that its size is known
ssize_t n = mDataSource->readAt(mCurrentPos, buffer->data(), frame_size);
buffer->set_range(0, frame_size);
buffer->meta_data()->setInt64(kKeyTime, mCurrentTimeUs);
buffer->meta_data()->setInt32(kKeyIsSyncFrame, 1);
mCurrentPos += frame_size;
mSamplesRead += num_samples;
mCurrentTimeUs = mBasisTimeUs + ((mSamplesRead * 1000000) / sample_rate);
*out = buffer;
return OK;
}
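The two time bases in this method are worth a worked example: for CBR seeks the time is derived from the byte position, while during steady playback it is derived from the sample count. A small standalone sketch with assumed values (128 kbps MPEG-1 Layer III at 44100 Hz, 1152 samples per frame; GetMPEGAudioFrameSize reports the bitrate in kbps, hence the * 8000 factor):

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t bitrateKbps = 128, sampleRate = 44100, samplesPerFrame = 1152;
    // MPEG-1 Layer III frame size (no padding): 144000 * kbps / sampleRate.
    int64_t frameSize = 144000 * bitrateKbps / sampleRate;        // ~417 bytes
    int64_t bytesRead = 100 * frameSize;                          // 100 frames in
    // Position-based time, as in: (mCurrentPos - mFirstFramePos) * 8000 / bitrate
    int64_t timeFromPos = bytesRead * 8000 / bitrateKbps;         // 2,606,250 us
    // Sample-based time, as in: mBasisTimeUs + mSamplesRead * 1000000 / sample_rate
    int64_t timeFromSamples = 100 * samplesPerFrame * 1000000 / sampleRate; // ~2,612,244 us
    printf("frame=%lld B, t(pos)=%lld us, t(samples)=%lld us\n",
           (long long)frameSize, (long long)timeFromPos, (long long)timeFromSamples);
    return 0;
}

The small gap between the two results comes from the integer truncation of the frame size, which is why the code rebases mCurrentTimeUs only after a resync.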

The rest of the work is handed to mAudioSink->open and mAudioSink->start().
Recall that mAudioSink is the AudioOutput created in MediaPlayerService::Client::setDataSource_pre:

sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
player_type playerType)
{
// ...
if (!p->hardwareOutput()) {
Mutex::Autolock l(mLock);
mAudioOutput =
new AudioOutput(mAudioSessionId, IPCThreadState::self()->getCallingUid(),
mPid, mAudioAttributes);
static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
}
return p;
}

Knowing what mAudioSink is, we can analyze its open method. Note that one of the parameters passed to mAudioSink->open is the function pointer AudioPlayer::AudioSinkCallback: while AudioOutput plays PCM it periodically invokes this callback to have the buffer refilled, and the pointer is saved in mCallback. Another important detail: when the AudioTrack object is constructed, CallbackWrapper is passed in as the AudioTrack's callback, and it is invoked whenever the AudioTrack needs data:

status_t MediaPlayerService::AudioOutput::open(
uint32_t sampleRate, int channelCount, audio_channel_mask_t channelMask,
audio_format_t format, int bufferCount,
AudioCallback cb, void *cookie,
audio_output_flags_t flags,
const audio_offload_info_t *offloadInfo,
bool doNotReconnect,
uint32_t suggestedFrameCount)
{
sp<AudioTrack> t;
CallbackData *newcbd = NULL;
// save the callback in mCallback
mCallback = cb;
mCallbackCookie = cookie;
// We don't attempt to create a new track if we are recycling an
// offloaded track. But, if we are recycling a non-offloaded or we
// are switching where one is offloaded and one isn't then we create
// the new track in advance so that we can read additional stream info
if (!(reuse && bothOffloaded)) {
ALOGV("creating new AudioTrack");
if (mCallback != NULL) {
newcbd = new CallbackData(this);
// create a new AudioTrack
t = new AudioTrack(
mStreamType,
sampleRate,
format,
channelMask,
frameCount,
flags,
// invoked whenever the AudioTrack needs more data
CallbackWrapper,
newcbd,
0, // notification frames
mSessionId,
AudioTrack::TRANSFER_CALLBACK,
offloadInfo,
mUid,
mPid,
mAttributes,
doNotReconnect);
} else {
// ...
}
}
mCallbackData = newcbd;
// keep the newly created AudioTrack in mTrack
mTrack = t;
return res;
}

As mentioned above, AudioOutput periodically invokes AudioPlayer::AudioSinkCallback to refill the buffer while playing PCM. If you trace mCallback through the code, however, you will not find a direct call to it; the call actually happens inside AudioOutput::CallbackWrapper.
Next we look in some detail at how CallbackWrapper is registered and how it ends up invoking AudioPlayer::AudioSinkCallback to operate on the buffers.

First, the AudioTrack constructor:
it simply forwards to the set method, with CallbackWrapper arriving as the seventh parameter (cbf).

AudioTrack::AudioTrack(
audio_stream_type_t streamType,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
audio_output_flags_t flags,
callback_t cbf,
void* user,
uint32_t notificationFrames,
int sessionId,
transfer_type transferType,
const audio_offload_info_t *offloadInfo,
int uid,
pid_t pid,
const audio_attributes_t* pAttributes,
bool doNotReconnect)
: mStatus(NO_INIT),
mIsTimed(false),
mPreviousPriority(ANDROID_PRIORITY_NORMAL),
mPreviousSchedulingGroup(SP_DEFAULT),
mPausedPosition(0),
mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE)
{
mStatus = set(streamType, sampleRate, format, channelMask,
frameCount, flags, cbf, user, notificationFrames,
0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
offloadInfo, uid, pid, pAttributes, doNotReconnect);
}

In set, CallbackWrapper ends up assigned to mCbf; an AudioTrackThread is started, and createTrack_l is called to create the IAudioTrack.

status_t AudioTrack::set(
audio_stream_type_t streamType,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
audio_output_flags_t flags,
callback_t cbf,
void* user,
uint32_t notificationFrames,
const sp<IMemory>& sharedBuffer,
bool threadCanCallJava,
int sessionId,
transfer_type transferType,
const audio_offload_info_t *offloadInfo,
int uid,
pid_t pid,
const audio_attributes_t* pAttributes,
bool doNotReconnect)
{
switch (transferType) {
case TRANSFER_DEFAULT:
//........................
case TRANSFER_CALLBACK:
if (cbf == NULL || sharedBuffer != 0) {
return BAD_VALUE;
}
break;
//........................
default:
return BAD_VALUE;
}
mCbf = cbf;
if (cbf != NULL) {
mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
// thread begins in paused state, and will not reference us until start()
}
// create the IAudioTrack
status_t status = createTrack_l();
return NO_ERROR;
}

AudioTrack::AudioTrackThread::AudioTrackThread(AudioTrack& receiver, bool bCanCallJava)
: Thread(bCanCallJava), mReceiver(receiver), mPaused(true), mPausedInt(false), mPausedNs(0LL),
mIgnoreNextPausedInt(false)
{
}

bool AudioTrack::AudioTrackThread::threadLoop()
{
// ...
nsecs_t ns = mReceiver.processAudioBuffer();
switch (ns) {
case 0:
return true;
case NS_INACTIVE:
pauseInternal();
return true;
case NS_NEVER:
return false;
case NS_WHENEVER:
// Event driven: call wake() when callback notifications conditions change.
ns = INT64_MAX;
// fall through
default:
LOG_ALWAYS_FATAL_IF(ns < 0, "processAudioBuffer() returned %" PRId64, ns);
pauseInternal(ns);
return true;
}
}
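For reference, the return value of processAudioBuffer drives threadLoop as follows (summarized from the switch above):

// 0           -> run the loop again immediately
// NS_INACTIVE -> pauseInternal(): park the thread until resume()/wake()
// NS_NEVER    -> return false: exit the callback thread for good
// NS_WHENEVER -> treated as a very long sleep (ns = INT64_MAX) until wake()
// ns > 0      -> pauseInternal(ns): sleep ns nanoseconds, then loop again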

Next, processAudioBuffer. This is a key method, but here we only care about the parts that touch mCbf:

nsecs_t AudioTrack::processAudioBuffer()
{
if (waitStreamEnd) {
// FIXME: Instead of blocking in proxy->waitStreamEndDone(), Callback thread
// should wait on proxy futex and handle CBLK_STREAM_END_DONE within this function
// (and make sure we don't callback for more data while we're stopping).
// This helps with position, marker notifications, and track invalidation.
struct timespec timeout;
timeout.tv_sec = WAIT_STREAM_END_TIMEOUT_SEC;
timeout.tv_nsec = 0;

status_t status = proxy->waitStreamEndDone(&timeout);
switch (status) {
case NO_ERROR:
case DEAD_OBJECT:
case TIMED_OUT:
mCbf(EVENT_STREAM_END, mUserData, NULL);
{
AutoMutex lock(mLock);
// The previously assigned value of waitStreamEnd is no longer valid,
// since the mutex has been unlocked and either the callback handler
// or another thread could have re-started the AudioTrack during that time.
waitStreamEnd = mState == STATE_STOPPING;
if (waitStreamEnd) {
mState = STATE_STOPPED;
mReleased = 0;
}
}
break;
}
return 0;
}
if (newUnderrun) {
mCbf(EVENT_UNDERRUN, mUserData, NULL);
}
while (loopCountNotifications > 0) {
mCbf(EVENT_LOOP_END, mUserData, NULL);
--loopCountNotifications;
}
if (flags & CBLK_BUFFER_END) {
mCbf(EVENT_BUFFER_END, mUserData, NULL);
}
if (markerReached) {
mCbf(EVENT_MARKER, mUserData, &markerPosition);
}
while (newPosCount > 0) {
size_t temp = newPosition;
mCbf(EVENT_NEW_POS, mUserData, &temp);
newPosition += updatePeriod;
newPosCount--;
}
if (mObservedSequence != sequence) {
mObservedSequence = sequence;
mCbf(EVENT_NEW_IAUDIOTRACK, mUserData, NULL);
// for offloaded tracks, just wait for the upper layers to recreate the track
if (isOffloadedOrDirect()) {
return NS_INACTIVE;
}
}

size_t reqSize = audioBuffer.size;
mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
size_t writtenSize = audioBuffer.size;

// A lot has transpired since ns was calculated, so run again immediately and re-calculate
return 0;
}

From the code above: constructing an AudioTrack starts an AudioTrackThread, whose threadLoop calls processAudioBuffer, which fires callbacks such as mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer), that is, AudioOutput::CallbackWrapper. Inside CallbackWrapper, data->getOutput() retrieves the AudioOutput, and *me->mCallback dispatches into the matching case of AudioPlayer::AudioSinkCallback.

void MediaPlayerService::AudioOutput::CallbackWrapper(
int event, void *cookie, void *info) {
//ALOGV("callbackwrapper");
CallbackData *data = (CallbackData*)cookie;
// lock to ensure we aren't caught in the middle of a track switch.
data->lock();
AudioOutput *me = data->getOutput();
AudioTrack::Buffer *buffer = (AudioTrack::Buffer *)info;
switch(event) {
case AudioTrack::EVENT_MORE_DATA: {
size_t actualSize = (*me->mCallback)(
me, buffer->raw, buffer->size, me->mCallbackCookie,
CB_EVENT_FILL_BUFFER);
// Log when no data is returned from the callback.
// (1) We may have no data (especially with network streaming sources).
// (2) We may have reached the EOS and the audio track is not stopped yet.
// Note that AwesomePlayer/AudioPlayer will only return zero size when it reaches the EOS.
// NuPlayerRenderer will return zero when it doesn't have data (it doesn't block to fill).
//
// This is a benign busy-wait, with the next data request generated 10 ms or more later;
// nevertheless for power reasons, we don't want to see too many of these.
me->mBytesWritten += actualSize; // benign race with reader.
buffer->size = actualSize;
} break;

case AudioTrack::EVENT_STREAM_END:
// currently only occurs for offloaded callbacks
ALOGV("callbackwrapper: deliver EVENT_STREAM_END");
(*me->mCallback)(me, NULL /* buffer */, 0 /* size */,
me->mCallbackCookie, CB_EVENT_STREAM_END);
break;

case AudioTrack::EVENT_NEW_IAUDIOTRACK :
ALOGV("callbackwrapper: deliver EVENT_TEAR_DOWN");
(*me->mCallback)(me, NULL /* buffer */, 0 /* size */,
me->mCallbackCookie, CB_EVENT_TEAR_DOWN);
break;
case AudioTrack::EVENT_UNDERRUN:
// This occurs when there is no data available, typically
// when there is a failure to supply data to the AudioTrack. It can also
// occur in non-offloaded mode when the audio device comes out of standby.
//
// If an AudioTrack underruns it outputs silence. Since this happens suddenly
// it may sound like an audible pop or glitch.
//
// The underrun event is sent once per track underrun; the condition is reset
// when more data is sent to the AudioTrack.
break;
default:
ALOGE("received unknown event type: %d inside CallbackWrapper !", event);
}
data->unlock();
}

Below is the implementation of AudioPlayer::AudioSinkCallback. Assume the event being delivered is CB_EVENT_FILL_BUFFER: AudioPlayer::fillBuffer is then called to fill the buffer.
fillBuffer simply calls err = mSource->read(&mInputBuffer, &options), i.e. it pulls a MediaBuffer from the decoder; that code is not listed here.

// static
size_t AudioPlayer::AudioSinkCallback(
MediaPlayerBase::AudioSink * /* audioSink */,
void *buffer, size_t size, void *cookie,
MediaPlayerBase::AudioSink::cb_event_t event) {
AudioPlayer *me = (AudioPlayer *)cookie;
switch(event) {
case MediaPlayerBase::AudioSink::CB_EVENT_FILL_BUFFER:
return me->fillBuffer(buffer, size);
case MediaPlayerBase::AudioSink::CB_EVENT_STREAM_END:
ALOGV("AudioSinkCallback: stream end");
me->mReachedEOS = true;
me->notifyAudioEOS();
break;
case MediaPlayerBase::AudioSink::CB_EVENT_TEAR_DOWN:
ALOGV("AudioSinkCallback: Tear down event");
me->mObserver->postAudioTearDown();
break;
}
return 0;
}
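Before moving on, here is the whole pull chain in one place (a schematic summary of the code above, not an actual call stack dump):

// AudioTrackThread::threadLoop()
//   -> AudioTrack::processAudioBuffer()
//     -> mCbf(EVENT_MORE_DATA, ...)                    // = AudioOutput::CallbackWrapper
//       -> (*me->mCallback)(..., CB_EVENT_FILL_BUFFER) // = AudioPlayer::AudioSinkCallback
//         -> AudioPlayer::fillBuffer(buffer, size)
//           -> mSource->read(&mInputBuffer, ...)       // = OMXCodec::read, decoded PCM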

That covers the open flow; now the start method. It is straightforward and simply calls mTrack->start():

status_t MediaPlayerService::AudioOutput::start()
{
if (mTrack != 0) {
mTrack->setVolume(mLeftVolume, mRightVolume);
mTrack->setAuxEffectSendLevel(mSendLevel);
return mTrack->start();
}
return NO_INIT;
}

AudioTrack::start, in turn, calls mAudioTrack->start() internally:

status_t AudioTrack::start()
{
State previousState = mState;
if (previousState == STATE_PAUSED_STOPPING) {
mState = STATE_STOPPING;
} else {
mState = STATE_ACTIVE;
}
(void) updateAndGetPosition_l();
sp<AudioTrackThread> t = mAudioTrackThread;
if (t != 0) {
if (previousState == STATE_STOPPING) {
mProxy->interrupt();
} else {
t->resume();
}
} else {
mPreviousPriority = getpriority(PRIO_PROCESS, 0);
get_sched_policy(0, &mPreviousSchedulingGroup);
androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
}

status_t status = NO_ERROR;
if (!(flags & CBLK_INVALID)) {
status = mAudioTrack->start();
if (status == DEAD_OBJECT) {
flags |= CBLK_INVALID;
}
}
return status;
}

Tracing the source shows that mAudioTrack is created in createTrack_l(), via audioFlinger->createTrack, as in the code below. Once the AudioTrack has been started through this chain, it periodically invokes the callback to pull data from the decoder and feed it to the output.

// must be called with mLock held
status_t AudioTrack::createTrack_l()
{
// ...
sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
mSampleRate,
mFormat,
mChannelMask,
&temp,
&trackFlags,
mSharedBuffer,
output,
tid,
&mSessionId,
mClientUid,
&status);

// ...
mAudioTrack = track;
return status;
}

No diagram, no truth, so here is one! With this diagram the flow should be much clearer; if you still have questions, feel free to email me.

MediaPlayer playback framework source analysis:
Prepare: loading the decoder and initializing the data buffers

Once the playback source has been set via setDataSource, prepare can be called to get ready for playback. prepare is the most complex phase of the whole flow and splits into two big parts: loading the decoder, and setting up the data buffers. As with setDataSource, the call travels from the Java layer through JNI into the native layer; that part needs no further introduction, and the code is as follows.

public void prepare() throws IOException, IllegalStateException {
_prepare();
scanInternalSubtitleTracks();
}
private native void _prepare() throws IOException, IllegalStateException;
static void
android_media_MediaPlayer_prepare(JNIEnv *env, jobject thiz)
{
sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
if (mp == NULL ) {
jniThrowException(env, "java/lang/IllegalStateException", NULL);
return;
}
// Handle the case where the display surface was set before the mp was
// initialized. We try again to make it stick.
sp<IGraphicBufferProducer> st = getVideoSurfaceTexture(env, thiz);
mp->setVideoSurfaceTexture(st);
process_media_player_call( env, thiz, mp->prepare(), "java/io/IOException", "Prepare failed." );
}
status_t MediaPlayer::prepare()
{
Mutex::Autolock _l(mLock);
mLockThreadId = getThreadId();
if (mPrepareSync) {
mLockThreadId = 0;
return -EALREADY;
}
mPrepareSync = true;
status_t ret = prepareAsync_l();
if (mPrepareSync) {
mSignal.wait(mLock); // wait for prepare done
mPrepareSync = false;
}
mLockThreadId = 0;
return mPrepareStatus;
}
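Note how MediaPlayer::prepare turns the asynchronous prepareAsync_l into a synchronous call by blocking on a condition variable until the prepare-done notification arrives. A minimal standalone sketch of that pattern, using std::condition_variable in place of android::Condition (my own illustration, not the framework code):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool prepareDone = false;

void asyncPrepare() {                 // stands in for the event-queue work
    std::lock_guard<std::mutex> lock(m);
    prepareDone = true;
    cv.notify_all();                  // like mSignal.signal() in notify()
}

int main() {
    std::thread worker(asyncPrepare);
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return prepareDone; });  // like mSignal.wait(mLock)
    }
    printf("prepare finished\n");
    worker.join();
    return 0;
}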

We start from here:
MediaPlayer calls mPlayer->prepareAsync(); mPlayer ultimately resolves to the StagefrightPlayer on the media server side. Let's keep following the call:

status_t MediaPlayer::prepareAsync_l(){
if ( (mPlayer != 0) && ( mCurrentState & (MEDIA_PLAYER_INITIALIZED | MEDIA_PLAYER_STOPPED) ) ) {
if (mAudioAttributesParcel != NULL) {
mPlayer->setParameter(KEY_PARAMETER_AUDIO_ATTRIBUTES,
*mAudioAttributesParcel);
} else {
mPlayer->setAudioStreamType(mStreamType);
}
mCurrentState = MEDIA_PLAYER_PREPARING;
return mPlayer->prepareAsync();
}
return INVALID_OPERATION;
}

StagefrightPlayer simply forwards prepareAsync to AwesomePlayer:

status_t StagefrightPlayer::prepareAsync() {
return mPlayer->prepareAsync();
}
status_t AwesomePlayer::prepareAsync() {
ATRACE_CALL();
Mutex::Autolock autoLock(mLock);
if (mFlags & PREPARING) {
return UNKNOWN_ERROR; // async prepare already pending
}
mIsAsyncPrepare = true;
return prepareAsync_l();
}

AwesomePlayer::prepareAsync_l creates an AwesomeEvent, starts the queue, and posts the newly created mAsyncPrepareEvent onto it.

status_t AwesomePlayer::prepareAsync_l() {
if (mFlags & PREPARING) {
return UNKNOWN_ERROR; // async prepare already pending
}
if (!mQueueStarted) {
mQueue.start();
mQueueStarted = true;
}
modifyFlags(PREPARING, SET);
mAsyncPrepareEvent = new AwesomeEvent(this, &AwesomePlayer::onPrepareAsyncEvent);
mQueue.postEvent(mAsyncPrepareEvent);
return OK;
}

Before continuing with the prepare flow, let's look at the TimedEventQueue class. As the name says, it is an event queue. Its constructor is trivial: it initializes the members and binds a DeathRecipient.

TimedEventQueue::TimedEventQueue()
: mNextEventID(1),
mRunning(false),
mStopped(false),
mDeathRecipient(new PMDeathRecipient(this)),
mWakeLockCount(0) {
}

The start method spawns a ThreadWrapper thread.

void TimedEventQueue::start() {
if (mRunning) {
return;
}
mStopped = false;
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
pthread_create(&mThread, &attr, ThreadWrapper, this);
pthread_attr_destroy(&attr);
mRunning = true;
}
// static
void *TimedEventQueue::ThreadWrapper(void *me) {
androidSetThreadPriority(0, ANDROID_PRIORITY_FOREGROUND);
static_cast<TimedEventQueue *>(me)->threadEntry();
return NULL;
}

The ThreadWrapper thread loops over the queue, checking whether each Event's scheduled time has arrived. If the queue is empty it blocks; individual waits are capped at 10 seconds, after which the queue is simply re-examined (the thread does not exit), and once an Event's trigger time is reached its fire method is called.

void TimedEventQueue::threadEntry() {
prctl(PR_SET_NAME, (unsigned long)"TimedEventQueue", 0, 0, 0);
for (;;) {
int64_t now_us = 0;
sp<Event> event;
bool wakeLocked = false;
{
Mutex::Autolock autoLock(mLock);
if (mStopped) {
break;
}
while (mQueue.empty()) {
mQueueNotEmptyCondition.wait(mLock);
}
event_id eventID = 0;
for (;;) {
if (mQueue.empty()) {
// The only event in the queue could have been cancelled
// while we were waiting for its scheduled time.
break;
}
List<QueueItem>::iterator it = mQueue.begin();
eventID = (*it).event->eventID();

now_us = ALooper::GetNowUs();
int64_t when_us = (*it).realtime_us;
int64_t delay_us;
if (when_us < 0 || when_us == INT64_MAX) {
delay_us = 0;
} else {
delay_us = when_us - now_us;
}
if (delay_us <= 0) {
break;
}
static int64_t kMaxTimeoutUs = 10000000ll; // 10 secs
bool timeoutCapped = false;
if (delay_us > kMaxTimeoutUs) {
delay_us = kMaxTimeoutUs;
timeoutCapped = true;
}
status_t err = mQueueHeadChangedCondition.waitRelative(
mLock, delay_us * 1000ll);
if (!timeoutCapped && err == -ETIMEDOUT) {
// We finally hit the time this event is supposed to
// trigger.
now_us = ALooper::GetNowUs();
break;
}
}
// The event w/ this id may have been cancelled while we're
// waiting for its trigger-time, in that case
// removeEventFromQueue_l will return NULL.
// Otherwise, the QueueItem will be removed
// from the queue and the referenced event returned.
event = removeEventFromQueue_l(eventID, &wakeLocked);
}
if (event != NULL) {
// Fire event with the lock NOT held.
event->fire(this, now_us);
if (wakeLocked) {
Mutex::Autolock autoLock(mLock);
releaseWakeLock_l();
}
}
}
}

fire directly invokes mMethod on the AwesomeEvent's mPlayer; this mMethod is the onPrepareAsyncEvent we passed in when the AwesomeEvent was created.

struct AwesomeEvent : public TimedEventQueue::Event {
AwesomeEvent(AwesomePlayer *player,void (AwesomePlayer::*method)())
: mPlayer(player),
mMethod(method) {
}
protected:
virtual ~AwesomeEvent() {}
virtual void fire(TimedEventQueue * /* queue */, int64_t /* now_us */) {
(mPlayer->*mMethod)();
}
private:
AwesomePlayer *mPlayer;
void (AwesomePlayer::*mMethod)();
AwesomeEvent(const AwesomeEvent &);
AwesomeEvent &operator=(const AwesomeEvent &);
};
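AwesomeEvent relies on C++ pointer-to-member-function syntax, which is easy to misread. A minimal standalone illustration of the same dispatch pattern (my own sketch, not framework code):

#include <cstdio>

struct Player {
    void onPrepare() { printf("onPrepare fired\n"); }
};

struct Event {
    Player *mPlayer;
    void (Player::*mMethod)();
    void fire() { (mPlayer->*mMethod)(); }  // dispatch through the member pointer
};

int main() {
    Player p;
    Event e{&p, &Player::onPrepare};
    e.fire();  // prints "onPrepare fired"
    return 0;
}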

So we need to look at AwesomePlayer::onPrepareAsyncEvent. It calls beginPrepareAsync_l, which in turn calls initAudioDecoder() (and initVideoDecoder()) to initialize the decoders.

void AwesomePlayer::onPrepareAsyncEvent() {
Mutex::Autolock autoLock(mLock);
beginPrepareAsync_l();
}
void AwesomePlayer::beginPrepareAsync_l() {
if (mFlags & PREPARE_CANCELLED) {
ALOGI("prepare was cancelled before doing anything");
abortPrepare(UNKNOWN_ERROR);
return;
}
if (mUri.size() > 0) {
status_t err = finishSetDataSource_l();
if (err != OK) {
abortPrepare(err);
return;
}
}
if (mVideoTrack != NULL && mVideoSource == NULL) {
status_t err = initVideoDecoder();
if (err != OK) {
abortPrepare(err);
return;
}
}
if (mAudioTrack != NULL && mAudioSource == NULL) {
status_t err = initAudioDecoder();
if (err != OK) {
abortPrepare(err);
return;
}
}
modifyFlags(PREPARING_CONNECTED, SET);
if (isStreamingHTTP()) {
postBufferingEvent_l();
} else {
finishAsyncPrepare_l();
}
}

The whole flow is illustrated in the figure below:

Next we focus on how the decoder is created; OMXCodec::Create is called first.

status_t AwesomePlayer::initAudioDecoder() {
ATRACE_CALL();
sp<MetaData> meta = mAudioTrack->getFormat();
const char *mime;
CHECK(meta->findCString(kKeyMIMEType, &mime));
audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
if (mAudioSink != NULL) {
streamType = mAudioSink->getAudioStreamType();
}
mOffloadAudio = canOffloadStream(meta, (mVideoSource != NULL),
isStreamingHTTP(), streamType);
if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
mAudioSource = mAudioTrack;
} else {
mOmxSource = OMXCodec::Create(
mClient.interface(), mAudioTrack->getFormat(),
false, // createEncoder
mAudioTrack);
if (mOffloadAudio) {
mAudioSource = mAudioTrack;
} else {
mAudioSource = mOmxSource;
}
}
if (mAudioSource != NULL) {
int64_t durationUs;
if (mAudioTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
Mutex::Autolock autoLock(mMiscStateLock);
if (mDurationUs < 0 || durationUs > mDurationUs) {
mDurationUs = durationUs;
}
}
status_t err = mAudioSource->start();
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_QCELP)) {
// For legacy reasons we're simply going to ignore the absence
// of an audio decoder for QCELP instead of aborting playback
// altogether.
return OK;
}
if (mAudioSource != NULL) {
Mutex::Autolock autoLock(mStatsLock);
TrackStat *stat = &mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);
const char *component;
if (!mAudioSource->getFormat()
->findCString(kKeyDecoderComponent, &component)) {
component = "none";
}
stat->mDecoderName = component;
}
return mAudioSource != NULL ? OK : UNKNOWN_ERROR;
}

Before creating a decoder we need the mimeType of the file being played, then look up matching decoders by that mimeType. An OMXCodecObserver is created and attached to each node allocated via allocateNode, and the resulting codec is returned.

// static
sp<MediaSource> OMXCodec::Create(
const sp<IOMX> &omx,
const sp<MetaData> &meta, bool createEncoder,
const sp<MediaSource> &source,
const char *matchComponentName,
uint32_t flags,
const sp<ANativeWindow> &nativeWindow) {
// fetch the mimeType
const char *mime;
bool success = meta->findCString(kKeyMIMEType, &mime);
Vector<CodecNameAndQuirks> matchingCodecs;
// find matching decoders; the candidates are loaded from /etc/media_codecs.xml and /etc/media_codecs_performance.xml
findMatchingCodecs(mime, createEncoder, matchComponentName, flags, &matchingCodecs);
// matches are collected into matchingCodecs as a list of CodecNameAndQuirks entries

// create the OMXCodecObserver
sp<OMXCodecObserver> observer = new OMXCodecObserver;
IOMX::node_id node = 0;

for (size_t i = 0; i < matchingCodecs.size(); ++i) {
const char *componentNameBase = matchingCodecs[i].mName.string();
uint32_t quirks = matchingCodecs[i].mQuirks;
const char *componentName = componentNameBase;
// allocate a node for the matched component
status_t err = omx->allocateNode(componentName, observer, &node);
if (err == OK) {
sp<OMXCodec> codec = new OMXCodec(
omx, node, quirks, flags,
createEncoder, mime, componentName,
source, nativeWindow);
observer->setCodec(codec);
err = codec->configureCodec(meta);
if (err == OK) {
return codec;
}
}
}
return NULL;
}

Decoder matching is done by findMatchingCodecs. It first obtains the list of available codecs, parsed mainly from /etc/media_codecs.xml, then uses findCodecByType to find the decoders able to handle the current file type and adds them to matchingCodecs, which is returned as the set of decoders supporting the file being played.

// static
void OMXCodec::findMatchingCodecs(
const char *mime,
bool createEncoder, const char *matchComponentName,
uint32_t flags,
Vector<CodecNameAndQuirks> *matchingCodecs) {
matchingCodecs->clear();
// obtain the list of available codecs
const sp<IMediaCodecList> list = MediaCodecList::getInstance();
size_t index = 0;
for (;;) {
// use findCodecByType to check whether a decoder exists for the current type
ssize_t matchIndex =
list->findCodecByType(mime, createEncoder, index);
if (matchIndex < 0) {
break;
}
index = matchIndex + 1;
const sp<MediaCodecInfo> info = list->getCodecInfo(matchIndex);
const char *componentName = info->getCodecName();
if (matchComponentName && strcmp(componentName, matchComponentName)) {
continue;
}
// When requesting software-only codecs, only push software codecs
// When requesting hardware-only codecs, only push hardware codecs
// When there is request neither for software-only nor for
// hardware-only codecs, push all codecs
if (((flags & kSoftwareCodecsOnly) && IsSoftwareCodec(componentName)) ||
((flags & kHardwareCodecsOnly) && !IsSoftwareCodec(componentName)) ||
(!(flags & (kSoftwareCodecsOnly | kHardwareCodecsOnly)))) {
// add the matching codec to matchingCodecs
ssize_t index = matchingCodecs->add();
CodecNameAndQuirks *entry = &matchingCodecs->editItemAt(index);
entry->mName = String8(componentName);
entry->mQuirks = getComponentQuirks(info);

ALOGV("matching '%s' quirks 0x%08x",
entry->mName.string(), entry->mQuirks);
}
}
// sort the codecs (software codecs first if requested)
if (flags & kPreferSoftwareCodecs) {
matchingCodecs->sort(CompareSoftwareCodecsFirst);
}
}
// static
sp<IMediaCodecList> MediaCodecList::getInstance() {
Mutex::Autolock _l(sRemoteInitMutex);
if (sRemoteList == NULL) {
sp<IBinder> binder =
defaultServiceManager()->getService(String16("media.player"));
sp<IMediaPlayerService> service =
interface_cast<IMediaPlayerService>(binder);
if (service.get() != NULL) {
sRemoteList = service->getCodecList();
}
if (sRemoteList == NULL) {
// if failed to get remote list, create local list
sRemoteList = getLocalInstance();
}
}
return sRemoteList;
}
sp<IMediaCodecList> MediaPlayerService::getCodecList() const {
return MediaCodecList::getLocalInstance();
}
// static
sp<IMediaCodecList> MediaCodecList::getLocalInstance() {
Mutex::Autolock autoLock(sInitMutex);
if (sCodecList == NULL) {
MediaCodecList *codecList = new MediaCodecList;
if (codecList->initCheck() == OK) {
sCodecList = codecList;
} else {
delete codecList;
}
}
return sCodecList;
}

MediaCodecList::MediaCodecList()
: mInitCheck(NO_INIT),
mUpdate(false),
mGlobalSettings(new AMessage()) {
parseTopLevelXMLFile("/etc/media_codecs.xml");
parseTopLevelXMLFile("/etc/media_codecs_performance.xml", true);
parseTopLevelXMLFile(kProfilingResults, true/* ignore_errors */);
}
ssize_t MediaCodecList::findCodecByType(
const char *type, bool encoder, size_t startIndex) const {
static const char *advancedFeatures[] = {
"feature-secure-playback",
"feature-tunneled-playback",
};
size_t numCodecs = mCodecInfos.size();
// iterate over every entry in the codec list
for (; startIndex < numCodecs; ++startIndex) {
const MediaCodecInfo &info = *mCodecInfos.itemAt(startIndex).get();
// skip entries whose encoder/decoder kind does not match
if (info.isEncoder() != encoder) {
continue;
}
// check whether it supports the current mimeType
sp<MediaCodecInfo::Capabilities> capabilities = info.getCapabilitiesFor(type);
if (capabilities == NULL) {
continue;
}
const sp<AMessage> &details = capabilities->getDetails();
int32_t required;
bool isAdvanced = false;
for (size_t ix = 0; ix < ARRAY_SIZE(advancedFeatures); ix++) {
if (details->findInt32(advancedFeatures[ix], &required) &&
required != 0) {
isAdvanced = true;
break;
}
}
if (!isAdvanced) {
return startIndex;
}
}
return -ENOENT;
}

The steps above only filter out the decoder entries able to handle the current file type; nothing has been instantiated yet. Instantiation happens in the following snippet:

// allocate a node
status_t err = omx->allocateNode(componentName, observer, &node);
// create the decoder instance
sp<OMXCodec> codec = new OMXCodec(omx, node, quirks, flags,
createEncoder, mime, componentName,
source, nativeWindow);
// hand the instance to the observer
observer->setCodec(codec);
// configure the new decoder instance with meta
err = codec->configureCodec(meta);

allocateNode first creates an OMXNodeInstance object, then calls
makeComponentInstance to create the actual component instance.

status_t OMX::allocateNode(
const char *name, const sp<IOMXObserver> &observer, node_id *node) {
Mutex::Autolock autoLock(mLock);
*node = 0;
OMXNodeInstance *instance = new OMXNodeInstance(this, observer, name);
OMX_COMPONENTTYPE *handle;
OMX_ERRORTYPE err = mMaster->makeComponentInstance(name, &OMXNodeInstance::kCallbacks,instance, &handle);
*node = makeNodeID(instance);
mDispatchers.add(*node, new CallbackDispatcher(instance));
instance->setHandle(*node, handle);
mLiveNodes.add(IInterface::asBinder(observer), instance);
IInterface::asBinder(observer)->linkToDeath(this);
return OK;
}

The OMXNodeInstance constructor is simple enough that we skip the details.

OMXNodeInstance::OMXNodeInstance(
OMX *owner, const sp<IOMXObserver> &observer, const char *name)
: mOwner(owner),
mNodeID(0),
mHandle(NULL),
mObserver(observer),
mDying(false),
mBufferIDCount(0)
{
mName = ADebug::GetDebugName(name);
DEBUG = ADebug::GetDebugLevelFromProperty(name, "debug.stagefright.omx-debug");
ALOGV("debug level for %s is %d", name, DEBUG);
DEBUG_BUMP = DEBUG;
mNumPortBuffers[0] = 0;
mNumPortBuffers[1] = 0;
mDebugLevelBumpPendingBuffers[0] = 0;
mDebugLevelBumpPendingBuffers[1] = 0;
mMetadataType[0] = kMetadataBufferTypeInvalid;
mMetadataType[1] = kMetadataBufferTypeInvalid;
}

makeComponentInstance first calls mPluginByComponentName.indexOfKey(String8(name)) to find the index of the named decoder, then mPluginByComponentName.valueAt(index) to get the plugin that owns it. mPluginByComponentName is populated when AwesomePlayer is created and holds the supported vendor plugins as well as the soft plugin. The plugin's makeComponentInstance then creates the instance, which is added to mPluginByInstance:

OMX_ERRORTYPE OMXMaster::makeComponentInstance(
const char *name,
const OMX_CALLBACKTYPE *callbacks,
OMX_PTR appData,
OMX_COMPONENTTYPE **component) {
Mutex::Autolock autoLock(mLock);
*component = NULL;
// look up the index of the named component via indexOfKey(String8(name)),
// then valueAt(index) returns the owning plugin. mPluginByComponentName is
// populated when AwesomePlayer is created: the vendor plugins plus the soft plugin.
ssize_t index = mPluginByComponentName.indexOfKey(String8(name));
OMXPluginBase *plugin = mPluginByComponentName.valueAt(index);
// call the plugin's makeComponentInstance to create the instance,
// then record it in mPluginByInstance
OMX_ERRORTYPE err =
plugin->makeComponentInstance(name, callbacks, appData, component);
if (err != OMX_ErrorNone) {
return err;
}
mPluginByInstance.add(*component, plugin);
return err;
}

Let's take the software decoders as the example for makeComponentInstance:
SoftOMXPlugin::makeComponentInstance looks the decoder up in the kComponents array, a struct array holding the component name, the library-name suffix, and its role (encoder or decoder). From the suffix it builds the library file name, dlopens that library, and calls its createSoftOMXComponent to create the soft decoder.

OMX_ERRORTYPE SoftOMXPlugin::makeComponentInstance(
const char *name,
const OMX_CALLBACKTYPE *callbacks,
OMX_PTR appData,
OMX_COMPONENTTYPE **component) {
for (size_t i = 0; i < kNumComponents; ++i) {
// find the matching entry in the kComponents array
if (strcmp(name, kComponents[i].mName)) {
continue;
}
// build the library file name from the suffix; for MP3 it is:
// libstagefright_soft_mp3dec.so
AString libName = "libstagefright_soft_";
libName.append(kComponents[i].mLibNameSuffix);
libName.append(".so");
// open the library and call its createSoftOMXComponent
// to create the matching decoder
void *libHandle = dlopen(libName.c_str(), RTLD_NOW);
typedef SoftOMXComponent *(*CreateSoftOMXComponentFunc)(
const char *, const OMX_CALLBACKTYPE *,
OMX_PTR, OMX_COMPONENTTYPE **);
CreateSoftOMXComponentFunc createSoftOMXComponent =
(CreateSoftOMXComponentFunc)dlsym(
libHandle,
"_Z22createSoftOMXComponentPKcPK16OMX_CALLBACKTYPE"
"PvPP17OMX_COMPONENTTYPE");
sp<SoftOMXComponent> codec =
(*createSoftOMXComponent)(name, callbacks, appData, component);
OMX_ERRORTYPE err = codec->initCheck();
codec->incStrong(this);
codec->setLibHandle(libHandle);
return OMX_ErrorNone;
}
return OMX_ErrorInvalidComponentName;
}
static const struct {
const char *mName;
const char *mLibNameSuffix;
const char *mRole;
} kComponents[] = {
{ "OMX.google.aac.decoder", "aacdec", "audio_decoder.aac" },
{ "OMX.google.aac.encoder", "aacenc", "audio_encoder.aac" },
{ "OMX.google.amrnb.decoder", "amrdec", "audio_decoder.amrnb" },
{ "OMX.google.amrnb.encoder", "amrnbenc", "audio_encoder.amrnb" },
{ "OMX.google.amrwb.decoder", "amrdec", "audio_decoder.amrwb" },
{ "OMX.google.amrwb.encoder", "amrwbenc", "audio_encoder.amrwb" },
{ "OMX.google.h264.decoder", "avcdec", "video_decoder.avc" },
{ "OMX.google.h264.encoder", "avcenc", "video_encoder.avc" },
{ "OMX.google.hevc.decoder", "hevcdec", "video_decoder.hevc" },
{ "OMX.google.g711.alaw.decoder", "g711dec", "audio_decoder.g711alaw" },
{ "OMX.google.g711.mlaw.decoder", "g711dec", "audio_decoder.g711mlaw" },
{ "OMX.google.mpeg2.decoder", "mpeg2dec", "video_decoder.mpeg2" },
{ "OMX.google.h263.decoder", "mpeg4dec", "video_decoder.h263" },
{ "OMX.google.h263.encoder", "mpeg4enc", "video_encoder.h263" },
{ "OMX.google.mpeg4.decoder", "mpeg4dec", "video_decoder.mpeg4" },
{ "OMX.google.mpeg4.encoder", "mpeg4enc", "video_encoder.mpeg4" },
{ "OMX.google.mp3.decoder", "mp3dec", "audio_decoder.mp3" },
{ "OMX.google.vorbis.decoder", "vorbisdec", "audio_decoder.vorbis" },
{ "OMX.google.opus.decoder", "opusdec", "audio_decoder.opus" },
{ "OMX.google.vp8.decoder", "vpxdec", "video_decoder.vp8" },
{ "OMX.google.vp9.decoder", "vpxdec", "video_decoder.vp9" },
{ "OMX.google.vp8.encoder", "vpxenc", "video_encoder.vp8" },
{ "OMX.google.raw.decoder", "rawdec", "audio_decoder.raw" },
{ "OMX.google.flac.encoder", "flacenc", "audio_encoder.flac" },
{ "OMX.google.gsm.decoder", "gsmdec", "audio_decoder.gsm" },
};

Every soft decoder exports a createSoftOMXComponent. Taking the MP3 soft decoder as the example, it simply constructs an android::SoftMP3:

android::SoftOMXComponent *createSoftOMXComponent(
const char *name, const OMX_CALLBACKTYPE *callbacks,
OMX_PTR appData, OMX_COMPONENTTYPE **component) {
return new android::SoftMP3(name, callbacks, appData, component);
}
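The dlopen/dlsym dance above (including the mangled createSoftOMXComponent symbol name) is the generic plugin-loading pattern. A standalone sketch of the same idea, with a hypothetical library and symbol name (compile with -ldl; this is illustrative, not framework code):

#include <dlfcn.h>
#include <cstdio>

// Hypothetical factory signature exported by a plugin library.
typedef void *(*CreateComponentFunc)(const char *name);

int main() {
    // Library and symbol names are made up for illustration.
    void *handle = dlopen("libexample_plugin.so", RTLD_NOW);
    if (handle == nullptr) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    CreateComponentFunc create =
        (CreateComponentFunc)dlsym(handle, "createComponent");
    if (create == nullptr) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    void *component = create("example.decoder");
    printf("component created at %p\n", component);
    dlclose(handle);
    return 0;
}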

If your head is spinning a little at this point, here is a short summary:

When creating a decoder instance we pass in the media file's mimeType and match it against decoders that can handle that format. Matched against what? Against the data parsed from /etc/media_codecs.xml and /etc/media_codecs_performance.xml, which record every codec the platform supports, each wrapped in a MediaCodecInfo object.
The matches end up in the matchingCodecs list. For each entry we take its component name and look up the owning plugin in mPluginByComponentName; for MP3 that is SoftOMXPlugin. The plugin then loads the right library and calls its createSoftOMXComponent to create the SoftMP3 component, which after initialization is added to mPluginByInstance.

The MP3 soft decoder's constructor (shown below) does three important things:

  1. constructs the SimpleSoftOMXComponent base
  2. initPorts()
  3. initDecoder()
SoftMP3::SoftMP3(
const char *name,
const OMX_CALLBACKTYPE *callbacks,
OMX_PTR appData,
OMX_COMPONENTTYPE **component)
: SimpleSoftOMXComponent(name, callbacks, appData, component),
mConfig(new tPVMP3DecoderExternal),
mDecoderBuf(NULL),
mAnchorTimeUs(0),
mNumFramesOutput(0),
mNumChannels(2),
mSamplingRate(44100),
mSignalledError(false),
mSawInputEos(false),
mSignalledOutputEos(false),
mOutputPortSettingsChange(NONE) {
initPorts();
initDecoder();
}
The SimpleSoftOMXComponent constructor mainly creates the SoftOMXComponent base, initializes an mHandler and an mLooper, registers the handler with the looper, and starts the looper.
The handler processes events such as kWhatEmptyThisBuffer, kWhatFillThisBuffer, and kWhatSendCommand; when these fire they are delivered to SimpleSoftOMXComponent::onMessageReceived for handling.
SimpleSoftOMXComponent::SimpleSoftOMXComponent(
const char *name,
const OMX_CALLBACKTYPE *callbacks,
OMX_PTR appData,
OMX_COMPONENTTYPE **component)
: SoftOMXComponent(name, callbacks, appData, component),
mLooper(new ALooper),
mHandler(new AHandlerReflector<SimpleSoftOMXComponent>(this)),
mState(OMX_StateLoaded),
mTargetState(OMX_StateLoaded) {
mLooper->setName(name);
mLooper->registerHandler(mHandler);
mLooper->start(
false, // runOnCallingThread
false, // canCallJava
ANDROID_PRIORITY_FOREGROUND);
}
Now SoftOMXComponent::SoftOMXComponent. It mainly news up an OMX_COMPONENTTYPE, a struct defined in
frameworks/native/include/media/openmax/OMX_Component.h:
SoftOMXComponent::SoftOMXComponent(
const char *name,
const OMX_CALLBACKTYPE *callbacks,
OMX_PTR appData,
OMX_COMPONENTTYPE **component)
: mName(name),
mCallbacks(callbacks),
mComponent(new OMX_COMPONENTTYPE),
mLibHandle(NULL) {
mComponent->nSize = sizeof(*mComponent);
mComponent->nVersion.s.nVersionMajor = 1;
mComponent->nVersion.s.nVersionMinor = 0;
mComponent->nVersion.s.nRevision = 0;
mComponent->nVersion.s.nStep = 0;
mComponent->pComponentPrivate = this;
mComponent->pApplicationPrivate = appData;
mComponent->GetComponentVersion = NULL;
mComponent->SendCommand = SendCommandWrapper;
mComponent->GetParameter = GetParameterWrapper;
mComponent->SetParameter = SetParameterWrapper;
mComponent->GetConfig = GetConfigWrapper;
mComponent->SetConfig = SetConfigWrapper;
mComponent->GetExtensionIndex = GetExtensionIndexWrapper;
mComponent->GetState = GetStateWrapper;
mComponent->ComponentTunnelRequest = NULL;
mComponent->UseBuffer = UseBufferWrapper;
mComponent->AllocateBuffer = AllocateBufferWrapper;
mComponent->FreeBuffer = FreeBufferWrapper;
mComponent->EmptyThisBuffer = EmptyThisBufferWrapper;
mComponent->FillThisBuffer = FillThisBufferWrapper;
mComponent->SetCallbacks = NULL;
mComponent->ComponentDeInit = NULL;
mComponent->UseEGLImage = NULL;
mComponent->ComponentRoleEnum = NULL;
*component = mComponent;
}
The callbacks referenced in the constructor above are defined as:
// static
OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks = {
&OnEvent, &OnEmptyBufferDone, &OnFillBufferDone
};
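Note that an OMX component exposes its entry points as a plain C function table rather than C++ virtual methods, with the object pointer smuggled through pComponentPrivate and static wrappers forwarding into the C++ instance. A minimal standalone parallel of that wrapper pattern (my own sketch, not framework code):

#include <cstdio>

struct Component;
typedef int (*SendCommandFn)(Component *, int cmd);

// Plain C-style "vtable" struct, like OMX_COMPONENTTYPE.
struct Component {
    void *pComponentPrivate;   // points back at the C++ object
    SendCommandFn SendCommand;
};

struct MyCodec {
    int handleCommand(int cmd) { printf("cmd %d\n", cmd); return 0; }
    // Static wrapper, like SendCommandWrapper above.
    static int SendCommandWrapper(Component *c, int cmd) {
        return static_cast<MyCodec *>(c->pComponentPrivate)->handleCommand(cmd);
    }
};

int main() {
    MyCodec codec;
    Component comp{&codec, &MyCodec::SendCommandWrapper};
    comp.SendCommand(&comp, 1);  // dispatches into MyCodec::handleCommand
    return 0;
}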
initPorts creates two ports: an input port at index 0 and an output port at index 1.
void SoftMP3::initPorts() {
OMX_PARAM_PORTDEFINITIONTYPE def;
InitOMXParams(&def);
def.nPortIndex = 0;
def.eDir = OMX_DirInput;
def.nBufferCountMin = kNumBuffers;
def.nBufferCountActual = def.nBufferCountMin;
def.nBufferSize = 8192;
def.bEnabled = OMX_TRUE;
def.bPopulated = OMX_FALSE;
def.eDomain = OMX_PortDomainAudio;
def.bBuffersContiguous = OMX_FALSE;
def.nBufferAlignment = 1;
def.format.audio.cMIMEType = const_cast<char *>(MEDIA_MIMETYPE_AUDIO_MPEG);
def.format.audio.pNativeRender = NULL;
def.format.audio.bFlagErrorConcealment = OMX_FALSE;
def.format.audio.eEncoding = OMX_AUDIO_CodingMP3;
addPort(def);

def.nPortIndex = 1;
def.eDir = OMX_DirOutput;
def.nBufferCountMin = kNumBuffers;
def.nBufferCountActual = def.nBufferCountMin;
def.nBufferSize = kOutputBufferSize;
def.bEnabled = OMX_TRUE;
def.bPopulated = OMX_FALSE;
def.eDomain = OMX_PortDomainAudio;
def.bBuffersContiguous = OMX_FALSE;
def.nBufferAlignment = 2;
def.format.audio.cMIMEType = const_cast<char *>("audio/raw");
def.format.audio.pNativeRender = NULL;
def.format.audio.bFlagErrorConcealment = OMX_FALSE;
def.format.audio.eEncoding = OMX_AUDIO_CodingPCM;
addPort(def);
}
initDecoder is then called to initialize the decoder itself:
void SoftMP3::initDecoder() {
mConfig->equalizerType = flat;
mConfig->crcEnabled = false;
uint32_t memRequirements = pvmp3_decoderMemRequirements();
mDecoderBuf = malloc(memRequirements);
pvmp3_InitDecoder(mConfig, mDecoderBuf);
mIsFirst = true;
}
Let's step back and recap: we have traced how the incoming mimetype leads to a component being created. Personally I think of a component as a decoding unit: a core decoder plus one input port and one output port. The work above initializes that core decoder and configures its input and output ports.
Also keep an eye on the OMX_CALLBACKTYPE and OMX_COMPONENTTYPE objects as well as mHandler. Once more: no diagram, no truth!
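Since the figure is not reproduced here, a rough text sketch of what has been built (my own summary of the code above):

// OMX_COMPONENTTYPE: C function table (SendCommand, EmptyThisBuffer, ...)
//   pComponentPrivate ---> SoftMP3 : SimpleSoftOMXComponent : SoftOMXComponent
//                            |- input  port 0: audio/mpeg, kNumBuffers x 8192-byte buffers
//                            |- output port 1: audio/raw (PCM), kNumBuffers x kOutputBufferSize
//                            |- pvmp3 decoder state: mConfig, mDecoderBuf
//                            `- mLooper/mHandler: dispatches kWhatEmptyThisBuffer etc.
// OMX_CALLBACKTYPE kCallbacks = { OnEvent, OnEmptyBufferDone, OnFillBufferDone }
//   (events flow back up to OMXCodec via the OMXCodecObserver)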

Back to allocateNode: once the decoder has been created, makeNodeID assigns the node an ID and records it in mNodeIDToInstance; each instance maps to one id.

OMX::node_id OMX::makeNodeID(OMXNodeInstance *instance) {
// mLock is already held.
node_id node = (node_id)++mNodeCounter;
mNodeIDToInstance.add(node, instance);
return node;
}

Next an OMXCodec is created. Its constructor calls setComponentRole, which derives the role name from the mimeType and isEncoder and configures it on the component.

OMXCodec::OMXCodec(
const sp<IOMX> &omx, IOMX::node_id node,
uint32_t quirks, uint32_t flags,
bool isEncoder,
const char *mime,
const char *componentName,
const sp<MediaSource> &source,
const sp<ANativeWindow> &nativeWindow)
: mOMX(omx),
// ... (remaining initializers elided)
mSource(source) {
mPortStatus[kPortIndexInput] = ENABLED;
mPortStatus[kPortIndexOutput] = ENABLED;
setComponentRole();
}
void OMXCodec::setComponentRole(
const sp<IOMX> &omx, IOMX::node_id node, bool isEncoder,
const char *mime) {
struct MimeToRole {
const char *mime;
const char *decoderRole;
const char *encoderRole;
};

static const MimeToRole kMimeToRole[] = {
{ MEDIA_MIMETYPE_AUDIO_MPEG,
"audio_decoder.mp3", "audio_encoder.mp3" },
{ MEDIA_MIMETYPE_AUDIO_MPEG_LAYER_I,
"audio_decoder.mp1", "audio_encoder.mp1" },
{ MEDIA_MIMETYPE_AUDIO_MPEG_LAYER_II,
"audio_decoder.mp2", "audio_encoder.mp2" },
{ MEDIA_MIMETYPE_AUDIO_AMR_NB,
"audio_decoder.amrnb", "audio_encoder.amrnb" },
{ MEDIA_MIMETYPE_AUDIO_AMR_WB,
"audio_decoder.amrwb", "audio_encoder.amrwb" },
{ MEDIA_MIMETYPE_AUDIO_AAC,
"audio_decoder.aac", "audio_encoder.aac" },
{ MEDIA_MIMETYPE_AUDIO_VORBIS,
"audio_decoder.vorbis", "audio_encoder.vorbis" },
{ MEDIA_MIMETYPE_AUDIO_OPUS,
"audio_decoder.opus", "audio_encoder.opus" },
{ MEDIA_MIMETYPE_AUDIO_G711_MLAW,
"audio_decoder.g711mlaw", "audio_encoder.g711mlaw" },
{ MEDIA_MIMETYPE_AUDIO_G711_ALAW,
"audio_decoder.g711alaw", "audio_encoder.g711alaw" },
{ MEDIA_MIMETYPE_VIDEO_AVC,
"video_decoder.avc", "video_encoder.avc" },
{ MEDIA_MIMETYPE_VIDEO_HEVC,
"video_decoder.hevc", "video_encoder.hevc" },
{ MEDIA_MIMETYPE_VIDEO_MPEG4,
"video_decoder.mpeg4", "video_encoder.mpeg4" },
{ MEDIA_MIMETYPE_VIDEO_H263,
"video_decoder.h263", "video_encoder.h263" },
{ MEDIA_MIMETYPE_VIDEO_VP8,
"video_decoder.vp8", "video_encoder.vp8" },
{ MEDIA_MIMETYPE_VIDEO_VP9,
"video_decoder.vp9", "video_encoder.vp9" },
{ MEDIA_MIMETYPE_AUDIO_RAW,
"audio_decoder.raw", "audio_encoder.raw" },
{ MEDIA_MIMETYPE_AUDIO_FLAC,
"audio_decoder.flac", "audio_encoder.flac" },
{ MEDIA_MIMETYPE_AUDIO_MSGSM,
"audio_decoder.gsm", "audio_encoder.gsm" },
{ MEDIA_MIMETYPE_VIDEO_MPEG2,
"video_decoder.mpeg2", "video_encoder.mpeg2" },
{ MEDIA_MIMETYPE_AUDIO_AC3,
"audio_decoder.ac3", "audio_encoder.ac3" },
};

static const size_t kNumMimeToRole =
sizeof(kMimeToRole) / sizeof(kMimeToRole[0]);

size_t i;
for (i = 0; i < kNumMimeToRole; ++i) {
if (!strcasecmp(mime, kMimeToRole[i].mime)) {
break;
}
}

if (i == kNumMimeToRole) {
return;
}

const char *role =
isEncoder ? kMimeToRole[i].encoderRole
: kMimeToRole[i].decoderRole;

if (role != NULL) {
OMX_PARAM_COMPONENTROLETYPE roleParams;
InitOMXParams(&roleParams);

strncpy((char *)roleParams.cRole,
role, OMX_MAX_STRINGNAME_SIZE - 1);

roleParams.cRole[OMX_MAX_STRINGNAME_SIZE - 1] = '\0';

status_t err = omx->setParameter(
node, OMX_IndexParamStandardComponentRole,
&roleParams, sizeof(roleParams));

if (err != OK) {
ALOGW("Failed to set standard component role '%s'.", role);
}
}
}

Having seen how the decoder is created, we continue with the call in initAudioDecoder:

status_t err = mAudioSource->start(). First we need to pin down where mAudioSource comes from:
status_t AwesomePlayer::initAudioDecoder() {
if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
mAudioSource = mAudioTrack;
} else {
mOmxSource = OMXCodec::Create(
mClient.interface(), mAudioTrack->getFormat(),
false, // createEncoder
mAudioTrack);
if (mOffloadAudio) {
mAudioSource = mAudioTrack;
} else {
mAudioSource = mOmxSource;
}
}
status_t err = mAudioSource->start();
}

The snippet shows that, in the non-offload case, mAudioSource is mOmxSource, i.e. the OMXCodec just created; OMXCodec::Create returns an OMXCodec object. So next we look at OMXCodec's start method.

status_t OMXCodec::start(MetaData *meta) {
Mutex::Autolock autoLock(mLock);
sp<MetaData> params = new MetaData;
// ...
// Decoder case
if ((err = mSource->start(params.get())) != OK) {
return err;
}
return init();
}

This method calls mSource's start method and then init(); we analyze both parts below. As before, we first need to establish what mSource actually is, which means tracing it back to its origin.

OMXCodec::OMXCodec(
const sp<IOMX> &omx, IOMX::node_id node,
uint32_t quirks, uint32_t flags,
bool isEncoder,
const char *mime,
const char *componentName,
const sp<MediaSource> &source,
const sp<ANativeWindow> &nativeWindow)
: mOMX(omx),
// ...
mSource(source)
// ... (remaining initializers elided)
{
mPortStatus[kPortIndexInput] = ENABLED;
mPortStatus[kPortIndexOutput] = ENABLED;
setComponentRole();
}
sp<MediaSource> OMXCodec::Create(
const sp<IOMX> &omx,
const sp<MetaData> &meta, bool createEncoder,
const sp<MediaSource> &source,
const char *matchComponentName,
uint32_t flags,
const sp<ANativeWindow> &nativeWindow) {
sp<OMXCodec> codec = new OMXCodec(
omx, node, quirks, flags,
createEncoder, mime, componentName,
source, nativeWindow);
}

status_t AwesomePlayer::initAudioDecoder() {
mOmxSource = OMXCodec::Create(
mClient.interface(), mAudioTrack->getFormat(),
false, // createEncoder
mAudioTrack);

}
void AwesomePlayer::setAudioSource(sp<MediaSource> source) {
CHECK(source != NULL);
mAudioTrack = source;
}

Here the MediaExtractor demuxes the media file into its audio and video tracks:

status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
for (size_t i = 0; i < extractor->countTracks(); ++i) {
// ... (mime is read from the track's MetaData; elided)
if (!haveVideo && !strncasecmp(mime.string(), "video/", 6)) {
setVideoSource(extractor->getTrack(i));
} else if (!haveAudio && !strncasecmp(mime.string(), "audio/", 6)) {
setAudioSource(extractor->getTrack(i));
}
}
return OK;
}

That is the whole call chain; the ultimate source is extractor->getTrack. Assuming the file being played is an MP3, the extractor is MP3Extractor, so mAudioTrack is the return value of MP3Extractor::getTrack, i.e. an MP3Source. With that settled we can continue with the prepare flow.

sp<MediaSource> MP3Extractor::getTrack(size_t index) {
if (mInitCheck != OK || index != 0) {
return NULL;
}
return new MP3Source(
mMeta, mDataSource, mFirstFramePos, mFixedHeader,
mSeeker);
}

MP3Source::start mainly creates a MediaBuffer, adds it to a MediaBufferGroup via add_buffer, and resets the related bookkeeping to its initial state.

status_t MP3Source::start(MetaData *) {
CHECK(!mStarted);
mGroup = new MediaBufferGroup;
mGroup->add_buffer(new MediaBuffer(kMaxFrameSize));
mCurrentPos = mFirstFramePos;
mCurrentTimeUs = 0;
mBasisTimeUs = mCurrentTimeUs;
mSamplesRead = 0;
mStarted = true;
return OK;
}

Next, let's look at the init method.

status_t OMXCodec::init() {
// mLock is held.
status_t err;
if (!(mQuirks & kRequiresLoadedToIdleAfterAllocation)) {
err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
setState(LOADED_TO_IDLE);
}
err = allocateBuffers();
if (mQuirks & kRequiresLoadedToIdleAfterAllocation) {
err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
setState(LOADED_TO_IDLE);
}
while (mState != EXECUTING && mState != ERROR) {
mAsyncCompletion.wait(mLock);
}
return mState == ERROR ? UNKNOWN_ERROR : OK;
}

init mainly calls allocateBuffers to allocate buffers for the input and output ports, then calls mOMX->sendCommand to push the state change down to the component. First, allocateBuffers:

status_t OMXCodec::allocateBuffers() {
status_t err = allocateBuffersOnPort(kPortIndexInput);
if (err != OK) {
return err;
}
return allocateBuffersOnPort(kPortIndexOutput);
}

allocateBuffersOnPort allocates buffer space of the required size for the given port (input or output) and tracks it centrally:

status_t OMXCodec::allocateBuffersOnPort(OMX_U32 portIndex) {
if (mNativeWindow != NULL && portIndex == kPortIndexOutput) {
return allocateOutputBuffersFromNativeWindow();
}
status_t err = OK;
if ((mFlags & kStoreMetaDataInVideoBuffers)
&& portIndex == kPortIndexInput) {
err = mOMX->storeMetaDataInBuffers(mNode, kPortIndexInput, OMX_TRUE);
}

OMX_PARAM_PORTDEFINITIONTYPE def;
//Initialize def before querying it
InitOMXParams(&def);
def.nPortIndex = portIndex;
//Query the port definition parameters for the given port
err = mOMX->getParameter(mNode, OMX_IndexParamPortDefinition, &def, sizeof(def));

CODEC_LOGV("allocating %u buffers of size %u on %s port",
def.nBufferCountActual, def.nBufferSize,
portIndex == kPortIndexInput ? "input" : "output");
//Allocate def.nBufferCountActual buffers of def.nBufferSize bytes each for this port;
//first sanity-check def.nBufferSize and def.nBufferCountActual against overflow
if (def.nBufferSize != 0 && def.nBufferCountActual > SIZE_MAX / def.nBufferSize) {
return BAD_VALUE;
}
//Allocate one arena of totalSize bytes up front
size_t totalSize = def.nBufferCountActual * def.nBufferSize;
mDealer[portIndex] = new MemoryDealer(totalSize, "OMXCodec");
for (OMX_U32 i = 0; i < def.nBufferCountActual; ++i) {
//Carve a def.nBufferSize slice out of the arena
sp<IMemory> mem = mDealer[portIndex]->allocate(def.nBufferSize);
BufferInfo info;
info.mData = NULL;
info.mSize = def.nBufferSize;
IOMX::buffer_id buffer;
if (portIndex == kPortIndexInput
&& ((mQuirks & kRequiresAllocateBufferOnInputPorts)
|| (mFlags & kUseSecureInputBuffers))) {
if (mOMXLivesLocally) {
//Drop our reference to the IMemory slice; the component allocates its own backing memory
mem.clear();
//Allocate the buffer on the component's input port
err = mOMX->allocateBuffer(
mNode, portIndex, def.nBufferSize, &buffer,
&info.mData);
} else {
err = mOMX->allocateBufferWithBackup(
mNode, portIndex, mem, &buffer, mem->size());
}
} else if (portIndex == kPortIndexOutput
&& (mQuirks & kRequiresAllocateBufferOnOutputPorts)) {
if (mOMXLivesLocally) {
//Drop our reference to the IMemory slice; the component allocates its own backing memory
mem.clear();
//Allocate the buffer on the component's output port
err = mOMX->allocateBuffer(
mNode, portIndex, def.nBufferSize, &buffer,
&info.mData);
} else {
err = mOMX->allocateBufferWithBackup(
mNode, portIndex, mem, &buffer, mem->size());
}
} else {
err = mOMX->useBuffer(mNode, portIndex, mem, &buffer, mem->size());
}

if (mem != NULL) {
info.mData = mem->pointer();
}

info.mBuffer = buffer;
info.mStatus = OWNED_BY_US;
info.mMem = mem;
info.mMediaBuffer = NULL;
//Record the buffer in the per-port table so all buffers are tracked centrally
mPortBuffers[portIndex].push(info);
CODEC_LOGV("allocated buffer %u on %s port", buffer,
portIndex == kPortIndexInput ? "input" : "output");
}

if (portIndex == kPortIndexOutput) {
sp<MetaData> meta = mSource->getFormat();
int32_t delay = 0;
if (!meta->findInt32(kKeyEncoderDelay, &delay)) {
delay = 0;
}
int32_t padding = 0;
if (!meta->findInt32(kKeyEncoderPadding, &padding)) {
padding = 0;
}
int32_t numchannels = 0;
if (delay + padding) {
if (mOutputFormat->findInt32(kKeyChannelCount, &numchannels)) {
size_t frameSize = numchannels * sizeof(int16_t);
if (mSkipCutBuffer != NULL) {
size_t prevbuffersize = mSkipCutBuffer->size();
if (prevbuffersize != 0) {
ALOGW("Replacing SkipCutBuffer holding %zu bytes",
prevbuffersize);
}
}
mSkipCutBuffer = new SkipCutBuffer(delay * frameSize, padding * frameSize);
}
}
}

if (portIndex == kPortIndexInput && (mFlags & kUseSecureInputBuffers)) {
Vector<MediaBuffer *> buffers;
for (size_t i = 0; i < def.nBufferCountActual; ++i) {
const BufferInfo &info = mPortBuffers[kPortIndexInput].itemAt(i);

MediaBuffer *mbuf = new MediaBuffer(info.mData, info.mSize);
buffers.push(mbuf);
}
status_t err = mSource->setBuffers(buffers);
}
return OK;
}
status_t OMX::allocateBuffer(
node_id node, OMX_U32 port_index, size_t size,
buffer_id *buffer, void **buffer_data) {
return findInstance(node)->allocateBuffer(port_index, size, buffer, buffer_data);
}
status_t OMXNodeInstance::allocateBuffer(
OMX_U32 portIndex, size_t size, OMX::buffer_id *buffer,
void **buffer_data) {
Mutex::Autolock autoLock(mLock);
BufferMeta *buffer_meta = new BufferMeta(size);
OMX_BUFFERHEADERTYPE *header;
OMX_ERRORTYPE err = OMX_AllocateBuffer(mHandle, &header, portIndex, buffer_meta, size);
if (err != OMX_ErrorNone) {
CLOG_ERROR(allocateBuffer, err, BUFFER_FMT(portIndex, "%zu@", size));
delete buffer_meta;
buffer_meta = NULL;
*buffer = 0;
return StatusFromOMXError(err);
}
CHECK_EQ(header->pAppPrivate, buffer_meta);
*buffer = makeBufferID(header);
*buffer_data = header->pBuffer;
addActiveBuffer(portIndex, *buffer);
sp<GraphicBufferSource> bufferSource(getGraphicBufferSource());
if (bufferSource != NULL && portIndex == kPortIndexInput) {
bufferSource->addCodecBuffer(header);
}
CLOG_BUFFER(allocateBuffer, NEW_BUFFER_FMT(*buffer, portIndex, "%zu@%p", size, *buffer_data));
return OK;
}

Recall that when a component is created, initPort only initializes the port parameters; no memory is allocated at that point. It is this init step that actually allocates memory for each port: one arena covering the total required size is carved out of memory first (for example, 4 buffers of 8192 bytes each become a single 32 KiB arena), which is then sliced up and handed to the port, with addActiveBuffer recording each buffer as active. (The original post illustrated this step with a diagram.)
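A minimal sketch of the arena-and-slice pattern (illustration only; def stands for the port definition queried in allocateBuffersOnPort above):

// One MemoryDealer owns the whole arena; allocate() hands out fixed-size slices.
size_t totalSize = def.nBufferCountActual * def.nBufferSize;
sp<MemoryDealer> dealer = new MemoryDealer(totalSize, "OMXCodec");
for (OMX_U32 i = 0; i < def.nBufferCountActual; ++i) {
sp<IMemory> slice = dealer->allocate(def.nBufferSize);
// Each slice is then registered with the component via
// allocateBuffer / allocateBufferWithBackup / useBuffer
// and recorded in mPortBuffers[portIndex].
}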

Next, the sendCommand part:

status_t OMX::sendCommand(
node_id node, OMX_COMMANDTYPE cmd, OMX_S32 param) {
return findInstance(node)->sendCommand(cmd, param);
}
OMXNodeInstance *OMX::findInstance(node_id node) {
Mutex::Autolock autoLock(mLock);
ssize_t index = mNodeIDToInstance.indexOfKey(node);
return index < 0 ? NULL : mNodeIDToInstance.valueAt(index);
}
status_t OMXNodeInstance::sendCommand(
OMX_COMMANDTYPE cmd, OMX_S32 param) {
const sp<GraphicBufferSource>& bufferSource(getGraphicBufferSource());
if (bufferSource != NULL && cmd == OMX_CommandStateSet) {
if (param == OMX_StateIdle) {
//Transition from Executing to Idle: wait for all buffers to come back and stop feeding data into the codec
bufferSource->omxIdle();
} else if (param == OMX_StateLoaded) {
// Initiating transition from Idle/Executing -> Loaded
// Buffers are about to be freed.
bufferSource->omxLoaded();
setGraphicBufferSource(NULL);
}
// fall through
}
//………………………………………………..
const char *paramString =
cmd == OMX_CommandStateSet ? asString((OMX_STATETYPE)param) : portString(param);
CLOG_STATE(sendCommand, "%s(%d), %s(%d)", asString(cmd), cmd, paramString, param);
OMX_ERRORTYPE err = OMX_SendCommand(mHandle, cmd, param, NULL);
CLOG_IF_ERROR(sendCommand, err, "%s(%d), %s(%d)", asString(cmd), cmd, paramString, param);
return StatusFromOMXError(err);
}

We can see that OMXNodeInstance::sendCommand above does two main things:

  1. Calls bufferSource->omxIdle() to move the state from Executing to Idle, letting in-flight decoding finish and no longer feeding data into the decoder.
  2. Calls OMX_SendCommand to carry out the rest of the processing.
    The OMX_SendCommand macro is defined in hardware/qcom/media/mm-core/inc/OMX_Core.h; it invokes the SendCommand method of hComponent, handing the processing flow over to the component:
    #define OMX_SendCommand(                                    \
    hComponent, \
    Cmd, \
    nParam, \
    pCmdData) \
    ((OMX_COMPONENTTYPE*)hComponent)->SendCommand( \
    hComponent, \
    Cmd, \
    nParam, \
    pCmdData) /* Macro End */
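Expanded by hand, the call at our call site is therefore equivalent to the following (illustration only):

    // OMX_SendCommand(mHandle, cmd, param, NULL) expands to:
    ((OMX_COMPONENTTYPE *)mHandle)->SendCommand(mHandle, cmd, param, NULL);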
In other words, after macro expansion OMX_SendCommand(mHandle, cmd, param, NULL) passes cmd to mHandle for processing, so we must pin down exactly what mHandle refers to. In OMXNodeInstance we can see that mHandle is assigned through setHandle:
    void OMXNodeInstance::setHandle(OMX::node_id node_id, OMX_HANDLETYPE handle) {
    mNodeID = node_id;
    CLOG_LIFE(allocateNode, "handle=%p", handle);
    CHECK(mHandle == NULL);
    mHandle = handle;
    }
OMXNodeInstance::setHandle is called from OMX::allocateNode, and the handle it receives comes out of mMaster->makeComponentInstance:
    status_t OMX::allocateNode(
    const char *name, const sp<IOMXObserver> &observer, node_id *node) {
    //…………………………………………
    OMX_ERRORTYPE err = mMaster->makeComponentInstance(
    name, &OMXNodeInstance::kCallbacks,
    instance, &handle);
    //…………………………….
    instance->setHandle(*node, handle);
    //……………………………..
    return OK;
    }
    OMX_ERRORTYPE OMXMaster::makeComponentInstance(
    const char *name,
    const OMX_CALLBACKTYPE *callbacks,
    OMX_PTR appData,
    OMX_COMPONENTTYPE **component) {
    //……………………………………………….
    OMX_ERRORTYPE err =
    plugin->makeComponentInstance(name, callbacks, appData, component);
    mPluginByInstance.add(*component, plugin);
    return err;
    }
    OMX_ERRORTYPE SoftOMXPlugin::makeComponentInstance(
    const char *name,
    const OMX_CALLBACKTYPE *callbacks,
    OMX_PTR appData,
    OMX_COMPONENTTYPE **component) {
    //………………………………………
    sp<SoftOMXComponent> codec =
    (*createSoftOMXComponent)(name, callbacks, appData, component);
    //………………………………………
    return OMX_ErrorInvalidComponentName;
    }
    android::SoftOMXComponent *createSoftOMXComponent(
    const char *name, const OMX_CALLBACKTYPE *callbacks,
    OMX_PTR appData, OMX_COMPONENTTYPE **component) {
    return new android::SoftMP3(name, callbacks, appData, component);
    }
    SoftMP3::SoftMP3(
    const char *name,
    const OMX_CALLBACKTYPE *callbacks,
    OMX_PTR appData,
    OMX_COMPONENTTYPE **component)
    : SimpleSoftOMXComponent(name, callbacks, appData, component),
    //……………………………………………………..
    mOutputPortSettingsChange(NONE) {
    initPorts();
    initDecoder();
    }
    SimpleSoftOMXComponent::SimpleSoftOMXComponent(
    const char *name,
    const OMX_CALLBACKTYPE *callbacks,
    OMX_PTR appData,
    OMX_COMPONENTTYPE **component)
    : SoftOMXComponent(name, callbacks, appData, component),
    mLooper(new ALooper),
    mHandler(new AHandlerReflector<SimpleSoftOMXComponent>(this)),
    mState(OMX_StateLoaded),
    mTargetState(OMX_StateLoaded) {
    mLooper->setName(name);
    mLooper->registerHandler(mHandler);
    mLooper->start(
    false, // runOnCallingThread
    false, // canCallJava
    ANDROID_PRIORITY_FOREGROUND);
    }
From the above, the handle returned through makeComponentInstance corresponds to the SoftMP3 component created by createSoftOMXComponent, i.e. a SimpleSoftOMXComponent (whose constructor also creates the mHandler/ALooper used to dispatch messages below). So we can find the sendCommand implementation reached through mHandle in SimpleSoftOMXComponent:
    OMX_ERRORTYPE SimpleSoftOMXComponent::sendCommand(
    OMX_COMMANDTYPE cmd, OMX_U32 param, OMX_PTR data) {
    CHECK(data == NULL);
    sp<AMessage> msg = new AMessage(kWhatSendCommand, mHandler);
    msg->setInt32("cmd", cmd);
    msg->setInt32("param", param);
    msg->post();
    return OMX_ErrorNone;
    }
    void SimpleSoftOMXComponent::onMessageReceived(const sp<AMessage> &msg) {
    Mutex::Autolock autoLock(mLock);
    uint32_t msgType = msg->what();
    ALOGV("msgType = %d", msgType);
    switch (msgType) {
    case kWhatSendCommand:
    {
    int32_t cmd, param;
    CHECK(msg->findInt32("cmd", &cmd));
    CHECK(msg->findInt32("param", &param));
    onSendCommand((OMX_COMMANDTYPE)cmd, (OMX_U32)param);
    break;
    }
    //………………………………………………
    }
    void SimpleSoftOMXComponent::onSendCommand(
    OMX_COMMANDTYPE cmd, OMX_U32 param) {
    switch (cmd) {
    case OMX_CommandStateSet:
    {
    onChangeState((OMX_STATETYPE)param);
    break;
    }
    case OMX_CommandPortEnable:
    case OMX_CommandPortDisable:
    {
    onPortEnable(param, cmd == OMX_CommandPortEnable);
    break;
    }
    case OMX_CommandFlush:
    {
    onPortFlush(param, true /* sendFlushComplete */);
    break;
    }
    default:
    TRESPASS();
    break;
    }
    }
    void SimpleSoftOMXComponent::onChangeState(OMX_STATETYPE state) {
    // We shouldn't be in a state transition already.
    CHECK_EQ((int)mState, (int)mTargetState);
    switch (mState) {
    case OMX_StateLoaded:
    CHECK_EQ((int)state, (int)OMX_StateIdle);
    break;
    case OMX_StateIdle:
    CHECK(state == OMX_StateLoaded || state == OMX_StateExecuting);
    break;
    case OMX_StateExecuting:
    {
    CHECK_EQ((int)state, (int)OMX_StateIdle);
    for (size_t i = 0; i < mPorts.size(); ++i) {
    onPortFlush(i, false /* sendFlushComplete */);
    }
    mState = OMX_StateIdle;
    notify(OMX_EventCmdComplete, OMX_CommandStateSet, state, NULL);
    break;
    }
    default:
    TRESPASS();
    }
    mTargetState = state;
    checkTransitions();
    }
    void SimpleSoftOMXComponent::checkTransitions() {
    if (mState != mTargetState) {
    bool transitionComplete = true;
    if (mState == OMX_StateLoaded) {
    CHECK_EQ((int)mTargetState, (int)OMX_StateIdle);
    for (size_t i = 0; i < mPorts.size(); ++i) {
    const PortInfo &port = mPorts.itemAt(i);
    if (port.mDef.bEnabled == OMX_FALSE) {
    continue;
    }
    if (port.mDef.bPopulated == OMX_FALSE) {
    transitionComplete = false;
    break;
    }
    }
    } else if (mTargetState == OMX_StateLoaded) {

    }
    if (transitionComplete) {
    mState = mTargetState;
    if (mState == OMX_StateLoaded) {
    onReset();
    }
    notify(OMX_EventCmdComplete, OMX_CommandStateSet, mState, NULL);
    }
    }

    for (size_t i = 0; i < mPorts.size(); ++i) {
    PortInfo *port = &mPorts.editItemAt(i);
    if (port->mTransition == PortInfo::DISABLING) {
    if (port->mBuffers.empty()) {
    ALOGV("Port %zu now disabled.", i);
    port->mTransition = PortInfo::NONE;
    notify(OMX_EventCmdComplete, OMX_CommandPortDisable, i, NULL);
    onPortEnableCompleted(i, false /* enabled */);
    }
    } else if (port->mTransition == PortInfo::ENABLING) {
    if (port->mDef.bPopulated == OMX_TRUE) {
    ALOGV("Port %zu now enabled.", i);
    port->mTransition = PortInfo::NONE;
    port->mDef.bEnabled = OMX_TRUE;
    notify(OMX_EventCmdComplete, OMX_CommandPortEnable, i, NULL);
    onPortEnableCompleted(i, true /* enabled */);
    }
    }
    }
    }
After all this tracing we can see that OMX_SendCommand(mHandle, cmd, param, NULL) ultimately pushes the Idle state down to the component and stops input data from being fed to the decoder. That concludes the prepare flow. Taking a step back, the prepare stage creates the right decoder for the content to be played, allocates buffers for each decoder's input and output ports, and moves the decoder into the Idle state. (The original post closed this section with a diagram.)

Are we done? Not yet. What we have covered is only the beginAsyncPrepare_l side; there is still finishAsyncPrepare_l, whose main job is to notify the upper layer that prepare has finished:

void AwesomePlayer::finishAsyncPrepare_l() {
if (mIsAsyncPrepare) {
if (mVideoSource == NULL) {
notifyListener_l(MEDIA_SET_VIDEO_SIZE, 0, 0);
} else {
notifyVideoSize_l();
}

notifyListener_l(MEDIA_PREPARED);
}

mPrepareResult = OK;
modifyFlags((PREPARING|PREPARE_CANCELLED|PREPARING_CONNECTED), CLEAR);
modifyFlags(PREPARED, SET);
mAsyncPrepareEvent = NULL;
mPreparedCondition.broadcast();

if (mAudioTearDown) {
if (mPrepareResult == OK) {
if (mExtractorFlags & MediaExtractor::CAN_SEEK) {
seekTo_l(mAudioTearDownPosition);
}

if (mAudioTearDownWasPlaying) {
modifyFlags(CACHE_UNDERRUN, CLEAR);
play_l();
}
}
mAudioTearDown = false;
}
}

Let's focus on notifyListener_l(MEDIA_PREPARED):

void AwesomePlayer::notifyListener_l(int msg, int ext1, int ext2) {
if ((mListener != NULL) && !mAudioTearDown) {
sp<MediaPlayerBase> listener = mListener.promote();

if (listener != NULL) {
listener->sendEvent(msg, ext1, ext2);
}
}
}

Recall the diagram from the earlier post showing the event path: it makes clear that the whole call chain ends at the EventHandler.

The upper-layer handling is simple: if an mOnPreparedListener has been registered, its onPrepared method is called and the rest is left to the developer.

case MEDIA_PREPARED:
try {
scanInternalSubtitleTracks();
} catch (RuntimeException e) {
// send error message instead of crashing;
// send error message instead of inlining a call to onError
// to avoid code duplication.
Message msg2 = obtainMessage(
MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, MEDIA_ERROR_UNSUPPORTED, null);
sendMessage(msg2);
}
if (mOnPreparedListener != null)
mOnPreparedListener.onPrepared(mMediaPlayer);
return;

MediaPlayer playback framework source-code walkthrough:
setDataSource – creating the playback engine and setting the data source

setDataSource can identify the resource by file path, URL, or Content Provider URI. To keep the flow simple we pass a file Uri and analyze the whole flow from there.

public void setDataSource(Context context, Uri uri)
throws IOException, IllegalArgumentException, SecurityException, IllegalStateException {
setDataSource(context, uri, null);
}

Here we assume the file-scheme branch is taken:

public void setDataSource(Context context, Uri uri, Map<String, String> headers)
throws IOException, IllegalArgumentException, SecurityException,
IllegalStateException {
final String scheme = uri.getScheme();

if (ContentResolver.SCHEME_FILE.equals(scheme)) {
//1. Taken when the uri scheme is file
setDataSource(uri.getPath());
return;
} else if (ContentResolver.SCHEME_CONTENT.equals(scheme)
&& Settings.AUTHORITY.equals(uri.getAuthority())) {
//2. Taken when the uri is a ringtone setting
// Redirect ringtones to go directly to underlying provider
uri = RingtoneManager.getActualDefaultRingtoneUri(context,
RingtoneManager.getDefaultType(uri));
if (uri == null) {
throw new FileNotFoundException("Failed to resolve default ringtone");
}
}
AssetFileDescriptor fd = null;
try {
//3. Taken when a content uri can be opened as a file descriptor
ContentResolver resolver = context.getContentResolver();
fd = resolver.openAssetFileDescriptor(uri, "r");
if (fd == null) {
return;
}
if (fd.getDeclaredLength() < 0) {
setDataSource(fd.getFileDescriptor());
} else {
setDataSource(fd.getFileDescriptor(), fd.getStartOffset(), fd.getDeclaredLength());
}
return;
} catch (SecurityException | IOException ex) {
Log.w(TAG, "Couldn't open file on client side; trying server side: " + ex);
} finally {
if (fd != null) {
fd.close();
}
}
//Otherwise fall back to this path
setDataSource(uri.toString(), headers);
}
public void setDataSource(String path)
throws IOException, IllegalArgumentException, SecurityException, IllegalStateException {
setDataSource(path, null, null);
}
private void setDataSource(String path, String[] keys, String[] values)
throws IOException, IllegalArgumentException, SecurityException, IllegalStateException {
final Uri uri = Uri.parse(path);
final String scheme = uri.getScheme();
if ("file".equals(scheme)) {
path = uri.getPath();
} else if (scheme != null) {
// handle non-file sources
nativeSetDataSource(
MediaHTTPService.createHttpServiceBinderIfNecessary(path),
path,
keys,
values);
return;
}
final File file = new File(path);
if (file.exists()) {
FileInputStream is = new FileInputStream(file);
FileDescriptor fd = is.getFD();
setDataSource(fd);
is.close();
} else {
throw new IOException("setDataSource failed.");
}
}
public void setDataSource(FileDescriptor fd)
throws IOException, IllegalArgumentException, IllegalStateException {
// intentionally less than LONG_MAX
setDataSource(fd, 0, 0x7ffffffffffffffL);
}
public void setDataSource(FileDescriptor fd, long offset, long length)
throws IOException, IllegalArgumentException, IllegalStateException {
_setDataSource(fd, offset, length);
}
private native void _setDataSource(FileDescriptor fd, long offset, long length)
throws IOException, IllegalArgumentException, IllegalStateException;

Here is roughly where the whole setDataSource path is headed (the original post carried a large roadmap diagram at this point):

At this point we are ready to enter the JNI layer, where android_media_MediaPlayer_setDataSourceFD is called. It uses getMediaPlayer to fetch the native MediaPlayer stored earlier in fields.context, so the mp in mp->setDataSource(fd, offset, length) is the native MediaPlayer.

static void
android_media_MediaPlayer_setDataSourceFD(JNIEnv *env, jobject thiz, jobject fileDescriptor, jlong offset, jlong length)
{
sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
int fd = jniGetFDFromFileDescriptor(env, fileDescriptor);
ALOGV("setDataSourceFD: fd %d", fd);
process_media_player_call( env, thiz, mp->setDataSource(fd, offset, length),"java/io/IOException", "setDataSourceFD failed." );
}
static sp<MediaPlayer> getMediaPlayer(JNIEnv* env, jobject thiz)
{
Mutex::Autolock l(sLock);
MediaPlayer* const p = (MediaPlayer*)env->GetLongField(thiz, fields.context);
return sp<MediaPlayer>(p);
}

Creating and loading the playback engine

In the native MediaPlayer we first obtain the MediaPlayerService and call its create method, which constructs a MediaPlayerService client via MediaPlayerService::Client::Client and returns it as player. So the player in player->setDataSource(httpService, url, headers) is actually a MediaPlayerService::Client.
@frameworks/av/media/libmedia/mediaplayer.cpp

status_t MediaPlayer::setDataSource(
const sp<IMediaHTTPService> &httpService,
const char *url, const KeyedVector<String8, String8> *headers)
{
status_t err = BAD_VALUE;
if (url != NULL) {
const sp<IMediaPlayerService>& service(getMediaPlayerService());
if (service != 0) {
sp<IMediaPlayer> player(service->create(this, mAudioSessionId));
if ((NO_ERROR != doSetRetransmitEndpoint(player)) ||
(NO_ERROR != player->setDataSource(httpService, url, headers))) {
player.clear();
}
err = attachNewPlayer(player);
}
}
return err;
}

First let's see how this Client is created:
@ frameworks/av/media/libmedia/IMediaDeathNotifier.cpp

/*static*/const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
Mutex::Autolock _l(sServiceLock);
if (sMediaPlayerService == 0) {
//Get the ServiceManager
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
//Look up the MediaPlayerService
binder = sm->getService(String16("media.player"));
if (binder != 0) {
break;
}
usleep(500000); // 0.5 s
} while (true);

if (sDeathNotifier == NULL) {
sDeathNotifier = new DeathNotifier();
}
//Link a death notification to the binder
binder->linkToDeath(sDeathNotifier);
sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
}
ALOGE_IF(sMediaPlayerService == 0, "no media player service!?");
return sMediaPlayerService;
}

First the MediaPlayerService is obtained; it is created in mediaserver's main function, which will be covered later:
@frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp

sp<IMediaPlayer> MediaPlayerService::create(const sp<IMediaPlayerClient>& client,
int audioSessionId)
{
pid_t pid = IPCThreadState::self()->getCallingPid();
int32_t connId = android_atomic_inc(&mNextConnId);
sp<Client> c = new Client(
this, pid, connId, client, audioSessionId,
IPCThreadState::self()->getCallingUid());
wp<Client> w = c;
{
Mutex::Autolock lock(mLock);
mClients.add(w);
}
return c;
}

The code above news up a Client object and adds it to mClients. Below is the Client constructor; there is nothing special to note here:

MediaPlayerService::Client::Client(
const sp<MediaPlayerService>& service, pid_t pid,
int32_t connId, const sp<IMediaPlayerClient>& client,
int audioSessionId, uid_t uid)
{
mPid = pid;
mConnId = connId;
mService = service;
mClient = client;
mLoop = false;
mStatus = NO_INIT;
mAudioSessionId = audioSessionId;
mUID = uid;
mRetransmitEndpointValid = false;
mAudioAttributes = NULL;
#if CALLBACK_ANTAGONIZER
ALOGD("create Antagonizer");
mAntagonizer = new Antagonizer(notify, this);
#endif
}

MediaPlayerService::Client::setDataSource first uses MediaPlayerFactory::getPlayerType to determine the best-matching player type, then setDataSource_pre(playerType) creates a player of that type, and finally setDataSource is called on the player just created. Let's work through this step by step.

status_t MediaPlayerService::Client::setDataSource(int fd, int64_t offset, int64_t length)
{
struct stat sb;
int ret = fstat(fd, &sb);
if (ret != 0) {
ALOGE("fstat(%d) failed: %d, %s", fd, ret, strerror(errno));
return UNKNOWN_ERROR;
}
//If the offset is at or beyond the file size, the offset is invalid
if (offset >= sb.st_size) {
ALOGE("offset error");
::close(fd);
return UNKNOWN_ERROR;
}
//If offset + length runs past the end of the file, log and clamp length
if (offset + length > sb.st_size) {
length = sb.st_size - offset;
ALOGV("calculated length = %lld", length);
}
//determine the best player type for the current file;
//for a local file like this it will create a StagefrightPlayer
player_type playerType = MediaPlayerFactory::getPlayerType(this,fd,offset,length);
sp<MediaPlayerBase> p = setDataSource_pre(playerType);
if (p == NULL) {
return NO_INIT;
}
// now set data source
setDataSource_post(p, p->setDataSource(fd, offset, length));
return mStatus;
}

First, getPlayerType. It has several overloads (one taking the client and a url, another taking the client plus fd/offset/length as used above), and each of them determines the player type through the GET_PLAYER_TYPE_IMPL macro.

player_type MediaPlayerFactory::getPlayerType(const sp<IMediaPlayer>& client,const char* url) {
GET_PLAYER_TYPE_IMPL(client, url);
}

sFactoryMap is a map holding the factory for each player type. GET_PLAYER_TYPE_IMPL iterates over sFactoryMap, scores each IFactory via its scoreFactory method, and returns the best-matching player type.

#define GET_PLAYER_TYPE_IMPL(a...)                      \
Mutex::Autolock lock_(&sLock); \
\
player_type ret = STAGEFRIGHT_PLAYER; \
float bestScore = 0.0; \
\
for (size_t i = 0; i < sFactoryMap.size(); ++i) { \
\
IFactory* v = sFactoryMap.valueAt(i); \
float thisScore; \
CHECK(v != NULL); \
thisScore = v->scoreFactory(a, bestScore); \
if (thisScore > bestScore) { \
ret = sFactoryMap.keyAt(i); \
bestScore = thisScore; \
} \
}                                                       \
if (0.0 == bestScore) { \
ret = getDefaultPlayerType(); \
} \
return ret;

Once the player type is known, setDataSource_pre calls createPlayer to create a player of the corresponding type:

sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
player_type playerType)
{
ALOGV("player type = %d", playerType);
// create the right type of player
sp<MediaPlayerBase> p = createPlayer(playerType);
if (p == NULL) {
return p;
}

if (!p->hardwareOutput()) {
Mutex::Autolock l(mLock);
mAudioOutput =
new AudioOutput(mAudioSessionId, IPCThreadState::self()->getCallingUid(),
mPid, mAudioAttributes);
static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
}
return p;
}

The code above also creates AudioOutput, an abstraction over the audio output hardware; it is the interface through which buffers are pushed out to the device, and it will be covered together with the start method.

MediaPlayerService::Client::createPlayer delegates to the MediaPlayerFactory factory class, which creates the player corresponding to the playerType passed in:

sp<MediaPlayerBase> MediaPlayerService::Client::createPlayer(player_type playerType)
{
// determine if we have the right player type
sp<MediaPlayerBase> p = mPlayer;
if ((p != NULL) && (p->playerType() != playerType)) {
p.clear();
}
if (p == NULL) {
p = MediaPlayerFactory::createPlayer(playerType, this, notify, mPid);
}
if (p != NULL) {
p->setUID(mUID);
}
return p;
}

For example, if STAGEFRIGHT_PLAYER is passed in, a StagefrightPlayer is newed up:

virtual sp<MediaPlayerBase> createPlayer(pid_t /* pid */) {
return new StagefrightPlayer();
}

The StagefrightPlayer constructor news up an AwesomePlayer:

StagefrightPlayer::StagefrightPlayer()
: mPlayer(new AwesomePlayer) {
ALOGV("StagefrightPlayer");
mPlayer->setListener(this);
}

(The diagram originally shown here was drawn together with NuPlayer, since the two share this part of the logic; if you are not interested in NuPlayer yet, just follow the AwesomePlayer side.)

The AwesomePlayer constructor registers the format sniffers via RegisterDefaultSniffers, creates a series of AwesomeEvents, and loads the codec plugins through mClient.connect():

AwesomePlayer::AwesomePlayer()
: mQueueStarted(false),
mUIDValid(false),
mTimeSource(NULL),
mVideoRenderingStarted(false),
mVideoRendererIsPreview(false),
mMediaRenderingStartGeneration(0),
mStartGeneration(0),
mAudioPlayer(NULL),
mDisplayWidth(0),
mDisplayHeight(0),
mVideoScalingMode(NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW),
mFlags(0),
mExtractorFlags(0),
mVideoBuffer(NULL),
mDecryptHandle(NULL),
mLastVideoTimeUs(-1),
mTextDriver(NULL),
mOffloadAudio(false),
mAudioTearDown(false) {
CHECK_EQ(mClient.connect(), (status_t)OK);
DataSource::RegisterDefaultSniffers();
mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
mVideoEventPending = false;
mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone);
mStreamDoneEventPending = false;
mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate);
mBufferingEventPending = false;
mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate);
mVideoLagEventPending = false;
mCheckAudioStatusEvent = new AwesomeEvent(
this, &AwesomePlayer::onCheckAudioStatus);

mAudioStatusEventPending = false;
mAudioTearDownEvent = new AwesomeEvent(this,
&AwesomePlayer::onAudioTearDownEvent);
mAudioTearDownEventPending = false;
mClockEstimator = new WindowedLinearFitEstimator();
mPlaybackSettings = AUDIO_PLAYBACK_RATE_DEFAULT;
reset();
}
// static
void DataSource::RegisterDefaultSniffers() {
Mutex::Autolock autoLock(gSnifferMutex);
if (gSniffersRegistered) {
return;
}
RegisterSniffer_l(SniffMPEG4);
RegisterSniffer_l(SniffMatroska);
RegisterSniffer_l(SniffOgg);
RegisterSniffer_l(SniffWAV);
RegisterSniffer_l(SniffFLAC);
RegisterSniffer_l(SniffAMR);
RegisterSniffer_l(SniffMPEG2TS);
RegisterSniffer_l(SniffMP3);
RegisterSniffer_l(SniffAAC);
RegisterSniffer_l(SniffMPEG2PS);
RegisterSniffer_l(SniffWVM);
RegisterSniffer_l(SniffMidi);
char value[PROPERTY_VALUE_MAX];
if (property_get("drm.service.enabled", value, NULL)
&& (!strcmp(value, "1") || !strcasecmp(value, "true"))) {
RegisterSniffer_l(SniffDRM);
}
gSniffersRegistered = true;
}
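Registering only queues up candidate sniffers; the actual detection happens later, when MediaExtractor::Create calls source->sniff, which tries every registered sniffer and keeps the best-scoring one. A simplified sketch of that selection loop (assumed shape for this era of stagefright, not the verbatim source):

bool DataSource::sniff(String8 *mimeType, float *confidence, sp<AMessage> *meta) {
*mimeType = "";
*confidence = 0.0f;
Mutex::Autolock autoLock(gSnifferMutex);
for (List<SnifferFunc>::iterator it = gSniffers.begin(); it != gSniffers.end(); ++it) {
String8 newMimeType;
float newConfidence;
sp<AMessage> newMeta;
// Every registered sniffer gets a look; the highest-confidence match wins.
if ((*it)(this, &newMimeType, &newConfidence, &newMeta) && newConfidence > *confidence) {
*mimeType = newMimeType;
*confidence = newConfidence;
*meta = newMeta;
}
}
return *confidence > 0.0f;
}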

Next, OMXClient::connect. It fetches the OMX object newed up by MediaPlayerService::getOMX. OMX owns an mMaster member, whose constructor calls addVendorPlugin and addPlugin to load the hardware and software codec plugins:

status_t OMXClient::connect() {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.player"));
sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);
mOMX = service->getOMX();
if (!mOMX->livesLocally(0 /* node */, getpid())) {
ALOGI("Using client-side OMX mux.");
mOMX = new MuxOMX(mOMX);
}
return OK;
}
sp<IOMX> MediaPlayerService::getOMX() {
Mutex::Autolock autoLock(mLock);
if (mOMX.get() == NULL) {
mOMX = new OMX;
}
return mOMX;
}
OMX::OMX()
: mMaster(new OMXMaster),
mNodeCounter(0) {
}
OMXMaster::OMXMaster()
: mVendorLibHandle(NULL) {
addVendorPlugin();
addPlugin(new SoftOMXPlugin);
}

As for the vendor plugin: addVendorPlugin dlopens the libstagefrighthw.so shared library and calls its createOMXPlugin function to obtain an OMXPluginBase object, which is then handed to OMXMaster::addPlugin(OMXPluginBase *plugin) and recorded in mPluginByComponentName:

void OMXMaster::addVendorPlugin() {
addPlugin("libstagefrighthw.so");
}
void OMXMaster::addPlugin(const char *libname) {
mVendorLibHandle = dlopen(libname, RTLD_NOW);
typedef OMXPluginBase *(*CreateOMXPluginFunc)();
CreateOMXPluginFunc createOMXPlugin =
(CreateOMXPluginFunc)dlsym(mVendorLibHandle, "createOMXPlugin");
if (!createOMXPlugin)
createOMXPlugin = (CreateOMXPluginFunc)dlsym(
mVendorLibHandle, "_ZN7android15createOMXPluginEv");
if (createOMXPlugin) {
addPlugin((*createOMXPlugin)());
}
}

OMXMaster::addPlugin(OMXPluginBase *plugin) calls enumerateComponents to enumerate all the components of the vendor or software plugin and records each one in mPluginByComponentName:

void OMXMaster::addPlugin(OMXPluginBase *plugin) {
Mutex::Autolock autoLock(mLock);
mPlugins.push_back(plugin);
OMX_U32 index = 0;
char name[128];
OMX_ERRORTYPE err;
while ((err = plugin->enumerateComponents(
name, sizeof(name), index++)) == OMX_ErrorNone) {
String8 name8(name);
if (mPluginByComponentName.indexOfKey(name8) >= 0) {
ALOGE("A component of name '%s' already exists, ignoring this one.",
name8.string());
continue;
}
mPluginByComponentName.add(name8, plugin);
}
}


So far, on the Client side, we have matched a player type to the Uri and used the corresponding factory to create the player. For MP3 that is a StagefrightPlayer; along the way an AwesomePlayer with its AwesomeEvents was newed up, the format sniffers were registered, and the component names of both the vendor and software plugins were loaded.

Loading the data source

Once the playback engine is loaded, it needs to be given a data source.
Back on the setDataSource path: after the StagefrightPlayer is created, its setDataSource is called, which in turn calls mPlayer's setDataSource. From the creation flow we already know mPlayer is an AwesomePlayer, so we need to look at AwesomePlayer's setDataSource method.

status_t StagefrightPlayer::setDataSource(
const sp<IMediaHTTPService> &httpService,
const char *url,
const KeyedVector<String8, String8> *headers) {
return mPlayer->setDataSource(httpService, url, headers);
}

Below is AwesomePlayer's setDataSource. Through a chain of calls it finally news up a FileSource and assigns it to mFileSource, then uses MediaExtractor::Create to build an extractor that pulls parameters such as the bitrate out of the FileSource:

status_t AwesomePlayer::setDataSource(
const sp<IMediaHTTPService> &httpService,
const char *uri,
const KeyedVector<String8, String8> *headers) {
Mutex::Autolock autoLock(mLock);
return setDataSource_l(httpService, uri, headers);
}
status_t AwesomePlayer::setDataSource(
int fd, int64_t offset, int64_t length) {
Mutex::Autolock autoLock(mLock);
reset_l();
sp<DataSource> dataSource = new FileSource(fd, offset, length);
status_t err = dataSource->initCheck();
mFileSource = dataSource;
{
Mutex::Autolock autoLock(mStatsLock);
mStats.mFd = fd;
mStats.mURI = String8();
}
return setDataSource_l(dataSource);
}
status_t AwesomePlayer::setDataSource_l(
const sp<DataSource> &dataSource) {
sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource);
if (extractor->getDrmFlag()) {
checkDrmStatus(dataSource);
}
return setDataSource_l(extractor);
}

Creating the MediaExtractor means calling each sniff method to detect the type of the data source and then instantiating the MediaExtractor matching the mime type; since our data source is MP3, an MP3Extractor is returned:

// static
sp<MediaExtractor> MediaExtractor::Create(
const sp<DataSource> &source, const char *mime) {
sp<AMessage> meta;
String8 tmp;
if (mime == NULL) {
float confidence;
if (!source->sniff(&tmp, &confidence, &meta)) {
ALOGV("FAILED to autodetect media content.");
return NULL;
}
mime = tmp.string();
ALOGV("Autodetected media content as '%s' with confidence %.2f",
mime, confidence);
}
bool isDrm = false;
// DRM MIME type syntax is "drm+type+original" where
// type is "es_based" or "container_based" and
// original is the content's cleartext MIME type
if (!strncmp(mime, "drm+", 4)) {
const char *originalMime = strchr(mime+4, '+');
if (originalMime == NULL) {
// second + not found
return NULL;
}
++originalMime;
if (!strncmp(mime, "drm+es_based+", 13)) {
// DRMExtractor sets container metadata kKeyIsDRM to 1
return new DRMExtractor(source, originalMime);
} else if (!strncmp(mime, "drm+container_based+", 20)) {
mime = originalMime;
isDrm = true;
} else {
return NULL;
}
}
MediaExtractor *ret = NULL;
if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4)
|| !strcasecmp(mime, "audio/mp4")) {
ret = new MPEG4Extractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
ret = new MP3Extractor(source, meta);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)
|| !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_WB)) {
ret = new AMRExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)) {
ret = new FLACExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_WAV)) {
ret = new WAVExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_OGG)) {
ret = new OggExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MATROSKA)) {
ret = new MatroskaExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG2TS)) {
ret = new MPEG2TSExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_WVM)) {
// Return now. WVExtractor should not have the DrmFlag set in the block below.
return new WVMExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC_ADTS)) {
ret = new AACExtractor(source, meta);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG2PS)) {
ret = new MPEG2PSExtractor(source);
} else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MIDI)) {
ret = new MidiExtractor(source);
}
if (ret != NULL) {
if (isDrm) {
ret->setDrmFlag(true);
} else {
ret->setDrmFlag(false);
}
}
return ret;
}

With the MP3Extractor in hand we can pull the key information out of the media file. The crucial call is extractor->getTrack(i), which returns the corresponding track; it is handed to the playback engine via setAudioSource / setVideoSource, and the engine will later feed this source into the decoder as its input:
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
// Attempt to approximate overall stream bitrate by summing all
// tracks' individual bitrates, if not all of them advertise bitrate,
// we have to fail.
int64_t totalBitRate = 0;
mExtractor = extractor;
for (size_t i = 0; i < extractor->countTracks(); ++i) {
sp<MetaData> meta = extractor->getTrackMetaData(i);
int32_t bitrate;
if (!meta->findInt32(kKeyBitRate, &bitrate)) {
const char *mime;
CHECK(meta->findCString(kKeyMIMEType, &mime));
totalBitRate = -1;
break;
}
totalBitRate += bitrate;
}
sp<MetaData> fileMeta = mExtractor->getMetaData();
if (fileMeta != NULL) {
int64_t duration;
if (fileMeta->findInt64(kKeyDuration, &duration)) {
mDurationUs = duration;
}
}
mBitrate = totalBitRate;
ALOGV("mBitrate = %lld bits/sec", (long long)mBitrate);
{
Mutex::Autolock autoLock(mStatsLock);
mStats.mBitrate = mBitrate;
mStats.mTracks.clear();
mStats.mAudioTrackIndex = -1;
mStats.mVideoTrackIndex = -1;
}
bool haveAudio = false;
bool haveVideo = false;
for (size_t i = 0; i < extractor->countTracks(); ++i) {
sp<MetaData> meta = extractor->getTrackMetaData(i);
const char *_mime;
CHECK(meta->findCString(kKeyMIMEType, &_mime));

String8 mime = String8(_mime);
if (!haveVideo && !strncasecmp(mime.string(), "video/", 6)) {
setVideoSource(extractor->getTrack(i));
haveVideo = true;
// Set the presentation/display size
int32_t displayWidth, displayHeight;
bool success = meta->findInt32(kKeyDisplayWidth, &displayWidth);
if (success) {
success = meta->findInt32(kKeyDisplayHeight, &displayHeight);
}
if (success) {
mDisplayWidth = displayWidth;
mDisplayHeight = displayHeight;
}
{
Mutex::Autolock autoLock(mStatsLock);
mStats.mVideoTrackIndex = mStats.mTracks.size();
mStats.mTracks.push();
TrackStat *stat =
&mStats.mTracks.editItemAt(mStats.mVideoTrackIndex);
stat->mMIME = mime.string();
}
} else if (!haveAudio && !strncasecmp(mime.string(), "audio/", 6)) {
setAudioSource(extractor->getTrack(i));
haveAudio = true;
mActiveAudioTrackIndex = i;
{
Mutex::Autolock autoLock(mStatsLock);
mStats.mAudioTrackIndex = mStats.mTracks.size();
mStats.mTracks.push();
TrackStat *stat =
&mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);
stat->mMIME = mime.string();
}

if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_AUDIO_VORBIS)) {
sp<MetaData> fileMeta = extractor->getMetaData();
int32_t loop;
if (fileMeta != NULL
&& fileMeta->findInt32(kKeyAutoLoop, &loop) && loop != 0) {
modifyFlags(AUTO_LOOPING, SET);
}
}
} else if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_TEXT_3GPP)) {
addTextSource_l(i, extractor->getTrack(i));
}
}
if (!haveAudio && !haveVideo) {
if (mWVMExtractor != NULL) {
return mWVMExtractor->getError();
} else {
return UNKNOWN_ERROR;
}
}
mExtractorFlags = extractor->flags();
return OK;
}
void AwesomePlayer::setAudioSource(sp<MediaSource> source) {
CHECK(source != NULL);
mAudioTrack = source;
}
void MediaPlayerService::Client::setDataSource_post(
const sp<MediaPlayerBase>& p,
status_t status)
{
ALOGV(" setDataSource");
mStatus = status;
if (mStatus != OK) {
ALOGE(" error: %d", mStatus);
return;
}

// Set the re-transmission endpoint if one was chosen.
if (mRetransmitEndpointValid) {
mStatus = p->setRetransmitEndpoint(&mRetransmitEndpoint);
if (mStatus != NO_ERROR) {
ALOGE("setRetransmitEndpoint error: %d", mStatus);
}
}
if (mStatus == OK) {
mPlayer = p;
}
}


Finally, the overall structure of the setDataSource flow (the original post carried a diagram here):
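Roughly, reconstructed from the analysis above rather than from the original image:

Java MediaPlayer.setDataSource(fd)
  -> android_media_MediaPlayer_setDataSourceFD (JNI)
    -> MediaPlayer::setDataSource (native)
      -> MediaPlayerService::Client::setDataSource (getPlayerType -> STAGEFRIGHT_PLAYER)
        -> StagefrightPlayer::setDataSource
          -> AwesomePlayer::setDataSource_l (FileSource -> MediaExtractor -> setAudioSource / setVideoSource)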

In the previous post we built up an overall picture of the playback framework; this post digs into the concrete details.
Before doing so, let's review the usual call sequence for playing a local audio/video file with MediaPlayer, and the work each step performs:

What each stage does

First, the upper-layer call sequence of one playback session (a code sketch follows after the list):

  • Create the MediaPlayer object
    This step mainly does the following:

    1. From the JNI layer, cache references to the relevant fields and methods of the application-layer MediaPlayer; the MediaPlayer object is easier to create in the upper layer than down below.
    2. Create the EventHandler that handles events passed up from the underlying playback engine to the application layer.
  • setDataSource() – create the playback engine and set its data source
    This step mainly does the following:

    1. Match the local audio/video file's mimetype, select the appropriate playback engine, and instantiate it.
    2. Create AudioOutput and attach it to the engine created in step 1; decoded audio data is later output to the hardware device through it.
    3. Instantiate the FileSource data source, create the Extractor matching the file's mimetype and the corresponding track containers, and set them on the playback engine as the decoder's input data source.
  • setDisplay()

    1. Set the rendering surface for video; the video stream is displayed on screen through this interface.
  • prepare()

    1. Search for and load decoders matching the mimetypes the MediaExtractor pulled out of the file.
    2. Configure the decoders and set their listeners.
    3. Set the sizes of the decoders' buffers.
  • start()

    1. Create the AudioPlayer and set its parameters; read data from the file's track containers, feed it to the decoder, and return the decoded data to the playback engine.
    2. Output audio data to the hardware through the AudioPlayer.
    3. Create the video renderer to render the decoded video data.
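To make the sequence concrete, here is a minimal native-side sketch of the same stages (a sketch only: fd, length and producer are assumed to be a valid media file descriptor and an IGraphicBufferProducer for the display surface; method names follow frameworks/av/include/media/mediaplayer.h of this era):

#include <media/mediaplayer.h>

sp<MediaPlayer> mp = new MediaPlayer(); // 1. create the player
mp->setDataSource(fd, 0, length); // 2. pick the engine, build FileSource + extractor
mp->setVideoSurfaceTexture(producer); // 3. setDisplay(): where video frames are rendered
mp->prepare(); // 4. find & configure decoders, allocate their buffers
mp->start(); // 5. AudioPlayer + video renderer begin pulling decoded data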
MediaPlayer playback framework source-code walkthrough:
Creating the MediaPlayer object
Setting up the event channel between the native MediaPlayer and the Java MediaPlayer

When the MediaPlayer class is loaded, the following static block runs first. A static block executes only when the class is first loaded, so it is the natural place to load the libmedia_jni.so shared library:

static {
System.loadLibrary("media_jni");
native_init();
}

native_init caches a number of field and method IDs for later use. These IDs are needed frequently, so fetching them once at class-load time avoids looking them up on every call:

static void
android_media_MediaPlayer_native_init(JNIEnv *env)
{
jclass clazz;
//Field on the Java object that stores the native MediaPlayer pointer
clazz = env->FindClass("android/media/MediaPlayer");
fields.context = env->GetFieldID(clazz, "mNativeContext", "J");
//Covered when we get to native_setup: used to deliver native-layer events up to the Java layer
fields.post_event = env->GetStaticMethodID(clazz, "postEventFromNative",
"(Ljava/lang/Object;IIILjava/lang/Object;)V");
fields.surface_texture = env->GetFieldID(clazz, "mNativeSurfaceTexture", "J");
env->DeleteLocalRef(clazz);
clazz = env->FindClass("android/net/ProxyInfo");
fields.proxyConfigGetHost =
env->GetMethodID(clazz, "getHost", "()Ljava/lang/String;");
fields.proxyConfigGetPort =
env->GetMethodID(clazz, "getPort", "()I");
fields.proxyConfigGetExclusionList =
env->GetMethodID(clazz, "getExclusionListAsString", "()Ljava/lang/String;");
env->DeleteLocalRef(clazz);
gPlaybackParamsFields.init(env);
gSyncParamsFields.init(env);
}

Next the MediaPlayer constructor is invoked to create the MediaPlayer object. Here we look in detail at how messages travel between the Java layer and the native layer:

public MediaPlayer() {
Looper looper;
//Create the EventHandler that processes messages passed up from the native layer
if ((looper = Looper.myLooper()) != null) {
mEventHandler = new EventHandler(this, looper);
} else if ((looper = Looper.getMainLooper()) != null) {
mEventHandler = new EventHandler(this, looper);
} else {
mEventHandler = null;
}
//.....
//Pass a weak reference to this MediaPlayer into native_setup
native_setup(new WeakReference<MediaPlayer>(this));
}

First an EventHandler is created; as mentioned above, it handles messages coming up from the native layer. Then native_setup is called:

static void
android_media_MediaPlayer_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
sp<MediaPlayer> mp = new MediaPlayer();
// Create a JNIMediaPlayerListener and attach it to the MediaPlayer newed up above.
// JNIMediaPlayerListener forwards native-layer notifications upward by calling the Java-level postEventFromNative method; this happens in frameworks/base/media/jni/android_media_MediaPlayer.cpp:JNIMediaPlayerListener::notify. The fields.post_event it uses is exactly the method ID fetched from the Java layer in android_media_MediaPlayer_native_init above.
sp<JNIMediaPlayerListener> listener = new JNIMediaPlayerListener(env, thiz, weak_this);
//Store the JNIMediaPlayerListener just created in the native MediaPlayer's mListener member
mp->setListener(listener);
// Stow our new C++ MediaPlayer in an opaque field in the Java object.
setMediaPlayer(env, thiz, mp);
}

In native_setup we create a native MediaPlayer object and a JNIMediaPlayerListener. The JNIMediaPlayerListener forwards native notifications to the Java layer by calling postEventFromNative; this is done in frameworks/base/media/jni/android_media_MediaPlayer.cpp:JNIMediaPlayerListener::notify, using the fields.post_event method ID fetched in android_media_MediaPlayer_native_init as described above.

Below is the native MediaPlayer constructor; it simply initializes a set of member variables:

MediaPlayer::MediaPlayer()
{
mListener = NULL; //Listener (a JNIMediaPlayerListener) used to talk to the Java layer
mCookie = NULL;
mStreamType = AUDIO_STREAM_MUSIC; //Stream type
mAudioAttributesParcel = NULL;
mCurrentPosition = -1; //Current position
mSeekPosition = -1; //Seek position
mCurrentState = MEDIA_PLAYER_IDLE; //Current MediaPlayer state
mPrepareSync = false; //Whether prepare is synchronous
mPrepareStatus = NO_ERROR; //Prepare status
mLoop = false; //Whether to loop playback
mLeftVolume = mRightVolume = 1.0; //Left/right volume
mVideoWidth = mVideoHeight = 0; //Video width and height
mLockThreadId = 0;
mAudioSessionId = AudioSystem::newAudioUniqueId(); //Allocate a new audio session id
AudioSystem::acquireAudioSessionId(mAudioSessionId, -1); //Acquire the audio session id
mSendLevel = 0;
mRetransmitEndpointValid = false;
}

Once the native MediaPlayer exists, setMediaPlayer stores the native object's pointer into the Java layer's context field (mNativeContext):

static sp<MediaPlayer> setMediaPlayer(JNIEnv* env, jobject thiz, const sp<MediaPlayer>& player)
{
sp<MediaPlayer> old = (MediaPlayer*)env->GetLongField(thiz, fields.context);
if (player.get()) {
player->incStrong((void*)setMediaPlayer);
}
if (old != 0) {
old->decStrong((void*)setMediaPlayer);
}
env->SetLongField(thiz, fields.context, (jlong)player.get());
return old;
}

To understand how the native layer delivers events upward, we first need to understand JNIMediaPlayerListener.

JNIMediaPlayerListener is the key listener connecting the native MediaPlayer to the Java MediaPlayer. Via JNI it calls the Java-level postEventFromNative method to pass events up, where they are finally handled by the EventHandler inside the Java MediaPlayer.

class JNIMediaPlayerListener: public MediaPlayerListener{
public:
JNIMediaPlayerListener(JNIEnv* env, jobject thiz, jobject weak_thiz);
~JNIMediaPlayerListener();
virtual void notify(int msg, int ext1, int ext2, const Parcel *obj = NULL);
private:
JNIMediaPlayerListener();
jclass mClass; // Global reference to the Java MediaPlayer class
jobject mObject; // Weak reference to the Java-level MediaPlayer
};

JNIMediaPlayerListener::JNIMediaPlayerListener(JNIEnv* env, jobject thiz, jobject weak_thiz)
{
jclass clazz = env->GetObjectClass(thiz);
if (clazz == NULL) {
ALOGE("Can't find android/media/MediaPlayer");
jniThrowException(env, "java/lang/Exception", NULL);
return;
}
//Initialize the mClass and mObject members
mClass = (jclass)env->NewGlobalRef(clazz);
mObject = env->NewGlobalRef(weak_thiz);
}

JNIMediaPlayerListener::~JNIMediaPlayerListener()
{
//Release the mClass and mObject global references
JNIEnv *env = AndroidRuntime::getJNIEnv();
env->DeleteGlobalRef(mObject);
env->DeleteGlobalRef(mClass);
}

void JNIMediaPlayerListener::notify(int msg, int ext1, int ext2, const Parcel *obj)
{
JNIEnv *env = AndroidRuntime::getJNIEnv();
if (obj && obj->dataSize() > 0) {
//Create the Java Parcel object to pass up
jobject jParcel = createJavaParcelObject(env);
if (jParcel != NULL) {
Parcel* nativeParcel = parcelForJavaObject(env, jParcel);
//Copy in the data to send
nativeParcel->setData(obj->data(), obj->dataSize());
//Call the Java-level postEventFromNative method
env->CallStaticVoidMethod(mClass, fields.post_event, mObject,msg, ext1, ext2, jParcel);
env->DeleteLocalRef(jParcel);
}
} else {
env->CallStaticVoidMethod(mClass, fields.post_event, mObject,
msg, ext1, ext2, NULL);
}
if (env->ExceptionCheck()) {
ALOGW("An exception occurred while notifying an event.");
LOGW_EX(env);
env->ExceptionClear();
}
}

The heart of JNIMediaPlayerListener is its notify method, which packages the data and then calls the Java-level postEventFromNative through JNI.

Summary:
MediaPlayer initialization is now complete. Once the native MediaPlayer finishes initializing, setMediaPlayer stores it into the Java layer's mNativeContext; the Java MediaPlayer's constructor calls native_setup to pass its own reference down to the native layer, where it is handed to the JNIMediaPlayerListener. The JNIMediaPlayerListener delivers native-layer events up to the Java MediaPlayer, where the EventHandler processes them. The overall structure and call flow (shown as diagrams in the original post) are roughly as follows:
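(Reconstructed from the analysis above, not from the original images:)

Setup:  Java MediaPlayer() --native_setup(new WeakReference<MediaPlayer>)-->
        native MediaPlayer + JNIMediaPlayerListener; setMediaPlayer() stores the
        native pointer in the Java mNativeContext field.
Events: native engine --> MediaPlayer::notify --> JNIMediaPlayerListener::notify
        --postEventFromNative (JNI)--> Java MediaPlayer --> EventHandler.handleMessage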

To keep this from becoming one enormous post, each stage gets a post of its own; the next one covers setDataSource – creating the playback engine and setting the data source. If you have any questions about this post, or spot a mistake, feel free to leave a comment or email me.