The previous article covered the Binder driver without digging deeply into the source. The main thread was: the kernel registers the Binder driver and exposes binder_open, binder_mmap, binder_ioctl and other entry points to operate it. From the conclusions of the Binder prologue article, a Binder client first goes through the driver to ServiceManager to look up the server's address, then talks to the server (again through the driver). We also know that ServiceManager is the steward of all Binder-based IPC in the system (the BINDER_SET_CONTEXT_MGR command from the driver article confirms this). That means ServiceManager is itself a typical Binder Server, so this article follows that thread.
servicemanager can be compared to a DNS server on a network, with a fixed "IP address" of 0. Just as a DNS server is itself a server, servicemanager is itself a typical Binder Server (ServiceManager is abbreviated as sm below).
1、Starting ServiceManager
A DNS server must be up before users start browsing; likewise, sm must be running before anyone uses Binder. A reasonable guess is that it is started from an init script, and that is indeed the case:
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
As the script shows, if servicemanager hits a problem and restarts, core system services such as zygote, media, surfaceflinger and drm restart along with it. (A side note: a system service added to an init script starts at boot by default and restarts by itself after its process is killed; that is a privilege granted by the system. A question worth keeping in mind: what is the mechanism behind this?)
servicemanager is a C/C++ executable whose source lives under frameworks/native/cmds/servicemanager. Its main() looks like this:
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER; // ((void*) 0)

    bs = binder_open(128*1024); // mmap size: 128K
    if (binder_become_context_manager(bs)) { // may only succeed once system-wide; a second call fails
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    binder_loop(bs, svcmgr_handler);
    return 0;
}
Three important interface functions stand out: binder_open opens the Binder driver, binder_become_context_manager(bs) registers sm as the Binder steward, and binder_loop enters a loop to receive client messages. Let's look at each in turn.
1.1、binder_open
struct binder_state *binder_open(unsigned mapsize) // mapsize == 128K
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }

    bs->fd = open("/dev/binder", O_RDWR); // open the /dev/binder driver, get an fd
    if (bs->fd < 0) {
        fprintf(stderr, "binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    bs->mapsize = mapsize; // 128K
    // mmap the /dev/binder fd into a 128K region; operating on that memory operates on the device
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}
1.2、binder_become_context_manager
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
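On the driver side, BINDER_SET_CONTEXT_MGR may only succeed once system-wide: the first caller becomes the context manager (handle 0) and later callers get an error. The gist of that check can be modeled with a toy sketch; this is an illustration of the once-only semantics, not the kernel code (the real check lives in the driver's binder_ioctl handling):

```c
#include <errno.h>

/* Toy model of the driver-side check: the first caller claims the
 * context-manager slot, every later caller is rejected with -EBUSY. */
static int context_mgr_registered;

int set_context_mgr(void) {
    if (context_mgr_registered)
        return -EBUSY;          /* someone already owns handle 0 */
    context_mgr_registered = 1; /* this process is now the steward */
    return 0;
}
```

This is why servicemanager must start early from init: whoever wins this race owns handle 0 for the lifetime of the system.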
1.3、binder_loop
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned)); // write BC_ENTER_LOOPER to the Binder driver

    for (;;) { // loop forever
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // read/write data via the Binder driver
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func); // parse the Binder data
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
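Note how binder_write and binder_loop use the same BINDER_WRITE_READ ioctl but fill binder_write_read differently: the size fields alone decide the direction. With write_size set and read_size zero the call writes and returns immediately; with read_size set and write_size zero it blocks waiting for incoming work. A self-contained sketch of that packing (the struct here is a simplified mirror of the uapi layout, for illustration only):

```c
#include <stdint.h>
#include <string.h>

/* Simplified mirror of the kernel's struct binder_write_read. */
struct bwr_sketch {
    size_t    write_size;
    size_t    write_consumed;
    uintptr_t write_buffer;
    size_t    read_size;
    size_t    read_consumed;
    uintptr_t read_buffer;
};

/* binder_write() shape: write-only -- read_size stays 0, so the
 * driver consumes the commands and returns without blocking. */
void fill_write_only(struct bwr_sketch *bwr, void *data, size_t len) {
    memset(bwr, 0, sizeof(*bwr));
    bwr->write_size = len;
    bwr->write_buffer = (uintptr_t) data;
}

/* binder_loop() shape: read-only -- write_size stays 0, so the
 * driver blocks until there is a transaction to deliver. */
void fill_read_only(struct bwr_sketch *bwr, void *buf, size_t len) {
    memset(bwr, 0, sizeof(*bwr));
    bwr->read_size = len;
    bwr->read_buffer = (uintptr_t) buf;
}
```

This one-struct, two-direction design is why "no new messages" in binder_loop turns into a sleep inside the ioctl rather than a busy spin.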
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    int r = 1;
    uint32_t *end = ptr + (size / 4);

    while (ptr < end) {
        uint32_t cmd = *ptr++;
#if TRACE
        fprintf(stderr, "%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr, "  %08x %08x\n", ptr[0], ptr[1]);
#endif
            ptr += 2;
            break;
        case BR_TRANSACTION: {
            struct binder_txn *txn = (void *) ptr;
            if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);
                binder_send_reply(bs, &reply, txn->data, res);
            }
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
        }
        case BR_REPLY: {
            struct binder_txn *txn = (void*) ptr;
            if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                ALOGE("parse: reply too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            } else {
            }
            ptr += (sizeof(*txn) / sizeof(uint32_t));
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (void*) *ptr++;
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}
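The pattern binder_parse follows — read one command word, then advance the cursor past that command's payload before looping — can be isolated into a self-contained sketch. The CMD_* values below are made-up stand-ins, not the real BR_* constants:

```c
#include <stdint.h>

/* Stand-in command codes (illustrative; the real BR_* values come
 * from the binder uapi header). CMD_PAIR carries two payload words,
 * like the BR_INCREFS/BR_ACQUIRE family. */
enum { CMD_NOOP = 1, CMD_PAIR = 2 };

/* Walk a command stream the way binder_parse does: consume one
 * command word per iteration, then skip that command's payload.
 * Returns the number of commands handled, or -1 on an unknown one. */
int parse_stream(const uint32_t *ptr, uint32_t size, int *pairs_seen) {
    const uint32_t *end = ptr + size / 4;
    int n = 0;
    while (ptr < end) {
        uint32_t cmd = *ptr++;
        switch (cmd) {
        case CMD_NOOP:
            break;          /* no payload */
        case CMD_PAIR:
            (*pairs_seen)++;
            ptr += 2;       /* skip the two payload words */
            break;
        default:
            return -1;      /* binder_parse logs "OOPS" here */
        }
        n++;
    }
    return n;
}
```

The key point is that one ioctl read can return several back-to-back commands, which is why binder_parse loops until `ptr < end` fails rather than handling a single command per read.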
To sum up the logic above, it is straightforward:
1、binder_open opens the /dev/binder driver node and mmaps 128K of memory;
2、the BINDER_SET_CONTEXT_MGR command registers servicemanager as the Binder steward;
3、binder_loop enters an endless loop waiting for client messages: reads pull data out of the Binder driver, the data is handled by the func callback, and the result goes back to the driver. func is the svcmgr_handler function pointer passed in from main() in service_manager.c.

1.3.1、The BR_TRANSACTION branch of binder_parse
The purpose of servicemanager is to answer lookups from a Binder Server name (the "domain name") to a Server handle (the "IP address"). It is reasonable to infer that the services sm provides include at least:
1、registration: when a Binder Server comes up, it registers its (name, Binder handle) pair with servicemanager;
2、lookup: when a Binder Client queries sm, sm returns the Binder Server's handle to the client;
3、perhaps other queries such as version numbers, but those are not essential.
In the BR_TRANSACTION branch, the most important piece is the func callback, i.e. the svcmgr_handler function pointer passed from main() in service_manager.c. Starting from svcmgr_handler, let's analyze how sm resolves this name-to-handle mapping:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data_secctx *txn_secctx,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    uint32_t dumpsys_priority;

    struct binder_transaction_data *txn = &txn_secctx->transaction_data;

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    strict_policy = bio_get_uint32(msg);
    bio_get_uint32(msg);
    // step1: the bio_xx family of functions conveniently extracts typed data from
    // the message; the specific command is then dispatched below
    s = bio_get_string16(msg, &len);
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr, "invalid id %s\n", str8(s, len));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        // do_find_service performs the lookup; sm maintains a global svclist that
        // holds every Server's registration info -- follow it in if you are curious,
        // the logic is not complicated
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid,
                                 (const char*) txn_secctx->secctx);
        bio_put_ref(reply, handle); // store the lookup result to return to the client
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        dumpsys_priority = bio_get_uint32(msg);
        // perform the registration
        if (do_add_service(bs, s, len, handle, txn->sender_euid, allow_isolated, dumpsys_priority,
                           txn->sender_pid, (const char*) txn_secctx->secctx))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);
        uint32_t req_dumpsys_priority = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid, (const char*) txn_secctx->secctx, txn->sender_euid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                  txn->sender_euid);
            return -1;
        }
        si = svclist;
        // walk through the list of services n times skipping services that
        // do not support the requested priority
        while (si) {
            if (si->dumpsys_priority & req_dumpsys_priority) {
                if (n == 0) break;
                n--;
            }
            si = si->next;
        }
        if (si) {
            bio_put_string16(reply, si->name); // store the result
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
SVC_MGR_GET_SERVICE and SVC_MGR_CHECK_SERVICE are effectively the same command: both look up a handle by Server name. SVC_MGR_ADD_SERVICE registers a Binder Server. SVC_MGR_LIST_SERVICES retrieves the matching Server from the list.
ServiceManager's job is actually quite simple: it maintains an internal svclist that stores the information for every Server (the per-entry data structure is svcinfo), and both registration and lookup operate on this list.
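For illustration, here is a simplified, self-contained sketch of that registry. The field names follow service_manager.c's struct svcinfo, but the real struct also carries a binder_death record, allow_isolated and dumpsys_priority, and uses a flexible array for the name; permission checks and duplicate handling are omitted here:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of servicemanager's registry entry. */
struct svcinfo {
    struct svcinfo *next;
    uint32_t handle;     /* Binder handle: the "IP address" */
    size_t len;          /* name length in UTF-16 code units */
    uint16_t name[32];   /* service name: the "domain name" (fixed size for brevity) */
};

static struct svcinfo *svclist;  /* head of the singly linked list */

/* Lookup: linear scan by (name, len), as find_svc() does. */
struct svcinfo *find_svc(const uint16_t *s16, size_t len) {
    struct svcinfo *si;
    for (si = svclist; si; si = si->next) {
        if (si->len == len && !memcmp(si->name, s16, len * sizeof(uint16_t)))
            return si;
    }
    return NULL;
}

/* Registration: prepend a new (name, handle) entry. */
struct svcinfo *add_svc(const uint16_t *s16, size_t len, uint32_t handle) {
    struct svcinfo *si = calloc(1, sizeof(*si));
    if (!si)
        return NULL;
    si->handle = handle;
    si->len = len;
    memcpy(si->name, s16, len * sizeof(uint16_t));
    si->next = svclist;
    svclist = si;
    return si;
}
```

SVC_MGR_ADD_SERVICE boils down to the add path and SVC_MGR_GET_SERVICE/SVC_MGR_CHECK_SERVICE to the find path over exactly this kind of list.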
To recap: by the time the flow reaches svcmgr_handler inside binder_parse (the SVC_MGR_GET_SERVICE branch), the Binder Server's handle has been found. binder_parse then uses binder_send_reply to hand the lookup result back to the Binder driver, which returns it to the client process. binder_parse moves on to the next iteration of its loop until ptr < end becomes false, which means servicemanager has processed every message it fetched from the driver in the last round; binder_loop then queries the driver again with res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr) and, if there are no new messages, goes to sleep.
Next, let's look at the BR_REPLY branch. Back in binder_parse, keeping only the key part:
case BR_REPLY: {
    struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
    if (bio) {
        bio_init_from_txn(bio, txn);
        bio = 0;
    } else {
    }
    ptr += sizeof(*txn);
    r = 0;
    break;
}
There is nothing substantial here, and that is expected: servicemanager's sole purpose is to answer Binder name ("domain name") to handle ("IP address") lookups for other processes; it has no need for back-and-forth conversations with them. Keep in mind that the BR_REPLY and BR_TRANSACTION branches here act on behalf of the sm process itself. If a BR_REPLY ever arrived, it would mean sm had actively initiated a transaction toward an app process and was now receiving the response (a reply, but one triggered by sm's own outgoing message), which is different from the BR_TRANSACTION path, where sm passively returns a looked-up Binder Server handle via binder_send_reply. In other words, servicemanager is rather aloof: it never needs to start a conversation with another process.
At this point the servicemanager process is up and running. How do we actually get hold of the ServiceManager service?
2、Obtaining the ServiceManager service
We know the sm service is implemented in native code. Does that mean only the native layer can use it? Of course not: the upper layers do an enormous amount of cross-process work; every apk launch alone goes through AMS and other services across who knows how many processes (all via Binder). Hold on to this fundamental point: Binder IPC is rooted in the Binder driver, so any language that can reach the Binder driver and use it can participate. (That said, the Binder framework layered from the kernel driver up to the application layer is huge; this is simply the right way to understand it at the macro level.) Clearly this splits into two parts, which for reasons of length get two separate posts:
- How native C++ obtains the sm service through Binder:
sp<IServiceManager> sm = defaultServiceManager(); // get the sm service
sp<IBinder> binder = sm->getService(String16(ServerName));
sp<IXxxService> service = interface_cast<IXxxService>(binder); // IXxxService: the target interface type
- How a Java application at the application layer obtains the sm service through Binder.


