
Android Framework: ServiceManager Source Code Analysis


Let's start with the overall architecture diagram.

When a client wants to call a system service, such as AMS (an IBinder), it cannot obtain that service directly. This is where ServiceManager comes in.

So what is ServiceManager? ServiceManager is itself a service. Services such as AMS are registered with ServiceManager during system startup; when a client asks for one of these services, ServiceManager looks it up and hands it back. It acts like a steward: it manages all the services and handles client requests.
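The steward role can be sketched as a simple name-to-service registry. This is only an illustration of the idea (class and method names here are made up); the real ServiceManager holds kernel-level binder handles, not in-process objects:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the ServiceManager idea: services register under a
// name; clients look them up by name instead of holding them directly.
public class ServiceRegistry {
    // In the real ServiceManager the values are binder handles, not objects.
    private final Map<String, Object> services = new ConcurrentHashMap<>();

    public void addService(String name, Object service) {
        services.put(name, service);
    }

    public Object getService(String name) {
        return services.get(name);  // null if nothing registered under name
    }
}
```

The real flow adds a cross-process hop on both the register and lookup paths, which is what the rest of this article unpacks.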

ServiceManager Internals

1 ServiceManager Startup and Registration
1.1 What does binder_open do?
1.2 What does binder_become_context_manager do?
1.3 What does binder_loop do?
2 Obtaining ServiceManager in the Native Layer
2.1 Creating the BpBinder
2.2 Leading into AIDL
3 AIDL Usage and Internals
3.1 The Stub Class
3.2 The Proxy Class
3.3 The transact Method

1 ServiceManager Startup and Registration

Like other core daemons, ServiceManager is created when the init process starts: init parses init.rc and launches ServiceManager. Its entry point is the main function in service_manager.c:

/frameworks/native/cmds/servicemanager/service_manager.c

int main(int argc, char** argv)
{
    struct binder_state *bs;
    union selinux_callback cb;
    char *driver;
    ……
    // step 1: allocate a 128K buffer
    bs = binder_open(driver, 128*1024);
    ……
    // step 2
    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    ……
    //step 3
    binder_loop(bs, svcmgr_handler);

    return 0;
}
1.1 What does binder_open do?

main first calls binder_open (note: this is not the binder_open inside the Binder driver). This function opens the Binder driver and maps ServiceManager's user-space virtual memory to the kernel's, allocating 128K for ServiceManager.

# /frameworks/native/cmds/servicemanager/binder.c

struct binder_state *binder_open(const char* driver, size_t mapsize)
{
    struct binder_state *bs = malloc(sizeof(*bs));
    ……
    // open the Binder driver
    bs->fd = open(driver, O_RDWR | O_CLOEXEC);
    ……
    bs->mapsize = mapsize;
    // mmap memory mapping
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }
    ……
}
1.2 What does binder_become_context_manager do?

Next, main calls binder_become_context_manager to make ServiceManager the context manager, i.e. the steward:

# /frameworks/native/cmds/servicemanager/binder.c

int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}

binder_become_context_manager calls ioctl on the Binder driver with the BINDER_SET_CONTEXT_MGR command:

case BINDER_SET_CONTEXT_MGR:
	ret = binder_ioctl_set_ctx_mgr(filp);
	if (ret)
		goto err;
	ret = security_binder_set_context_mgr(proc->tsk);
	if (ret < 0)
		goto err;
	break;

static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
	int ret = 0;
	struct binder_proc *proc = filp->private_data;
	kuid_t curr_euid = current_euid();
	……
	if (uid_valid(binder_context_mgr_uid)) {
		if (!uid_eq(binder_context_mgr_uid, curr_euid)) {
			pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
			       from_kuid(&init_user_ns, curr_euid),
			       from_kuid(&init_user_ns,
					binder_context_mgr_uid));
			ret = -EPERM;
			goto out;
		}
	} else {
		binder_context_mgr_uid = curr_euid;
	}
	binder_context_mgr_node = binder_new_node(proc, 0, 0);
	if (binder_context_mgr_node == NULL) {
		ret = -ENOMEM;
		goto out;
	}
	……
}

Under the BINDER_SET_CONTEXT_MGR command, a binder_node is created and bound to the current process (node->proc = proc). The node's work entry and async_todo list are then initialized, which is loosely analogous to a MessageQueue:

static struct binder_node *binder_new_node(struct binder_proc *proc,
					   binder_uintptr_t ptr,
					   binder_uintptr_t cookie)
{
	struct rb_node **p = &proc->nodes.rb_node;
	struct rb_node *parent = NULL;
	struct binder_node *node;
	……
	node = kzalloc(sizeof(*node), GFP_KERNEL);
	node->debug_id = ++binder_last_id;
	node->proc = proc;
	node->ptr = ptr;
	node->cookie = cookie;
	node->work.type = BINDER_WORK_NODE;
	INIT_LIST_HEAD(&node->work.entry);
	INIT_LIST_HEAD(&node->async_todo);
	……
}
1.3 What does binder_loop do?

Finally, main calls binder_loop to process incoming data. As the code shows, binder_loop runs an infinite loop:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];
    …… 
    readbuf[0] = BC_ENTER_LOOPER;
    // write the looper state to the driver
    binder_write(bs, readbuf, sizeof(uint32_t));
    // infinite loop
    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;
        // calls binder_ioctl again, this time to read
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        ……

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        ……
    }
}

binder_loop first issues the BC_ENTER_LOOPER command, marking entry into the loop, by writing via binder_write (again, this is not the Binder driver's binder_write):

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;
    // write_size > 0: there is data to write
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    // read_size == 0: nothing to read this time
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    ……
}

This ends up in the driver's binder_ioctl under the BINDER_WRITE_READ command (seen in the previous section). Because bwr.write_size > 0, execution takes the write path:

if (bwr.write_size > 0) {
	ret = binder_thread_write(proc, thread,
				  // the data written from user space: BC_ENTER_LOOPER
				  bwr.write_buffer,
				  bwr.write_size,
				  &bwr.write_consumed);
	trace_binder_write_done(ret);
	if (ret < 0) {
		bwr.read_consumed = 0;
		if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
			ret = -EFAULT;
		goto out;
	}
}

binder_thread_write mostly dispatches on the command. The full function is long, so only the BC_ENTER_LOOPER case is shown here:

case BC_ENTER_LOOPER:
	binder_debug(BINDER_DEBUG_THREADS,
		     "%d:%d BC_ENTER_LOOPER\n",
		     proc->pid, thread->pid);
	if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
		thread->looper |= BINDER_LOOPER_STATE_INVALID;
		binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n",
				  proc->pid, thread->pid);
	}
	thread->looper |= BINDER_LOOPER_STATE_ENTERED;
	break;

This simply sets the current thread's looper state to BINDER_LOOPER_STATE_ENTERED.

At this point the infinite loop begins and binder_ioctl is called again. This time read_size > 0, so the read path is taken, entering binder_thread_read:

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
	……
	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}
	
	retry:
	// true here: no transaction in flight and the todo list is empty
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);
	……
	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	if (wait_for_proc_work)
		proc->ready_threads++;
	……
	if (wait_for_proc_work) {
		……
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
	}

	binder_lock(__func__);

	if (wait_for_proc_work)
		proc->ready_threads--;
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
	……
}

In binder_thread_read, the driver checks whether there is any work to process and whether the call is blocking. If there is no work, the thread goes to sleep in wait_event_freezable_exclusive until work arrives.
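The loop-and-wait behavior of binder_loop plus binder_thread_read can be sketched with an ordinary blocking queue. This is an illustration only (all names below are invented); the real driver blocks inside the kernel via wait_event_freezable_exclusive rather than in user space:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the binder_loop pattern: a looper thread blocks until work
// arrives on its todo queue, handles it, and goes back to waiting.
public class LooperSketch {
    private final BlockingQueue<Runnable> todo = new LinkedBlockingQueue<>();
    private volatile boolean entered = false; // like BINDER_LOOPER_STATE_ENTERED

    public void enqueue(Runnable work) { todo.add(work); }

    public void loopOnce() {
        entered = true;                 // BC_ENTER_LOOPER analogue
        try {
            Runnable work = todo.take(); // blocks while the queue is empty,
            work.run();                  // like wait_event_freezable_exclusive
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public boolean hasEntered() { return entered; }
}
```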

With that, ServiceManager registration is complete: just the three steps above.

2 Obtaining ServiceManager in the Native Layer

Once ServiceManager is registered, a client that wants to register or look up a service must first obtain a reference to ServiceManager.

2.1 Creating the BpBinder
/frameworks/native/libs/binder/IServiceManager.cpp

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            // obtain the ServiceManager object
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }
    return gDefaultServiceManager;
}

defaultServiceManager is a singleton; obtaining ServiceManager boils down to the code above. Let's peel it apart layer by layer, starting with what ProcessState is for:

# /frameworks/native/libs/binder/ProcessState.cpp

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState("/dev/binder");
    return gProcess;
}

self creates the ProcessState instance. Its constructor calls open_driver to open the Binder driver and configures the Binder thread pool with a maximum of 15 threads; every process owns one such Binder thread pool.

# /frameworks/native/libs/binder/ProcessState.cpp

#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)
#define DEFAULT_MAX_BINDER_THREADS 15

ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver))
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        ……
}

# open_driver
static int open_driver(const char *driver)
{
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        ……
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        // set the maximum size of the binder thread pool
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        ……
}

Execution then reaches the mmap call, which sets up the shared memory region of size 1M − 8K (BINDER_VM_SIZE). This is the user-space allocation mentioned earlier: an ordinary app process maps 1M − 8K for receiving Binder transactions, so that is the size of the mapping an ordinary service works with when talking to ServiceManager.

# /frameworks/native/libs/binder/ProcessState.cpp

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

Next comes getContextObject, which simply returns getStrongProxyForHandle(0); handle 0 is reserved for ServiceManager.

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
	……
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
            ……
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = BpBinder::create(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            ……
        }
    }
    return result;
}

getStrongProxyForHandle creates a BpBinder object (caching it in the handle table); getContextObject exists essentially to create this BpBinder.
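The caching behavior of getStrongProxyForHandle can be sketched like this: one proxy per handle, created lazily and reused. This is a hand-rolled analogue (the `Proxy` class below is a stand-in for BpBinder), not the real handle table:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of getStrongProxyForHandle: one proxy per handle, created
// lazily and cached, so repeated lookups return the same object.
public class ProxyCache {
    // Stand-in for BpBinder: just remembers which handle it proxies.
    public static class Proxy {
        public final int handle;
        Proxy(int handle) { this.handle = handle; }
    }

    private final Map<Integer, Proxy> handleTable = new HashMap<>();

    public synchronized Proxy getStrongProxyForHandle(int handle) {
        // create on first lookup, return the cached proxy afterwards
        return handleTable.computeIfAbsent(handle, Proxy::new);
    }
}
```

Handle 0 plays a special role in the real code: it always denotes ServiceManager, which is why getContextObject can hard-code it.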

# /frameworks/native/libs/binder/BpBinder.cpp

BpBinder::BpBinder(int32_t handle, int32_t trackedUid)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
    , mTrackedUid(trackedUid)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle, this);
}

BpBinder can be viewed as the client-side proxy for the server, here effectively ServiceManager. Because the call crosses processes, the client can never hold the server's object directly; holding the proxy is as good as holding a ServiceManager instance. In the diagram at the top, BpBinder is the object pointing at the server.

Because ServiceManager registered itself with Binder back when the Android init process started, it plays the server role, the BBinder. A client cannot simply new up a ServiceManager instance; it can only obtain a BpBinder acting as the server's proxy. BpBinder and BBinder are the two ends of the same connection in the native layer; in the Java layer, the client-side counterpart is BinderProxy.

2.2 Leading into AIDL
# /frameworks/native/include/binder/IInterface.h

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

interface_cast converts the BpBinder object into an IServiceManager object. How exactly?

#  /frameworks/native/include/binder/IInterface.h

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const ::android::String16 I##INTERFACE::descriptor(NAME);           \
    const ::android::String16&                                          \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    ::android::sp<I##INTERFACE> I##INTERFACE::asInterface(              \
            const ::android::sp<::android::IBinder>& obj)               \
    {                                                                   \
        ::android::sp<I##INTERFACE> intr;                               \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }

Note that if you substitute ServiceManager for INTERFACE, the object that ends up being new'ed is a BpServiceManager. In other words, the ServiceManager object the client finally obtains is a BpServiceManager.

The effect of interface_cast —> new BpServiceManager(new BpBinder)

So what does the path from the Java layer down to the native layer look like?

3 AIDL Usage and Internals

First create an .aidl file and look at what the compiler generates:

interface IServiceManager {

    void addPerson(int age);
}

============================> generated code after compilation

  public static abstract class Stub extends android.os.Binder implements com.study.modulelization.IServiceManager
  {
    private static final java.lang.String DESCRIPTOR = "com.study.modulelization.IServiceManager";
    
    public Stub()
    {
      this.attachInterface(this, DESCRIPTOR);
    }
    
    public static com.study.modulelization.IServiceManager asInterface(android.os.IBinder obj)
    {
      if ((obj==null)) {
        return null;
      }
      android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
      if (((iin!=null)&&(iin instanceof com.study.modulelization.IServiceManager))) {
        return ((com.study.modulelization.IServiceManager)iin);
      }
      return new com.study.modulelization.IServiceManager.Stub.Proxy(obj);
    }
    @Override 
    public android.os.IBinder asBinder()
    {
      return this;
    }
    @Override 
    public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException
    {
      java.lang.String descriptor = DESCRIPTOR;
      switch (code)
      {
        case INTERFACE_TRANSACTION:
        {
          reply.writeString(descriptor);
          return true;
        }
        case TRANSACTION_addPerson:
        {
          data.enforceInterface(descriptor);
          int _arg0;
          _arg0 = data.readInt();
          // invokes the server-side addPerson
          this.addPerson(_arg0);
          reply.writeNoException();
          return true;
        }
        default:
        {
          return super.onTransact(code, data, reply, flags);
        }
      }
    }
    private static class Proxy implements com.study.modulelization.IServiceManager
    {
      private android.os.IBinder mRemote;
      Proxy(android.os.IBinder remote)
      {
        mRemote = remote;
      }
      @Override 
      public android.os.IBinder asBinder()
      {
        return mRemote;
      }
      public java.lang.String getInterfaceDescriptor()
      {
        return DESCRIPTOR;
      }
      @Override 
      public void addPerson(int age) throws android.os.RemoteException
      {
        android.os.Parcel _data = android.os.Parcel.obtain();
        android.os.Parcel _reply = android.os.Parcel.obtain();
        try {
          _data.writeInterfaceToken(DESCRIPTOR);
          _data.writeInt(age);
          boolean _status = mRemote.transact(Stub.TRANSACTION_addPerson, _data, _reply, 0);
          if (!_status && getDefaultImpl() != null) {
            getDefaultImpl().addPerson(age);
            return;
          }
          _reply.readException();
        }
        finally {
          _reply.recycle();
          _data.recycle();
        }
      }
      public static com.study.modulelization.IServiceManager sDefaultImpl;
    }
    static final int TRANSACTION_addPerson = (android.os.IBinder.FIRST_CALL_TRANSACTION + 0);
    public static boolean setDefaultImpl(com.study.modulelization.IServiceManager impl) {
      // only one user of this interface can use this function
      // at a time. This is a heuristic to detect if two different
      // users in the same process use this function.
      if (Stub.Proxy.sDefaultImpl != null) {
        throw new IllegalStateException("setDefaultImpl() called twice");
      }
      if (impl != null) {
        Stub.Proxy.sDefaultImpl = impl;
        return true;
      }
      return false;
    }
    public static com.study.modulelization.IServiceManager getDefaultImpl() {
      return Stub.Proxy.sDefaultImpl;
    }
  }
  public void addPerson(int age) throws android.os.RemoteException;

Let's walk through the generated code piece by piece; note that this is all code the client sees.

3.1 The Stub Class

Compiling the AIDL interface produces a Stub class, which extends android.os.Binder (an IBinder) and implements the AIDL interface IServiceManager. It plays the same role that ActivityManagerNative plays for AMS: it is the Binder class.

public static com.study.modulelization.IServiceManager asInterface(android.os.IBinder obj)
    {
      if ((obj==null)) {
        return null;
      }
      android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
      if (((iin!=null)&&(iin instanceof com.study.modulelization.IServiceManager))) {
        return ((com.study.modulelization.IServiceManager)iin);
      }
      return new com.study.modulelization.IServiceManager.Stub.Proxy(obj);
    }

The key method here is asInterface. It determines whether the call crosses processes: if it does not (the service lives in the caller's process), queryLocalInterface finds the service object and it is returned directly; if it does cross processes, a Proxy object is returned instead.

new IServiceManager.Stub.Proxy(obj : IBinder)

==========> corresponding native-layer form

new BpServiceManager(new BpBinder())

In other words, Proxy corresponds to ServiceManagerProxy, and the obj: IBinder is a BinderProxy.
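The same-process shortcut in asInterface can be demonstrated in plain Java. These are simplified stand-ins (every name below is invented), not the android.os classes:

```java
// Sketch of Stub.asInterface: if the binder object is local (same
// process), return it directly; otherwise wrap it in a proxy.
public class AsInterfaceSketch {
    interface IServiceLike { String who(); }

    // Local implementation ("Stub"): found via the local-interface check.
    static class LocalService implements IServiceLike {
        public String who() { return "local"; }
    }

    // A remote handle carries no local interface, so a proxy is built.
    static class ProxyService implements IServiceLike {
        public String who() { return "proxy"; }
    }

    static IServiceLike asInterface(Object binder) {
        if (binder instanceof IServiceLike) {
            return (IServiceLike) binder;   // same process: use it directly
        }
        return new ProxyService();          // cross process: wrap in a proxy
    }
}
```

The real asInterface makes the same decision via obj.queryLocalInterface(DESCRIPTOR) rather than an instanceof check.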

3.2 The Proxy Class
Proxy(android.os.IBinder remote)
{
  mRemote = remote;
}

Here remote is a BinderProxy. Looking back at the framework diagram, what the client holds is the server's proxy, a BinderProxy:

 val intent = Intent()
 intent.action = ""
 intent.setPackage("")

 val connection = object : ServiceConnection{
     override fun onServiceConnected(name: ComponentName?, service: IBinder?) {
     
         // service comes from the server side; it corresponds to the server's BBinder
         val sm:IServiceManager = IServiceManager.Stub.asInterface(service)
         sm.addPerson(13)
     }
     override fun onServiceDisconnected(name: ComponentName?) {
     }
 }
 bindService(intent,connection, BIND_AUTO_CREATE)

After the client calls bindService, onServiceConnected is invoked; service is what the server returned. Stub.asInterface then checks whether service lives in the same process as the caller; for cross-process communication, it returns a Proxy object.

Calling addPerson on the Proxy then invokes mRemote.transact for the cross-process call, sending the client's data together with the code Stub.TRANSACTION_addPerson to the server:

@Override 
public void addPerson(int age) throws android.os.RemoteException
{
  android.os.Parcel _data = android.os.Parcel.obtain();
  android.os.Parcel _reply = android.os.Parcel.obtain();
  try {
    _data.writeInterfaceToken(DESCRIPTOR);
    _data.writeInt(age);
    boolean _status = mRemote.transact(Stub.TRANSACTION_addPerson, _data, _reply, 0);
    if (!_status && getDefaultImpl() != null) {
      getDefaultImpl().addPerson(age);
      return;
    }
    _reply.readException();
  }
  finally {
    _reply.recycle();
    _data.recycle();
  }
}

Question: when the client calls an AIDL interface method, how does the call reach the server's data across processes, given that the native layer knows nothing about these AIDL interfaces?

In the native layer, BpBinder is the object with actual cross-process transport capability. For the client to communicate across processes it needs a BpBinder instance, so the framework wraps one in a BinderProxy; BinderProxy and BpBinder are peers, the Java-layer and native-layer faces of the same proxy.

3.3 The transact Method

When the client calls addPerson, BinderProxy's transact method performs the cross-process call, shipping the client's data together with Stub.TRANSACTION_addPerson to the server.

The Stub is effectively the server side: it parses that data in onTransact and calls this.addPerson(_arg0), where this is the Stub object created on the server:

@Override 
public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException
{
  java.lang.String descriptor = DESCRIPTOR;
  switch (code)
  {
    case INTERFACE_TRANSACTION:
    {
      reply.writeString(descriptor);
      return true;
    }
    case TRANSACTION_addPerson:
    {
      data.enforceInterface(descriptor);
      int _arg0;
      _arg0 = data.readInt();
      // invokes the server-side addPerson
      this.addPerson(_arg0);
      reply.writeNoException();
      return true;
    }
    default:
    {
      return super.onTransact(code, data, reply, flags);
    }
  }
}

The this here is the Stub that RemoteService returns from onBind; its addPerson processes the data and can modify server-side state. If a result must be returned, the client thread stays suspended until the reply comes back.

class RemoteService : Service() {

    override fun onBind(intent: Intent): IBinder {

        return object : IServiceManager.Stub(){
            override fun addPerson(age: Int) {
                // server-side Binder implementation
            }
        }
    }
}
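The whole round trip — proxy marshals the argument, transact ships code plus data, stub dispatches in onTransact — can be condensed into one in-process sketch. The constant name TRANSACTION_addPerson mirrors the generated code; everything else (class names, the int-only "parcel") is a simplified stand-in:

```java
import java.util.ArrayList;
import java.util.List;

// One-file sketch of the AIDL round trip: Proxy.addPerson ->
// transact(code, data) -> Stub.onTransact -> the real addPerson.
public class TransactSketch {
    static final int TRANSACTION_addPerson = 1;

    // Server side ("Stub"): dispatches on the transaction code.
    static class Stub {
        final List<Integer> people = new ArrayList<>();

        boolean onTransact(int code, int data) {
            if (code == TRANSACTION_addPerson) {
                people.add(data);       // the actual server-side work
                return true;
            }
            return false;               // unknown code: not handled
        }
    }

    // Client side ("Proxy"): marshals the argument and calls transact.
    static class Proxy {
        final Stub remote;              // stand-in for mRemote (BinderProxy)
        Proxy(Stub remote) { this.remote = remote; }

        void addPerson(int age) {
            remote.onTransact(TRANSACTION_addPerson, age);
        }
    }
}
```

In the real flow the two halves live in different processes and the call between them passes through the Binder driver; here they are wired directly so the dispatch logic is visible in isolation.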

The diagram at the top makes these relationships clear.
