Category: Servers & Storage
2015-07-24 10:24:23
Because the port an NFS server listens on is not fixed at startup, the server registers its port with the RPC (portmap) service, and a client learns the listening port through an RPC query. This post analyzes the whole RPC handling path. Assume a client RPC request has just arrived at the server. From the earlier analysis of NFS protocol initialization we know that all data read/write events are handled in nfs_rpcsvc_conn_data_handler; since this is request data sent by the client, the epoll_in branch runs, and those events are handled in nfs_rpcsvc_conn_data_poll_in, implemented as follows:
int nfs_rpcsvc_conn_data_poll_in (rpcsvc_conn_t *conn)
{
        ssize_t  dataread = -1;
        size_t   readsize = 0;
        char    *readaddr = NULL;
        int      ret = -1;

        readaddr = nfs_rpcsvc_record_read_addr (&conn->rstate); // address at which the record state continues reading
        readsize = nfs_rpcsvc_record_read_size (&conn->rstate); // number of bytes the record state still needs
        dataread = nfs_rpcsvc_socket_read (conn->sockfd, readaddr, readsize); // read the record data from the socket
        if (dataread > 0)
                ret = nfs_rpcsvc_record_update_state (conn, dataread); // process the bytes just read

        return ret;
}
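For orientation, here is a minimal, self-contained sketch (not GlusterFS code; conn_t and the helper names are hypothetical) of the event loop the text describes: an epoll loop hands an epoll_in event on a connection to a poll-in handler, which reads exactly as many bytes as the current record still needs.

#include <stdint.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

typedef struct {
        int    sockfd;
        char   buf[8192];   // record buffer
        size_t filled;      // bytes already read into the record
        size_t wanted;      // bytes the current record still needs
} conn_t;

// analogue of nfs_rpcsvc_conn_data_poll_in: compute the read address and
// size from the record state, read from the socket, update the state
static ssize_t conn_poll_in (conn_t *conn)
{
        char   *readaddr = conn->buf + conn->filled;
        size_t  readsize = conn->wanted;
        ssize_t n = read (conn->sockfd, readaddr, readsize);

        if (n > 0) {
                conn->filled += (size_t) n;
                conn->wanted -= (size_t) n;
        }
        return n;
}

// analogue of nfs_rpcsvc_conn_data_handler: dispatch on the event type
static void conn_data_handler (conn_t *conn, uint32_t events)
{
        if (events & EPOLLIN)  // data from the client: the epoll_in path
                (void) conn_poll_in (conn);
        // epoll_out and error events would be handled here as well
}

int main (void)
{
        int fds[2];
        if (pipe (fds) != 0)
                return 1;

        conn_t conn = { .sockfd = fds[0], .filled = 0, .wanted = 5 };

        int epfd = epoll_create1 (0);
        struct epoll_event ev = { .events = EPOLLIN, .data = { .ptr = &conn } };
        epoll_ctl (epfd, EPOLL_CTL_ADD, conn.sockfd, &ev);

        write (fds[1], "hello", 5); // pretend a client request arrived

        struct epoll_event out;
        if (epoll_wait (epfd, &out, 1, 1000) == 1)
                conn_data_handler (out.data.ptr, out.events);

        printf ("record has %zu of 5 bytes\n", conn.filled);
        close (epfd); close (fds[0]); close (fds[1]);
        return 0;
}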
nfs_rpcsvc_conn_data_poll_in first decides, from the receive state kept in the RPC record, what kind of data to expect. There are two kinds: the fragment header and the RPC message proper. The length of the RPC message is given by the fragment header, so a message is normally received header first and call body second; anything else is treated as a malformed message. The function reads that many bytes from the socket into the record-state structure and hands them to nfs_rpcsvc_record_update_state, which drives the rest of the RPC processing, including XDR (eXternal Data Representation) decoding and looking up and executing the called procedure. It is implemented as follows:
int nfs_rpcsvc_record_update_state (rpcsvc_conn_t *conn, ssize_t dataread)
{
        rpcsvc_record_state_t   *rs = NULL;
        rpcsvc_t                *svc = NULL;

        rs = &conn->rstate;
        if (nfs_rpcsvc_record_readfraghdr (rs)) // does the record state expect a fragment header?
                dataread = nfs_rpcsvc_record_update_fraghdr (rs, dataread); // consume the fragment header
        if (nfs_rpcsvc_record_readfrag (rs)) { // fragment body still to be read?
                if ((dataread > 0) && (nfs_rpcsvc_record_vectored (rs))) { // is this a vectored fragment?
                        dataread = nfs_rpcsvc_handle_vectored_frag (conn, dataread); // handle vectored fragment data
                } else if (dataread > 0) {
                        dataread = nfs_rpcsvc_record_update_frag (rs, dataread); // update the record's fragment data
                }
        }
        if ((nfs_rpcsvc_record_readfraghdr (rs)) && (rs->islastfrag)) { // next item is a header and this was the last fragment
                nfs_rpcsvc_handle_rpc_call (conn); // dispatch the RPC call
                svc = nfs_rpcsvc_conn_rpcsvc (conn); // get the rpcsvc instance for this connection
                nfs_rpcsvc_record_init (rs, svc->ctx->iobuf_pool); // re-initialize the record state for the next message
        }
        return 0;
}
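What nfs_rpcsvc_record_update_fraghdr is parsing is ONC RPC record marking (RFC 1831/5531): on a TCP stream, every fragment is preceded by a 4-byte big-endian word whose most significant bit flags the last fragment of a record and whose low 31 bits give the fragment length. A small self-contained sketch of that decoding, with hypothetical names:

#include <stdint.h>
#include <stdio.h>

#define RPC_LASTFRAG 0x80000000u
#define RPC_FRAGSIZE 0x7fffffffu

// analogue of the fragment-header decode: top bit = last fragment,
// low 31 bits = fragment length in bytes
static void parse_fraghdr (const unsigned char hdr[4],
                           uint32_t *fragsize, int *islastfrag)
{
        uint32_t word = ((uint32_t) hdr[0] << 24) | ((uint32_t) hdr[1] << 16) |
                        ((uint32_t) hdr[2] << 8)  |  (uint32_t) hdr[3];
        *islastfrag = (word & RPC_LASTFRAG) != 0;
        *fragsize   = word & RPC_FRAGSIZE;
}

int main (void)
{
        // last fragment, 100 bytes long: 0x80000064 on the wire
        unsigned char hdr[4] = { 0x80, 0x00, 0x00, 0x64 };
        uint32_t fragsize; int islast;
        parse_fraghdr (hdr, &fragsize, &islast);
        printf ("fragment: %u bytes, last=%d\n", fragsize, islast);
        return 0;
}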
nfs_rpcsvc_record_update_state first consumes the fragment header and updates the record state, then reads whatever follows based on that state. What follows falls into two cases. The first chunk to arrive is normally the fragment header, which records the length of the next piece to read, i.e. the length of the actual RPC call message; the next chunk is the call message itself, and each kind is handled differently. Handling the header mostly means preparing to receive the real message (its length, its type, and so on). While only a header has been seen, the RPC call handler is not invoked, since it cannot run until the call message itself has arrived. The next function to analyze is nfs_rpcsvc_handle_rpc_call, the core of the whole RPC dispatch, implemented as follows:
int nfs_rpcsvc_handle_rpc_call (rpcsvc_conn_t *conn)
{
        rpcsvc_actor_t          *actor = NULL;
        rpcsvc_request_t        *req = NULL;
        int                      ret = -1;

        req = nfs_rpcsvc_request_create (conn); // dynamically create an RPC request object
        if (!nfs_rpcsvc_request_accepted (req)) // was the RPC request accepted?
                ;
        actor = nfs_rpcsvc_program_actor (req); // look up the descriptor of the called procedure
        if ((actor) && (actor->actor)) {
                THIS = nfs_rpcsvc_request_actorxl (req); // xlator that will execute the actor
                nfs_rpcsvc_conn_ref (conn); // take a reference on the connection object
                ret = actor->actor (req); // execute the procedure
        }
        return ret;
}
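The actor lookup in the middle of this function follows a plain dispatch-table pattern: an RPC program is essentially an array of actors indexed by the procedure number from the call header. A minimal, hypothetical sketch of that pattern (the types are illustrative, not the GlusterFS definitions); per RFC 1813, procedure 0 of NFSv3 is NULL and procedure 1 is GETATTR:

#include <stdio.h>

typedef struct request { int procnum; } request_t;
typedef int (*actor_fn) (request_t *req);

typedef struct {
        const char *procname;
        actor_fn    actor;
} actor_t;

static int nfsproc_null (request_t *req)    { (void) req; return 0; }
static int nfsproc_getattr (request_t *req) { (void) req; printf ("GETATTR\n"); return 0; }

// the program's actor table, indexed by procedure number
static actor_t actors[] = {
        { "NULL",    nfsproc_null },
        { "GETATTR", nfsproc_getattr },
};

int main (void)
{
        request_t req = { .procnum = 1 };
        actor_t *actor = &actors[req.procnum]; // as in nfs_rpcsvc_program_actor
        return actor->actor (&req);
}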
nfs_rpcsvc_handle_rpc_call first creates an RPC request object from the connection object, then uses that request to obtain a descriptor for the called procedure, and finally executes the specific remote procedure through that descriptor. The request object is created from the connection state by nfs_rpcsvc_request_create, implemented as follows:
rpcsvc_request_t *nfs_rpcsvc_request_create (rpcsvc_conn_t *conn)
{
        char                    *msgbuf = NULL;
        struct rpc_msg           rpcmsg;
        struct iovec             progmsg;        /* RPC Program payload */
        rpcsvc_request_t        *req = NULL;
        int                      ret = -1;
        rpcsvc_program_t        *program = NULL;

        nfs_rpcsvc_alloc_request (conn, req); // get a request object from the memory pool and zero-initialize it
        msgbuf = iobuf_ptr (conn->rstate.activeiob); // message buffer inside the active IO buffer
        // decode the XDR-encoded data into an RPC call message
        ret = nfs_xdr_to_rpc_call (msgbuf, conn->rstate.recordsize, &rpcmsg,
                                   &progmsg, req->cred.authdata, req->verf.authdata);
        nfs_rpcsvc_request_init (conn, &rpcmsg, progmsg, req); // initialize the request object from the decoded message
        if (nfs_rpc_call_rpcvers (&rpcmsg) != 2) { // only RPC protocol version 2 is supported
                ;
        }
        ret = __nfs_rpcsvc_program_actor (req, &program); // find the right program for the requested version
        req->program = program;
        ret = nfs_rpcsvc_authenticate (req); // run the authentication check
        if (ret == RPCSVC_AUTH_REJECT) { // was authentication rejected?
                ;
        }
        return req;
}
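The heavy lifting in nfs_xdr_to_rpc_call is undoing XDR: an RPC call header is a sequence of big-endian 32-bit words: xid, message type (0 = CALL), RPC version (which must be 2, exactly the check made above), program number, program version, and procedure number, followed by the credential and verifier. A minimal decoding sketch, with hypothetical helper names:

#include <stdint.h>
#include <stdio.h>

struct rpc_call_hdr {
        uint32_t xid, msgtype, rpcvers, prog, vers, proc;
};

// read one big-endian 32-bit XDR word and advance the cursor
static uint32_t xdr_u32 (const unsigned char **p)
{
        uint32_t v = ((uint32_t)(*p)[0] << 24) | ((uint32_t)(*p)[1] << 16) |
                     ((uint32_t)(*p)[2] << 8)  |  (uint32_t)(*p)[3];
        *p += 4;
        return v;
}

static void parse_call_hdr (const unsigned char *buf, struct rpc_call_hdr *h)
{
        const unsigned char *p = buf;
        h->xid     = xdr_u32 (&p);
        h->msgtype = xdr_u32 (&p); // 0 = CALL, 1 = REPLY
        h->rpcvers = xdr_u32 (&p); // the version check above: must be 2
        h->prog    = xdr_u32 (&p); // e.g. 100003 for NFS
        h->vers    = xdr_u32 (&p);
        h->proc    = xdr_u32 (&p);
        // credential and verifier (flavor + opaque body) follow here
}

int main (void)
{
        unsigned char buf[24] = {
                0x00,0x00,0x00,0x01, // xid = 1
                0x00,0x00,0x00,0x00, // CALL
                0x00,0x00,0x00,0x02, // RPC version 2
                0x00,0x01,0x86,0xa3, // program 100003 (NFS)
                0x00,0x00,0x00,0x03, // version 3
                0x00,0x00,0x00,0x01, // procedure 1 (GETATTR)
        };
        struct rpc_call_hdr h;
        parse_call_hdr (buf, &h);
        printf ("prog=%u vers=%u proc=%u\n", h.prog, h.vers, h.proc);
        return 0;
}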
nfs_rpcsvc_request_create thus yields a request bound to an RPC program descriptor of the matching version; the descriptor of the actual remote procedure is then obtained from it by the following function:
rpcsvc_actor_t *nfs_rpcsvc_program_actor (rpcsvc_request_t *req)
{
        int                      err = SYSTEM_ERR;
        rpcsvc_actor_t          *actor = NULL;

        actor = &req->program->actors[req->procnum]; // index the actor table with the procedure number
        return actor;
}
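Note that this excerpt indexes the actor table directly with req->procnum. A real server must first validate the procedure number against the size of the program's table (answering PROC_UNAVAIL in RPC terms otherwise). A hedged sketch of that check, with illustrative field names rather than the GlusterFS ones:

#include <stddef.h>

typedef int (*actor_fn) (void *req);

typedef struct {
        actor_fn actor;
} actor_t;

typedef struct {
        actor_t *actors;    // table indexed by procedure number
        int      numactors; // illustrative: number of procedures in the table
} program_t;

// return the actor for procnum, or NULL if the client sent a procedure
// number outside the program's table
static actor_t *program_actor_checked (program_t *prog, int procnum)
{
        if (procnum < 0 || procnum >= prog->numactors)
                return NULL;
        return &prog->actors[procnum];
}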
The actor found by nfs_rpcsvc_program_actor is returned to the caller, which then executes the remote procedure itself. At that point one complete RPC call, and with it one NFS operation, is finished, and the NFS server waits for the next request. The path takes quite a few turns; the whole process, originally illustrated with a diagram, boils down to this chain: epoll_in event → nfs_rpcsvc_conn_data_poll_in → nfs_rpcsvc_record_update_state → nfs_rpcsvc_handle_rpc_call → nfs_rpcsvc_program_actor → actor->actor (req), after which the record state is re-initialized for the next message.
NFS Protocol Family
MNTv1: ftp://ftp.rfc-editor.org/in-notes/rfc1094.txt
    Directory Path Length: the length of the directory path.
MNTv3: ftp://ftp.rfc-editor.org/in-notes/rfc1813.txt
    Directory Path Length: the length of the directory path.
NFSv2: ftp://ftp.rfc-editor.org/in-notes/rfc1094.txt
    File info/Directory info: the file or directory information.
NFSv3: ftp://ftp.rfc-editor.org/in-notes/rfc1813.txt
    Improvements over NFS version 2 (see the sketch after this list):
    1. The number of over-the-wire packets for a given set of file operations is reduced by returning file attributes on every operation, decreasing the number of calls needed to re-fetch modified attributes.
    2. The write-throughput bottleneck caused by the synchronous definition of write in the NFS version 2 protocol has been addressed by adding support for unsafe writes on the NFS server: writes that have not been committed to stable storage before the operation returns.
    3. Limitations on transfer sizes have been relaxed.
    The ability to support multiple versions of a protocol in RPC allows implementors of the NFS version 3 protocol to define clients and servers that remain backward compatible with the existing installed base of NFS version 2 implementations.
    Object info/File info/Directory info Length: the information length in octets.
NFSv4: ftp://ftp.rfc-editor.org/in-notes/rfc3010.txt
    - Improved access and good performance on the Internet.
    - Strong security, with negotiation built into the protocol.
    - Good cross-platform interoperability.
    - Designed for protocol extensions.
    The general file system model used for the NFS version 4 protocol is the same as in previous versions: the server file system is hierarchical, and the regular files within it are treated as opaque byte streams. In a slight departure, file and directory names are encoded in UTF-8 to handle the basics of internationalization.
    Tag Length: the length in bytes of the tag.
NLMv4: ftp://ftp.rfc-editor.org/in-notes/rfc1813.txt
    Cookie Length: the cookie length.
NSMv1:
    The Network Status Monitor (NSM) protocol is related to, but separate from, the Network Lock Manager (NLM) protocol. The NLM uses NSM (Network Status Monitor protocol, version 1) to recover from crashes of either the client or the server host; to do this, the NSM and NLM protocols on both the client and server hosts must cooperate.
    Name Length: the mon name or host name length.
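As a footnote to the NFSv3 improvements listed above, the "unsafe write" mechanism is visible directly in the protocol: each WRITE call carries a stable_how flag, and data written UNSTABLE must later be flushed with a COMMIT call (procedure 21 in NFSv3). A minimal sketch using the values RFC 1813 defines:

enum stable_how {
        UNSTABLE  = 0, // server may reply before data reaches stable storage
        DATA_SYNC = 1, // file data committed, metadata may lag behind
        FILE_SYNC = 2  // data and metadata committed: the only behavior NFSv2 allowed
};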