CN108173965A - Community-aware ICN caching method - Google Patents

Community-aware ICN caching method

Info

Publication number
CN108173965A
CN108173965A
Authority
CN
China
Prior art keywords
community
node
network
importance
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810058582.7A
Other languages
Chinese (zh)
Inventor
蔡君
刘燕
罗建桢
雷方元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Polytechnic Normal University
Original Assignee
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Polytechnic Normal University filed Critical Guangdong Polytechnic Normal University
Priority to CN201810058582.7A
Publication of CN108173965A

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/50 — Network services
    • H04L67/56 — Provisioning of proxy services
    • H04L67/568 — Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682 — Policies or rules for updating, deleting or replacing the stored data
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 — Arrangements for detecting or preventing errors in the information received
    • H04L1/004 — Arrangements for detecting or preventing errors by using forward error control
    • H04L1/0076 — Distributed coding, e.g. network coding, involving channel coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract


A community-aware ICN caching method belongs to the technical field of computers and relates to the storage, transmission and processing of digital information. As new applications continue to emerge, the way traffic is generated and delivered will change fundamentally, with the majority of traffic coming from user-driven content-acquisition applications. The invention is a community-aware ICN caching strategy (SCCNC): nodes with different node community importance adopt different caching decisions and cache replacement strategies, so that cached content is distributed more reasonably in time and space. Network coding is introduced into the caching decision and the cache replacement strategy to improve the cache hit rate and cache diversity without increasing cache space. The SCCNC strategy was validated by simulation under a variety of experimental conditions; the results show that, compared with three other caching strategies, it better improves network caching performance, including cache hit rate, hop reduction rate and average download time.

Description

Community-aware ICN caching method

Technical Field

The invention belongs to the technical field of computers, and in particular relates to a method for storing, transmitting and processing digital information.

Background

In recent years, with the continuous emergence of new applications, the way traffic is generated and transmitted is undergoing a fundamental change, with most traffic coming from user-driven content-acquisition applications. Content distribution and retrieval have become the main applications of the Internet, which confronts the current TCP/IP architecture, based on end-to-end communication, with unprecedented challenges. To meet the needs of users and applications and to enhance the mobility, security and scalability of the Internet architecture, researchers at home and abroad have proposed a series of new network architectures centered on information or content.

Content Centric Networking (CCN) is widely regarded as the most promising ICN architecture. In CCN there are two types of packets: Interest packets and Data packets. A user requests a piece of content by sending an Interest; any node that receives the Interest and holds the content can respond with a Data packet carrying it. Nodes along the way record the Interest's forwarding path, so that the Data packet can return to the user along the reverse of that path. When an intermediate node receives a Data packet, it caches the content locally in order to respond to subsequent requests.

To relieve the enormous pressure that rapidly growing traffic places on network bandwidth, ICN adds a caching function to every node in the network, bringing content closer to users and reducing network traffic. The caching strategy determines the spatiotemporal distribution of content in the network and affects its traffic behavior. The original caching strategy in ICN is LCE (Leave Copy Everywhere), in which every node caches every content item it receives, causing great cache redundancy. Researchers at home and abroad have therefore proposed a variety of caching mechanisms. Existing mechanisms suffer mainly from two problems: (1) cache placement is mostly decided from a global perspective, whereas the purpose of caching is to satisfy local user demand; (2) every node applies the same replacement policy, which homogenizes cached content. The present invention therefore proposes a community-aware caching and cache replacement strategy (SCCNC) that distributes content of different popularity more reasonably across the network.

In recent years many researchers have argued that introducing network coding into ICN can improve network performance. However, because of ICN's in-network caching, the same coded block may be cached by several nodes on a forwarding path. Identical or linearly dependent coded blocks may then be returned to the same user, leaving that user unable to decode. To avoid responding to the same user with identical or linearly dependent coded blocks, in prior schemes nodes on the path cache only original blocks; a received coded block can be used only once and cannot be cached to serve later requests.

Studies have shown that the Internet topology exhibits community structure: nodes within a community are relatively densely connected, while connections between communities are relatively sparse. Within a community, a node with high node community importance is easily reached both by nodes inside the community and by nodes outside it. Therefore, in SCCNC, an original content block is cached at the node with the highest node community importance in each community it traverses, while coded blocks are cached at nodes of lower community importance. This not only improves the network's caching efficiency and saves cache space, but also distributes content of different popularity more evenly and reasonably across the network. When cache space is exhausted, different nodes within each community apply different cache replacement policies, replacing cached content with a probability that depends on the node's community importance and the content's popularity, so that cached content is reasonably distributed over time. In addition, the invention proposes a replacement policy that encodes content instead of removing it, improving cache diversity and cache hit rate without enlarging node cache space.

Related Work

Caching has been widely applied in the Web, P2P and CDNs. However, traditional caching mechanisms are designed for specific network scenarios and cannot be transplanted directly into ICN. Adding caches at all network nodes is an important feature of ICN; the caching strategy and cache replacement strategy determine the spatiotemporal distribution of content and affect network performance. Most current research targets the caching strategy, which determines what to cache and where. The original ICN proposal adopts LCE (Leave Copy Everywhere): content is cached by every node on the forwarding path, causing great cache redundancy. Current caching strategies fall into two categories: explicit cooperative caching and implicit cooperative caching. Explicit cooperative strategies compute cache nodes from content access patterns, the cache network topology and the state of each cache. The literature describes a hash-based cooperative caching strategy (CINC), in which each node exchanges information with its neighbors within two hops and assigns labels to itself and its neighbors; when a node receives a Data packet, a hash function determines where the content is cached, reducing cache redundancy in the network.
Saino et al. proposed three hash-routing and caching mechanisms for ICN: symmetric, asymmetric and multicast hash routing. The main idea is to compute the cache node with a hash function and cache the content only on that node; when a user requests the content, the Interest is routed directly to that cache node rather than to the content server. Combining content-space partitioning with hash routing, Wang et al. proposed a cooperative in-network caching strategy that achieves efficient cooperation but incurs high communication overhead.

In implicit cooperative caching, each node independently decides whether to cache received content. In one line of work, each node on the forwarding path caches the content with a probability related to its position: the closer the node is to the user, the higher the probability, so that content is quickly pushed to the network edge. Chai et al. proposed a selective caching mechanism based on betweenness, caching content at the node with the largest betweenness on the Interest forwarding path, in order to raise the cache hit rate and reduce the number of replacements. Other work proposes a caching strategy based on chunk popularity, caching popular chunks in edge routers. WAVE is a popularity-based caching mechanism that adjusts the number of chunks each node caches according to content popularity; as the number of requests grows, the number of cached chunks grows exponentially.

Random network coding has been shown to be a bandwidth-saving technique well suited to multicast networks, but its application in ICN has been little studied. The pressing problem is how to guarantee that a user receives enough linearly independent coded blocks to decode the original content. Wang et al. proposed a network-coding and content-caching scheme that realizes ICN over a software-defined network: the SDN controller, acting as a central controller, uses per-switch request statistics to decide how best to cache requested chunks close to users so as to minimize total data transmission. That work considers how to optimally cache and transmit requested chunks and coded blocks network-wide, given the ICN topology, cache capacities and link capacities; however, it does not specify how to realize the scheme in an ICN network, discussing only theoretically how to optimize and how to reduce the required computation. Wu et al. proposed CodingCache, which improves CCN caching efficiency through network coding and random forwarding. CodingCache retrieves coded blocks one by one, because each request must carry the coefficient vectors of the blocks already obtained so that the responding node can supply the next linearly independent block. Its limitation is that a user needs N round-trip delays to obtain N coded blocks, and the Interests of different users cannot be merged.

To distribute content of different popularity reasonably across the network, the present invention proposes a caching and cache replacement strategy based on node community importance (SCCNC), which uses network coding to save cache space and improve caching efficiency. In SCCNC, taking the community as the unit, nodes of high community importance cache original blocks while other nodes cache coded blocks. For replacement, different nodes adopt different cache replacement policies so that content is more reasonably distributed in time and space.

Summary of the Invention

Technical scheme of the present invention:

1. Definition of node community importance

The invention determines the cache position of content in an ICN network from a local measure of node importance: the node community importance. The eigenspectrum of the network's adjacency matrix clearly reflects the number of communities: for a network composed of c communities, the adjacency matrix has c eigenvalues much larger than the others, and these eigenvalues serve as a quantitative indicator of community structure. The network community strength P is therefore defined in terms of λ1, λ2, …, λc, the largest c eigenvalues of the adjacency matrix in descending order. When node k leaves the network, the community structure and the eigenvalues of the adjacency matrix both change, the strength becoming P′; the importance of node k to the network's community structure is thus Pk = P − P′. Using perturbation theory, an approximate solution for the node community importance is obtained, as in formula (1).

Here c is the number of communities in the network, vi is the i-th eigenvector of the adjacency matrix built with the network's routers as nodes and the physical links between routers as edges, and vik is the k-th element of eigenvector vi. The larger Pk is, the more important node k is within its community, i.e. the more easily other nodes inside and outside the community can reach it. For a network of n nodes and c communities, the values Pk sum to c; to make the measure sum to 1, define

Ik = Pk/c, which satisfies Σk Ik = 1. Before applying I, the number of communities c must be known; the invention determines it directly from the spectral properties of the network. Given c, the method needs no explicit community partition, avoiding the computational cost of community detection, and directly describes each node's importance to its community. By formula (1), computing every node's community importance only requires all eigenvalues and eigenvectors of the adjacency matrix describing the node connections. Most real networks are sparse; using the Lanczos and QL algorithms, all eigenvalues and eigenvectors of a sparse symmetric matrix can be found in O(nm) time, where n and m are the numbers of nodes and edges. The computational cost is therefore low enough for practical use.
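Formula (1) itself is not reproduced above, so the sketch below substitutes a perturbation-style approximation for Pk (weighting the squared eigenvector entries by their eigenvalues over the top-c eigenpairs); the function name and the exact normalization are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def community_importance(adj, c):
    """Approximate node community importance I_k for a router graph.

    adj : symmetric 0/1 adjacency matrix of the router topology
    c   : number of communities (assumed known, as in the text)

    Stand-in for formula (1): P_k ~ sum_i lambda_i * v_ik^2 over the
    top-c eigenpairs, then normalized so the importances sum to 1.
    """
    lams, vecs = np.linalg.eigh(adj)       # eigen-decomposition (symmetric)
    order = np.argsort(lams)[::-1]         # eigenvalues in descending order
    lam_c = lams[order[:c]]                # top-c eigenvalues
    v_c = vecs[:, order[:c]]               # corresponding eigenvectors
    P = (v_c ** 2) @ lam_c                 # each node's assumed contribution
    return P / P.sum()                     # normalize so the sum is 1
```

On a toy two-community graph (two triangles joined by a bridge), the bridge endpoints, which have the highest degree inside their communities, come out as the most important nodes, matching the intuition in the text.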

2. Interest and Data packet forwarding mechanism

In SCCNC, an Interest records the maximum node community importance of each community on its forwarding path, i.e. {I1max, I2max, …, Iimax}, where Iimax is the maximum node importance in the i-th community on the path. When the Data returns to the user along the Interest's forwarding path, each intermediate node compares its own importance Iij with the maximum importance Iimax of its community carried by the Data, and makes the corresponding caching decision. The invention also designs an Interest merging mechanism that combines multiple Interests received by a node, in order to reduce the communication overhead of Interest and Data packets. When node Nj receives an Interest, it compares its own importance Iij with the current community's maximum Iimax carried in the Interest; if Iij > Iimax, it sets Iimax = Iij. When an Interest is forwarded into a new community i, the first node Ni1 it meets (denoted FN) records the maximum node importance of the downstream community, I(i−1)max. In this way the Interest only needs to carry the maximum node importance of the current community, reducing its overhead.
As shown in Figure 1, when node N21 in community 2 receives Interest(p, p1, I4), it replaces the maximum node community importance in the Interest, I4, with its own node community importance I21, forwards the new Interest, Interest(p, p1, I21), to the upstream node, and creates a new PIT entry recording Interest(p, p1, I4).

When node Nj receives an Interest on interface k, it first checks its PIT. Each PIT entry is a quintuple, as shown in Table 1:

<ContentName, ChunkID, Faces, Iimax, I(i−1)max>

Here "ContentName" is the content name, "ChunkID" the name of the chunk, "Faces" the number of the interface on which the Interest arrived, "Iimax" the maximum node importance of the current community ci on the Interest's forwarding path, and "I(i−1)max" the maximum node importance of the downstream community ci−1; only the FN of the current community ci records I(i−1)max. If a matching entry already exists in the PIT, the newly arrived Interest is merged into it and then discarded; otherwise a new PIT entry is created. Algorithm 1 describes the Interest forwarding process.

Algorithm 1: Interest forwarding process

Table 1: Extended PIT table
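The body of Algorithm 1 is not reproduced in the text, so the following Python sketch illustrates the Interest-handling steps described in prose: PIT lookup, merge-and-discard for duplicates, and the running Iimax update. The class and field names are illustrative assumptions, not from the patent.

```python
class Node:
    """Minimal sketch of a CCN node handling Interests per Algorithm 1."""

    def __init__(self, importance):
        self.I = importance     # this node's community importance I_ij
        self.pit = {}           # (content, chunk) -> extended PIT entry

    def on_interest(self, content, chunk, face, i_max, i_prev_max=None):
        key = (content, chunk)
        if key in self.pit:
            # PIT hit: merge the new Interest (record its face), drop it
            self.pit[key]["faces"].add(face)
            return None                      # nothing is forwarded
        self.pit[key] = {                    # extended PIT entry (Table 1)
            "faces": {face},
            "Iimax": i_max,
            "I(i-1)max": i_prev_max,         # set only by a community's FN
        }
        # carry the running maximum of community importance upstream
        return (content, chunk, max(i_max, self.I))
```

For example, a node with importance 0.3 receiving Interest(p, p1, 0.2) forwards Interest(p, p1, 0.3); a second Interest for the same chunk is merged into the existing PIT entry and discarded.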

In SCCNC, a Data packet carries the node community importance information Iimax extracted from the Interest or from the PIT, and returns to the user along the Interest's forwarding path. On receiving a Data packet, an intermediate node compares its own node community importance Iik with the maximum Iimax carried in the packet and applies the corresponding caching policy according to the result. Pseudocode for Data forwarding is given in Algorithm 2.

Algorithm 2: Data forwarding process

3. Caching mechanism based on network coding

Taking the community as the unit, different caching policies are applied within a community according to each node's community importance along the Interest forwarding path: the node with the highest importance caches the original chunk, because highly important nodes are more easily reached by other nodes; nodes of low importance cache coded blocks, saving cache space and increasing cache diversity. When node Nj receives a Data packet carrying an original chunk D of content p, it compares its own importance Iij with the current community's maximum Iimax carried in the Data; if Iij = Iimax, it stores the chunk in its local cache. Otherwise, it checks whether its local Content Store holds a block D′ of content p; if so, it applies random network coding to D and D′ to produce a new coded block D″ and replaces D′ with D″. By applying network coding to the cache, one coded block carries the information of several chunks and can serve requests for several chunks. This mechanism distributes caches reasonably across the network, reduces latency and improves transmission efficiency. Pseudocode for the caching mechanism is given in Algorithm 3.

Algorithm 3: SCCNC caching strategy

When a Data(Iimax) arrives
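Algorithm 3 is given only in prose above, so the following sketch illustrates the caching decision for an arriving original block. The finite field GF(257) and the block representation (lists of symbols) are illustrative choices, not from the patent.

```python
import random

P = 257  # small prime field for the random linear combinations (illustrative)

def random_combine(blocks):
    """Random linear combination of equal-length blocks over GF(P)."""
    coeffs = [random.randrange(1, P) for _ in blocks]   # nonzero coefficients
    return [sum(c * b[i] for c, b in zip(coeffs, blocks)) % P
            for i in range(len(blocks[0]))]

def on_data(node_importance, i_max, cache, content, block):
    """Caching decision for an arriving original block of `content`."""
    if node_importance == i_max:
        cache[content] = block          # most important node keeps the original
    elif content in cache:
        # lower-importance node: fold the new block into the cached one, so a
        # single stored block carries information about several chunks
        cache[content] = random_combine([cache[content], block])
    return cache
```

A node whose importance equals Iimax caches D unchanged; any other node that already holds D′ replaces it with a random combination D″ of D and D′, as the text describes.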

4. Cache replacement strategy based on network coding

In SCCNC, within each community, nodes execute different cache replacement policies according to their node community importance. At nodes of high community importance, cached content of low popularity is replaced with high probability; at nodes of low importance, cached content of high popularity is replaced with high probability. This distributes cached content reasonably in both time and space.

Suppose the node community importance of node Nj in community i is Iij, and consider also the average node importance within community i. When the cache is exhausted, the cached contents are first ordered by LRU (Least Recently Used) to build a cache sequence; the probability that the k-th content is removed is given by formula (2).

Here α is a normalization factor satisfying formula (3), and β is a probability adjustment coefficient.

When a replacement occurs, suppose content p is to be removed and n of its original blocks are cached. Random network coding is applied to the n original blocks to generate a single coded block; that coded block is cached and the n original blocks are removed. The benefit is that the cache space of n − 1 blocks is released while information about all n blocks is retained to serve subsequent requests.
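The "encode instead of remove" step can be sketched as follows. Since formula (2) for the removal probability is not reproduced above, the victim content is taken as given (e.g. chosen from the LRU sequence); the GF(257) field and the data layout are illustrative assumptions.

```python
import random

P = 257  # small prime field (illustrative choice)

def evict_by_encoding(cache, victim):
    """Replace the n original blocks of `victim` with one random linear
    combination over GF(P), freeing n-1 slots while retaining information
    about all n blocks."""
    blocks = cache.pop(victim)                     # the n original blocks
    coeffs = [random.randrange(1, P) for _ in blocks]
    coded = [sum(c * b[i] for c, b in zip(coeffs, blocks)) % P
             for i in range(len(blocks[0]))]
    cache[victim] = [coded]                        # one coded block remains
    return len(blocks) - 1                         # number of slots released

# three cached original blocks of content p (toy unit-vector blocks)
cache = {"p": [[1, 0, 0], [0, 1, 0], [0, 0, 1]]}
freed = evict_by_encoding(cache, "p")
```

With n = 3 blocks, two cache slots are released and the single remaining coded block still mixes in every original block's symbols.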

Brief Description of the Drawings

Fig. 1 is an example structure diagram of the SCCNC of the present invention;

Fig. 2 is a graph of average download time for the present invention;

Fig. 3 is a graph of cache hit rate for the present invention;

Fig. 4 is a graph of server hit reduction rate for the present invention;

Fig. 5 is a graph of hop reduction rate for the present invention;

Fig. 6 is a graph of transmission traffic for the present invention.

Detailed Description

Simulation Experiments and Analysis

The network topology in the simulation experiments comes from BRITE [25][26]; the network contains 100 nodes, with an average node degree of 4 and link bandwidth of 1 Gbps. BRITE was used to generate several network topologies, and repeated simulation runs produced similar results. The network uses the Dijkstra algorithm for routing. 4000 content items of size 1 GB are stored at random on 10 content servers; each item is divided into 100 chunks, with every 10 chunks forming one generation. Intra-generation random linear network coding is used, so encoding and decoding occur only among chunks of the same generation. Content popularity follows a Zipf distribution with α ∈ {0.7, 1, 1.5, 2}, and user requests for content arrive according to a Poisson process. SCCNC is compared with three mechanisms: NC-CCN [16], CodingCache (CC) [17] and Leave Copy Down (LCD) [27].

The performance of the four mechanisms is compared along the following dimensions:

● Average download time: the average time per user from sending the first Interest until that user receives the last chunk.

● Cache hit rate: an important measure of cache performance, defined as the probability that an Interest is answered by a cache rather than by a content server. The higher the cache hit rate, the higher the network's caching efficiency.

● Server hit reduction rate γ(t): let ω_i(t) = 0 if the request for chunk i is answered by a cache and ω_i(t) = 1 otherwise, and let N(t) be the total number of chunks received by users in the time interval [t−Δ, t]; then γ(t) = 1 − (1/N(t)) Σ_{i=1}^{N(t)} ω_i(t), i.e., the fraction of requests absorbed by caches instead of the content servers.

● Download hop reduction rate β(t): β(t) = 1 − Σ_i h_i(t) / Σ_i H_i(t), where h_i(t) is the number of hops chunk i actually travels from the cache-hit node to the requester and H_i(t) is the number of hops from the content server to the requester; if the request for chunk i is answered by the content server, then h_i(t) = H_i(t).

● Transmission traffic: the total volume of Data packets transmitted by the network over the whole process, from the first user sending an Interest to the last user receiving the last chunk.
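The exact γ(t) and β(t) formulas are not rendered in this text; a minimal sketch, assuming the natural definitions γ = 1 − Σω_i/N and β = 1 − Σh_i/ΣH_i implied by the descriptions above, could compute both metrics from a request log like this (field names are illustrative):

```python
def evaluation_metrics(requests):
    """Each request: dict with served_by_cache (bool), hops (h_i), server_hops (H_i).
    Returns (server hit reduction rate gamma, download hop reduction rate beta)."""
    n = len(requests)
    server_hits = sum(0 if r["served_by_cache"] else 1 for r in requests)  # omega_i
    gamma = 1.0 - server_hits / n        # fraction of requests absorbed by caches
    h = sum(r["hops"] for r in requests)
    H = sum(r["server_hops"] for r in requests)
    beta = 1.0 - h / H                   # fraction of download hops saved
    return gamma, beta

reqs = [
    {"served_by_cache": True,  "hops": 2, "server_hops": 5},
    {"served_by_cache": False, "hops": 5, "server_hops": 5},  # miss: h_i = H_i
]
gamma, beta = evaluation_metrics(reqs)
```

A miss contributes h_i = H_i, so a network with no cache hits yields γ = 0 and β = 0, matching the intent of both definitions.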

The experimental results show that, compared with the other three mechanisms, SCCNC reduces content download time and the volume of data transmitted over the network, improves the cache hit rate, and lowers the server load.

Figure 3 shows the average download time of the four caching schemes under different parameters. The average download time of all four mechanisms decreases as the Zipf parameter α increases, as shown in Fig. 3(a). This is because a larger α means more concentrated user preferences: requests focus on a small fraction of the content, so that content is cached more and more widely in the network and ends up closer to the users. Likewise, the more requests users issue, the more content is cached in the network, so the average download time of all four mechanisms decreases as the number of user requests grows, as shown in Fig. 3(b). SCCNC maintains a clear advantage throughout, and this advantage is especially pronounced for small α and few user requests. The reason is that in SCCNC original chunks are cached at nodes with high node community importance, which are more easily reached by other nodes.

In SCCNC, nodes with low node community importance cache coded blocks. One coded block can stand in for several chunks and serve requests from different users for different chunks; for example, the coded block cb = A ⊕ B can simultaneously satisfy one user's request for chunk A and another's for chunk B. SCCNC therefore achieves a higher cache hit rate (Fig. 4), a greater reduction in server load (Fig. 5), and lower transmission traffic (Fig. 7).
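The cb = A ⊕ B example above can be sketched directly: a single cached XOR block serves two different requests, provided each requester already holds the other chunk.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Coded block cb = A xor B over GF(2): one cached block encoding two chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

A = b"\x01\x02\x03\x04"
B = b"\xff\x00\xff\x00"
cb = xor_blocks(A, B)            # the single block kept in the cache
recovered_B = xor_blocks(cb, A)  # a user already holding A decodes B
recovered_A = xor_blocks(cb, B)  # a user already holding B decodes A
```

XOR is the GF(2) special case of the random linear network coding used by SCCNC; the general scheme draws random coefficients over a larger field but exploits the same one-block-many-chunks effect.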

As shown in Fig. 6, SCCNC also outperforms the other caching schemes in hop reduction rate. The reason is that each node in SCCNC makes its own cache replacement decision according to its node community importance: when the cache is exhausted, encoding is used instead of eviction, freeing cache space while retaining the information of several chunks. The cache replacement strategy based on node community importance and network coding proposed by the present invention distributes content of different popularity more reasonably in both time and space.

Conclusion

The present invention proposes a community-aware ICN caching strategy (SCCNC), in which nodes with different node community importance adopt different caching decisions and cache replacement strategies, making the temporal and spatial distribution of cached content more reasonable. Network coding is introduced into both the caching decision and the cache replacement strategy, improving the cache hit rate and cache diversity without increasing cache space. The SCCNC strategy has been validated by simulation under a variety of experimental conditions; the results show that, compared with the other three caching strategies, it delivers better network caching performance in terms of cache hit rate, hop reduction rate, average download time, and related metrics.

Claims (4)

1. A community-aware ICN caching method, characterized in that the cache location of content in an ICN network is determined from node community importance. The eigenvalue spectrum of the network adjacency matrix clearly reflects the number of communities in the network: for a network composed of c communities, the adjacency matrix has c eigenvalues much larger than the remaining eigenvalues, and these eigenvalues can serve as a quantitative indicator of the network's community structure. The network community strength is therefore defined as P = Σ_{i=1}^{c} λ_i, where λ_1, λ_2, …, λ_c are the first c eigenvalues of the adjacency matrix arranged in descending order. When a node k leaves the network, the community structure and the adjacency matrix eigenvalues change accordingly, giving a new strength P′ = Σ_{i=1}^{c} λ′_i, so the importance of node k to the network community structure is P_k = P − P′. Using perturbation theory, an approximate solution for the node community importance is obtained as formula (1): P_k ≈ Σ_{i=1}^{c} v_ik².
Here c is the number of communities in the network; v_i is the i-th eigenvector of the adjacency matrix built with the routers of the network as nodes and the physical links between routers as edges; and v_ik is the k-th element of v_i. The larger P_k is, the more important node k is within its community, i.e., the more easily it is reached by other nodes inside and outside the community. For a network of n nodes and c communities, Σ_{k=1}^{n} P_k = c; to make the measure sum to 1, define I_k = P_k / c, which satisfies Σ_{k=1}^{n} I_k = 1. Before applying I, the number of communities c must be known in advance; it is determined directly from the spectral characteristics of the network. Given c, this method describes the importance of a node to its community without performing an explicit community division, avoiding the computational cost of complex community-detection algorithms. Computing the community importance of every node by formula (1) requires all eigenvalues and eigenvectors of the adjacency matrix representing the node connectivity; since most real-world networks are sparse, the Lanczos and QL algorithms compute all eigenvalues and eigenvectors of a sparse symmetric matrix in O(nm) time, where n and m are the numbers of nodes and edges of the network respectively, so the computational cost is low enough for practical application.
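Formula (1) is not rendered in this text; a minimal sketch, assuming the reconstruction P_k ≈ Σ_{i=1}^{c} v_ik² (which is consistent with the stated normalization Σ_k I_k = 1 when the eigenvectors have unit norm), could compute the node community importance from the adjacency matrix like this:

```python
import numpy as np

def community_importance(adj, c):
    """I_k = P_k / c with P_k = sum over the c leading eigenvectors of v_ik**2
    (formula (1) as reconstructed here; an assumption, not quoted from the patent)."""
    # the adjacency matrix is symmetric, so eigh returns real eigenpairs
    vals, vecs = np.linalg.eigh(adj)
    order = np.argsort(vals)[::-1]   # eigenvalues in descending order
    top = vecs[:, order[:c]]         # unit eigenvectors v_1 ... v_c as columns
    P = (top ** 2).sum(axis=1)       # P_k = sum_i v_ik**2
    return P / c                     # I_k; sums to 1 over all nodes

# toy topology: two triangles joined by one bridge edge, i.e. c = 2 communities
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
I = community_importance(A, c=2)
```

For large sparse networks one would swap `eigh` for a sparse Lanczos-based solver (e.g. `scipy.sparse.linalg.eigsh`), matching the O(nm) complexity claim above.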
2. The community-aware ICN caching method according to claim 1, characterized by the forwarding mechanism of Interest and Data packets. In SCCNC, an Interest records the maximum node community importance of each community along its forwarding path, i.e., {I_1max, I_2max, …, I_imax}, where I_imax is the maximum node importance in the i-th community on the Interest forwarding path. When a Data packet returns to the user along the reverse Interest path, each intermediate node compares its own node importance I_ij with the community maximum I_imax carried in the Data packet and applies the corresponding caching scheme. The present invention further designs an Interest merging mechanism that merges multiple Interests received at a node, in order to reduce the communication overhead of Interest and Data packets. When a node N_j receives an Interest, it compares its own node importance I_ij with the current community's maximum I_imax carried in the Interest; if I_ij > I_imax, it sets I_imax = I_ij. When the Interest is forwarded into a new community i, the first node it encounters, N_i1 (denoted FN), records the maximum node importance of the preceding community, I_(i−1)max; in this way an Interest only needs to carry the maximum of the current community. For example, when node N_21 in community 2 receives Interest(p, p_1, I_4), it replaces the importance maximum in the Interest with its own node community importance I_21, forwards the new Interest(p, p_1, I_21) upstream, and creates a PIT entry recording Interest(p, p_1, I_4);
When a node N_j receives an Interest from interface k, it first checks its PIT table. Each PIT entry is a five-tuple, as shown in Table 1:
<ContentName,ChunkID,Faces,Iimax,I(i-1)max>
Here "ContentName" is the content name, "ChunkID" is the name of the chunk, "Faces" is the set of interfaces on which the Interest was received, "I_imax" is the maximum node importance of the current community c_i on the Interest forwarding path, and "I_(i−1)max" is the maximum node importance of the preceding community c_(i−1) on the path; only the FN of the current community c_i records I_(i−1)max. If a matching entry already exists in the PIT, the newly arrived Interest is merged into it and the Interest is discarded; otherwise a new PIT entry is created. Algorithm 1 describes the Interest forwarding process.
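The PIT handling of claim 2 can be sketched as follows; the dictionary-based PIT and the return strings are illustrative simplifications, not the patent's Algorithm 1:

```python
def on_interest(pit, name, chunk, face, I_imax, I_prev_max=None):
    """Sketch of claim-2 PIT handling: a PIT entry is the five-tuple
    <ContentName, ChunkID, Faces, I_imax, I_(i-1)max>; an Interest matching an
    existing entry is merged (its face recorded, the packet itself dropped)."""
    key = (name, chunk)
    if key in pit:
        pit[key]["Faces"].add(face)  # merge the duplicate request
        return "merged"              # the Interest is discarded, not forwarded
    pit[key] = {"Faces": {face}, "Iimax": I_imax, "I(i-1)max": I_prev_max}
    return "forward"                 # new entry: forward the Interest upstream

pit = {}
first = on_interest(pit, "p", "p1", face=1, I_imax=0.4)
second = on_interest(pit, "p", "p1", face=2, I_imax=0.4)  # same chunk, new face
```

Aggregating duplicate Interests this way is standard NDN PIT behavior; the SCCNC-specific part is the two importance fields carried along with the entry.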
3. The community-aware ICN caching method according to claim 1, characterized by the network-coding-based caching mechanism. Taking a community as the unit, different caching policies are applied within the same community according to the community importance of each node on the Interest forwarding path: the node with the highest importance caches original chunks, because high-importance nodes are more easily reached by other nodes, while low-importance nodes cache coded blocks, saving cache space and improving cache diversity. When a node N_j receives a Data packet carrying an original chunk D of content p, it compares its own node importance I_ij with the current community's maximum I_imax carried in the Data packet. If I_ij = I_imax, it stores the chunk in its local cache; otherwise it checks whether the local content store (CS) holds a chunk D′ of content p, and if so applies random network coding to D and D′ to generate a new coded block D″ and replaces D′ with D″. Applying network coding to caching lets one coded block carry the information of multiple chunks and answer requests for multiple chunks; the mechanism achieves a reasonable distribution of cached content across the network, reducing network delay and improving the transmission efficiency of the network.
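The claim-3 decision on Data arrival can be sketched as below; coding is shown over GF(2) (XOR) as the simplest instance of random linear coding, and the cache layout is an illustrative assumption:

```python
def on_data(I_ij, I_imax, cache, content, block):
    """Sketch of the claim-3 caching decision. cache maps content name -> cached block."""
    if I_ij == I_imax:
        cache[content] = block     # most important node on the path keeps the original
    elif content in cache:
        D_prime = cache[content]   # existing cached block D' of the same content
        # D'' = D xor D': replace D' with a coded block covering both chunks
        cache[content] = bytes(x ^ y for x, y in zip(D_prime, block))

cache = {}
on_data(0.9, 0.9, cache, "p", b"\x01\x0f")  # I_ij == I_imax: cache the original chunk
on_data(0.2, 0.9, cache, "p", b"\x03\x0a")  # lower importance: fold into a coded block
```

The second call leaves a single coded block in the cache; no extra cache space is consumed, yet information about both chunks is retained, which is the cache-diversity effect the claim describes.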
4. The community-aware ICN caching method according to claim 1, characterized by the network-coding-based cache replacement policy. In SCCNC, taking a community as the unit, nodes in the same community execute different replacement policies according to their node community importance: at a node with high community importance, low-popularity cached content is more likely to be replaced, whereas at a node with low community importance, high-popularity cached content is more likely to be replaced; in this way a reasonable temporal and spatial layout of cached content is achieved. Suppose node N_j of community i has node community importance I_ij, and the average node importance in community i is Ī_i. When the cache is exhausted, the cached contents are first ordered by LRU (Least Recently Used) to build a cache sequence, and the probability that the k-th content is removed is given by formula (2):
In formula (2), α is a normalization factor satisfying formula (3), and β is a probability adjustment factor.
When a cache replacement occurs, suppose content p is the content to be removed; if it is cached as n original blocks, random network coding is applied to the n original blocks to generate a single coded block, the coded block is cached, and the n original blocks are removed. The advantage of this approach is that the cache space of n − 1 chunks is released while the information of all n chunks is retained to answer subsequent requests.
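The encode-instead-of-evict step of claim 4 can be sketched with XOR as the GF(2) case of random linear coding (the general scheme would use random nonzero coefficients over a larger field):

```python
from functools import reduce

def encode_and_evict(blocks):
    """Sketch of claim-4 replacement: combine the n cached original blocks into one
    coded block, freeing n-1 cache slots while retaining information on all n blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

originals = [b"\x01\x00", b"\x02\x00", b"\x04\x01"]
coded = encode_and_evict(originals)  # single block kept in place of all three
# any one original is decodable once the other n-1 are obtained elsewhere:
third = encode_and_evict([coded, originals[0], originals[1]])
```

With n = 3 blocks, two cache slots are freed, and a requester that later gathers any two originals from other caches can decode the third from the coded block.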
CN201810058582.7A 2018-01-22 2018-01-22 Community-aware ICN caching method Pending CN108173965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810058582.7A CN108173965A (en) 2018-01-22 2018-01-22 Community-aware ICN caching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810058582.7A CN108173965A (en) 2018-01-22 2018-01-22 Community-aware ICN caching method

Publications (1)

Publication Number Publication Date
CN108173965A true CN108173965A (en) 2018-06-15

Family

ID=62515032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810058582.7A Pending CN108173965A (en) 2018-01-22 2018-01-22 Community-aware ICN caching method

Country Status (1)

Country Link
CN (1) CN108173965A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400299A (en) * 2013-07-02 2013-11-20 西安交通大学 Method for detecting network overlapped communities based on overlapped point identification
CN104113545A (en) * 2014-07-21 2014-10-22 北京大学深圳研究生院 Streaming media system under information center network and application method thereof
CN104821961A (en) * 2015-04-16 2015-08-05 广东技术师范学院 ICN cache strategy based on node community importance
CN105391515A (en) * 2014-08-27 2016-03-09 帕洛阿尔托研究中心公司 Network coding for content-centric network
CN105812462A (en) * 2016-03-09 2016-07-27 广东技术师范学院 SDN (software-defined networking)-based ICN (information-centric networking) routing method
CN106537880A (en) * 2014-07-13 2017-03-22 思科技术公司 Caching data in information centric networking architecture
US20170093710A1 (en) * 2015-09-29 2017-03-30 Palo Alto Research Center Incorporated System and method for stateless information-centric networking
CN106790421A (en) * 2016-12-01 2017-05-31 广东技术师范学院 A kind of step caching methods of ICN bis- based on corporations
CN106789261A (en) * 2016-12-26 2017-05-31 广东技术师范学院 A kind of local content popularity of information centre's network is dynamically determined method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
蔡君: "基于节点社团重要度的ICN 缓存策略", 《通信学报》 *
蔡君等: "一种基于节点社团重要度的信息中心网络缓存决策策略", 《小型微型计算机系统》 *
雷方元等: "一种基于SDN的ICN高效缓存机制", 《计算机科学》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347662A (en) * 2018-09-28 2019-02-15 西安交通大学深圳研究院 The quick digging system of distributed social network structure towards large-scale network traffic
CN109347662B (en) * 2018-09-28 2019-08-13 西安交通大学深圳研究院 The quick digging system of distributed social network structure towards large-scale network traffic
CN110245095A (en) * 2019-06-20 2019-09-17 华中科技大学 A method and system for optimizing solid-state disk cache based on data block graph
CN114866610A (en) * 2022-05-23 2022-08-05 电子科技大学 A CCN-based satellite-terrestrial network caching method

Similar Documents

Publication Publication Date Title
CN103001870B (en) A kind of content center network works in coordination with caching method and system
Banerjee et al. Greedy caching: An optimized content placement strategy for information-centric networks
Posch et al. SAF: Stochastic adaptive forwarding in named data networking
CN104753797B (en) A kind of content center network dynamic routing method based on selectivity caching
Li et al. A chunk caching location and searching scheme in content centric networking
Nour et al. A distributed cache placement scheme for large-scale information-centric networking
CN104821961B (en) A kind of ICN caching methods based on node corporations importance
Wu et al. Design and evaluation of probabilistic caching in information-centric networking
Wang et al. Hop-based probabilistic caching for information-centric networks
CN105656788B (en) CCN Content Caching Method Based on Popularity Statistics
Janaszka et al. On popularity-based load balancing in content networks
Xu et al. A dominating-set-based collaborative caching with request routing in content centric networking
Reshadinezhad et al. An efficient adaptive cache management scheme for named data networks
CN108173965A (en) Community-aware ICN caching method
Wu et al. MBP: A max-benefit probability-based caching strategy in information-centric networking
Zhang et al. Combing CCN with network coding: An architectural perspective
Majeed et al. Pre-caching: A proactive scheme for caching video traffic in named data mesh networks
Chaudhary et al. eNCache: Improving content delivery with cooperative caching in Named Data Networking
Serhane et al. CnS: A cache and split scheme for 5G-enabled ICN networks
Aloulou et al. A popularity-driven controller-based routing and cooperative caching for named data networks
Banerjee et al. Greedy caching: A latency-aware caching strategy for information-centric networks
Kumar et al. Cpndd: Content placement approach in content centric networking
Man et al. An adaptive cache management approach in ICN with pre-filter queues
Le et al. The performance of caching strategies in content centric networking
Zhu et al. A Popularity-Based Collaborative Caching Algorithm for Content-Centric Networking.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180615
