
This article has been authorized by Zhao Jigang to be published by NetEase Cloud Community.


Dubbo provides three result caching mechanisms:

  • lru: evicts entries on a least-recently-used basis, so the hottest data stays cached

  • threadlocal: caches results in the current thread

  • jcache: a bridge that can plug in various cache implementations (the JCache / JSR-107 API)

1. Usage

    <dubbo:reference id="demoService" check="false" interface="">
        <dubbo:method name="sayHello" timeout="60000" cache="lru"/>
    </dubbo:reference>

Add cache configuration.

Note: There is a bug in Dubbo's result cache. When cache="xxx" is configured at the service level everything works as expected, but when it is configured at the method level, LruCache is used no matter which cache type you configure.


2. LRU Cache Source Analysis

/**
 * CacheFilter
 * CacheFilter is loaded only when a cache configuration is present
 */
@Activate(group = {Constants.CONSUMER, Constants.PROVIDER}, value = Constants.CACHE_KEY)
public class CacheFilter implements Filter {
    private CacheFactory cacheFactory;

    public void setCacheFactory(CacheFactory cacheFactory) {
        this.cacheFactory = cacheFactory;
    }

    public Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException {
        if (cacheFactory != null && ConfigUtils.isNotEmpty(invoker.getUrl().getMethodParameter(invocation.getMethodName(), Constants.CACHE_KEY))) {
            // Use CacheFactory$Adaptive to get the concrete CacheFactory,
            // then use that CacheFactory to get the concrete Cache object
            Cache cache = cacheFactory.getCache(invoker.getUrl().addParameter(Constants.METHOD_KEY, invocation.getMethodName()));
            if (cache != null) {
                // The key of the cached object is "arg1,arg2,arg3,..."
                String key = StringUtils.toArgumentString(invocation.getArguments());
                // Look up the cached value
                Object value = cache.get(key);
                if (value != null) {
                    return new RpcResult(value);
                }
                Result result = invoker.invoke(invocation);
                // If the response carries no exception, put the result value into the cache
                if (!result.hasException()) {
                    cache.put(key, result.getValue());
                }
                return result;
            }
        }
        return invoker.invoke(invocation);
    }
}

From @Activate(group = {Constants.CONSUMER, Constants.PROVIDER}, value = Constants.CACHE_KEY) we can see that as long as cache="xxx" is configured on either the consumer or the provider side, the CacheFilter is activated.
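The filter's check-the-cache-then-invoke flow can be sketched without any Dubbo dependencies. This is a minimal stand-in, not Dubbo's actual classes: the Function target and the toKey helper are illustrative, with toKey mimicking StringUtils.toArgumentString by joining the arguments with commas.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical stand-in for CacheFilter's core logic: check the cache first,
// invoke the real target only on a miss, and cache the successful result.
public class CachingInvoker {
    private final Map<String, Object> cache = new HashMap<>();
    private int invocationCount = 0;

    // Mimics StringUtils.toArgumentString: join the arguments into one key
    public static String toKey(Object... args) {
        StringBuilder sb = new StringBuilder();
        for (Object a : args) {
            if (sb.length() > 0) sb.append(',');
            sb.append(a);
        }
        return sb.toString();
    }

    public Object invoke(Function<Object[], Object> target, Object... args) {
        String key = toKey(args);
        Object value = cache.get(key);
        if (value != null) {
            return value;             // cache hit: skip the real invocation
        }
        Object result = target.apply(args);
        invocationCount++;
        cache.put(key, result);       // cache the result of the real invocation
        return result;
    }

    public int getInvocationCount() { return invocationCount; }

    public static void main(String[] args) {
        CachingInvoker invoker = new CachingInvoker();
        Function<Object[], Object> sayHello = a -> "Hello " + a[0];
        System.out.println(invoker.invoke(sayHello, "dubbo")); // first call: real invocation
        System.out.println(invoker.invoke(sayHello, "dubbo")); // second call: served from cache
        System.out.println("invocations: " + invoker.getInvocationCount()); // 1
    }
}
```

Note that one consequence of keying only on the arguments is that two calls with equal arguments always share a cache entry, which is exactly why the key also had to include the method name in the URL that CacheFilter builds.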

First, obtain the concrete Cache instance. The cacheFactory field in CacheFilter is a CacheFactory$Adaptive instance.

public class CacheFactory$Adaptive implements com.alibaba.dubbo.cache.CacheFactory {
    public com.alibaba.dubbo.cache.Cache getCache(com.alibaba.dubbo.common.URL arg0) {
        if (arg0 == null) throw new IllegalArgumentException("url == null");
        com.alibaba.dubbo.common.URL url = arg0;
        String extName = url.getParameter("cache", "lru");
        if (extName == null)
            throw new IllegalStateException("Fail to get extension(com.alibaba.dubbo.cache.CacheFactory) name from url(" + url.toString() + ") use keys([cache])");
        // Get the concrete CacheFactory
        com.alibaba.dubbo.cache.CacheFactory extension = (com.alibaba.dubbo.cache.CacheFactory) ExtensionLoader.getExtensionLoader(com.alibaba.dubbo.cache.CacheFactory.class).getExtension(extName);
        // Use the concrete CacheFactory to get the concrete Cache
        return extension.getCache(arg0);
    }
}

Here extName is the value we configured ("lru"), and it also defaults to "lru" when nothing is configured. The concrete CacheFactory obtained here is LruCacheFactory.
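The method-level bug mentioned at the beginning can plausibly be traced to this line: a Dubbo URL stores a method-level setting under "<method>.<key>" (e.g. sayHello.cache), while the generated adaptive code reads only the plain "cache" key. The following is a hypothetical simulation of that lookup, with a plain Map standing in for the URL parameters; it is not Dubbo code:

```java
import java.util.HashMap;
import java.util.Map;

// Simulates why a method-level cache setting falls back to "lru":
// the adaptive factory only reads the plain "cache" key.
public class CacheExtNameDemo {
    // Mirrors url.getParameter("cache", "lru") in CacheFactory$Adaptive
    public static String resolveExtName(Map<String, String> urlParams) {
        return urlParams.getOrDefault("cache", "lru");
    }

    public static void main(String[] args) {
        Map<String, String> serviceLevel = new HashMap<>();
        serviceLevel.put("cache", "threadlocal");          // service-level config
        System.out.println(resolveExtName(serviceLevel));  // threadlocal

        Map<String, String> methodLevel = new HashMap<>();
        methodLevel.put("sayHello.cache", "threadlocal");  // method-level config
        System.out.println(resolveExtName(methodLevel));   // lru -- the bug
    }
}
```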

@SPI("lru")
public interface CacheFactory {
    @Adaptive("cache")
    Cache getCache(URL url);
}

public abstract class AbstractCacheFactory implements CacheFactory {
    private final ConcurrentMap<String, Cache> caches = new ConcurrentHashMap<String, Cache>();

    public Cache getCache(URL url) {
        String key = url.toFullString();
        Cache cache = caches.get(key);
        if (cache == null) {
            caches.put(key, createCache(url));
            cache = caches.get(key);
        }
        return cache;
    }

    protected abstract Cache createCache(URL url);
}

public class LruCacheFactory extends AbstractCacheFactory {
    protected Cache createCache(URL url) {
        return new LruCache(url);
    }
}

Calling LruCacheFactory.getCache(URL url) actually executes the method inherited from AbstractCacheFactory. Its logic: create an LruCache instance and store it in the ConcurrentMap<String, Cache> caches, with url.toFullString() as the key.
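Note that the get / put / get sequence above is not atomic: two threads racing on the same URL can each create a Cache instance, and the earlier writer's instance is silently overwritten. A sketch of the same per-URL registry using putIfAbsent (my own variant, not Dubbo's code) guarantees that every caller ends up with the same instance:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Variant of AbstractCacheFactory's registry using putIfAbsent, so all
// threads observe a single Cache instance per URL key.
public class CacheRegistry<C> {
    private final ConcurrentMap<String, C> caches = new ConcurrentHashMap<>();

    public interface Factory<C> { C create(String url); }

    public C getCache(String url, Factory<C> factory) {
        C cache = caches.get(url);
        if (cache == null) {
            // putIfAbsent keeps the first value; a losing thread's instance is discarded
            caches.putIfAbsent(url, factory.create(url));
            cache = caches.get(url);
        }
        return cache;
    }

    public static void main(String[] args) {
        CacheRegistry<Object> registry = new CacheRegistry<>();
        Object first = registry.getCache("dubbo://127.0.0.1:20880/DemoService", u -> new Object());
        Object second = registry.getCache("dubbo://127.0.0.1:20880/DemoService", u -> new Object());
        System.out.println(first == second); // true
    }
}
```

A losing thread may still create a throwaway Cache, but no caller ever holds a different instance than the one in the map, which is the property that matters for a cache.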

Let’s look at the creation of LruCache:

public interface Cache {
    void put(Object key, Object value);
    Object get(Object key);
}

public class LruCache implements Cache {
    private final Map<Object, Object> store;

    public LruCache(URL url) {
        final int max = url.getParameter("cache.size", 1000);
        this.store = new LRUCache<Object, Object>(max);
    }

    public void put(Object key, Object value) {
        store.put(key, value);
    }

    public Object get(Object key) {
        return store.get(key);
    }
}

By default the cache stores at most 1000 entries (the cache.size URL parameter). An LRUCache object is then created with that capacity.
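Since the capacity is read from the URL parameter cache.size, it should be tunable from the configuration. Assuming the standard nested <dubbo:parameter> element is honored here (a sketch, not verified against every Dubbo version), it would look like:

```xml
<dubbo:reference id="demoService" check="false" interface="" cache="lru">
    <!-- hypothetical: override the default LRU capacity of 1000 -->
    <dubbo:parameter key="cache.size" value="500"/>
</dubbo:reference>
```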

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private static final long serialVersionUID = -5167631809472116969L;

    private static final float DEFAULT_LOAD_FACTOR = 0.75f;

    private static final int DEFAULT_MAX_CAPACITY = 1000;
    private final Lock lock = new ReentrantLock();
    private volatile int maxCapacity;

    public LRUCache(int maxCapacity) {
        /**
         * Note:
         * LinkedHashMap maintains a doubly linked list running through all of its entries.
         * This list defines the iteration order, which can be either insertion order or access order.
         * The data itself is still stored in the Entry[] array of the parent HashMap; the doubly
         * linked list only maintains the iteration order (to help implement the LRU algorithm, etc.).
         *
         * LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
         * The third parameter accessOrder: false = insertion order, true = access order
         */
        super(16, DEFAULT_LOAD_FACTOR, true);
        this.maxCapacity = maxCapacity;
    }

    /**
     * Remove the eldest entry (i.e. the entry that has not been accessed for the longest time)
     * @param eldest
     * @return
     */
    @Override
    protected boolean removeEldestEntry(java.util.Map.Entry<K, V> eldest) {
        return size() > maxCapacity;
    }

    @Override
    public V get(Object key) {
        try {
            lock.lock();
            return super.get(key);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public V put(K key, V value) {
        try {
            lock.lock();
            return super.put(key, value);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public V remove(Object key) {
        try {
            lock.lock();
            return super.remove(key);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public int size() {
        try {
            lock.lock();
            return super.size();
        } finally {
            lock.unlock();
        }
    }
    // ...
}

Note:

  • LinkedHashMap maintains a doubly linked list running through all of its entries. This list defines the iteration order, which can be either insertion order or access order; the data itself is still stored in the Entry[] array of the parent HashMap, and the doubly linked list only maintains the iteration order (to help implement the LRU algorithm, etc.).

  • When the third parameter of LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder) is accessOrder = true, every get(Object key) moves the retrieved entry to the tail of the list, which means the head node of the doubly linked list is the entry accessed longest ago. When put(Object key, Object value) is executed, removeEldestEntry(java.util.Map.Entry<K, V> eldest) is then called to decide whether that head node should be removed. (These are facts about LinkedHashMap itself; for the detailed source code analysis, see …)
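The eviction behaviour described above can be reproduced with a plain LinkedHashMap. This is a standalone sketch with a capacity of 3 (the names LruDemo and run are my own):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Demonstrates access-order iteration and removeEldestEntry-based eviction,
// the same mechanism LRUCache builds on.
public class LruDemo {
    public static List<String> run() {
        Map<String, Integer> lru = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > 3; // capacity of 3, mirroring maxCapacity in LRUCache
            }
        };
        lru.put("a", 1);
        lru.put("b", 2);
        lru.put("c", 3);
        lru.get("a");      // "a" moves to the tail (most recently used)
        lru.put("d", 4);   // over capacity: evicts "b", the head of the list
        return new ArrayList<>(lru.keySet());
    }

    public static void main(String[] args) {
        System.out.println(run()); // [c, a, d]
    }
}
```

Because get("a") refreshed "a", it is "b", not "a", that sits at the head of the list and gets evicted when "d" is inserted.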


3. ThreadLocal Cache Source Analysis

Because of the bug mentioned at the beginning of the article, cache="threadlocal" can only be configured at the service level:

<dubbo:reference id="demoService" check="false" interface="" cache="threadlocal"/>

public class ThreadLocalCacheFactory extends AbstractCacheFactory {
    protected Cache createCache(URL url) {
        return new ThreadLocalCache(url);
    }
}

public class ThreadLocalCache implements Cache {
    private final ThreadLocal<Map<Object, Object>> store;

    public ThreadLocalCache(URL url) {
        this.store = new ThreadLocal<Map<Object, Object>>() {
            @Override
            protected Map<Object, Object> initialValue() {
                return new HashMap<Object, Object>();
            }
        };
    }

    public void put(Object key, Object value) {
        store.get().put(key, value);
    }

    public Object get(Object key) {
        return store.get().get(key);
    }
}

The underlying store of ThreadLocalCache is a per-thread HashMap. Note that, as the code shows, entries are never evicted or cleared, so each thread's map keeps growing for the life of the thread.
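The per-thread isolation this implies is easy to demonstrate. The following is a standalone sketch, not Dubbo code; the names ThreadLocalCacheDemo and workerView are my own:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Shows that a ThreadLocal-backed cache is invisible to other threads:
// a value cached by the main thread is not seen by a worker thread.
public class ThreadLocalCacheDemo {
    private static final ThreadLocal<Map<Object, Object>> STORE =
            ThreadLocal.withInitial(HashMap::new);

    public static Object workerView() {
        STORE.get().put("greeting", "hello from main"); // cached by the calling thread
        AtomicReference<Object> seen = new AtomicReference<>();
        Thread worker = new Thread(() -> seen.set(STORE.get().get("greeting")));
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen.get(); // null: the worker has its own, initially empty map
    }

    public static void main(String[] args) {
        System.out.println("worker sees: " + workerView());                 // null
        System.out.println("main sees:   " + STORE.get().get("greeting"));  // hello from main
    }
}
```

This is also why threadlocal caching pays off mainly when the same thread repeats the same call, e.g. within one request handled on a fixed thread; with a thread pool, each pooled thread accumulates its own copy of the cache.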


Link of this Article: Dubbo result cache mechanism
