How to Locate a Lock That Was Never Released Because of an OOM

Published: 2021-10-09 14:03:30  Source: 億速云  Reads: 146  Author: 柒染  Category: Big Data

This article walks through the analysis and resolution of a lock that was never released because of an OOM, in the hope of giving readers who hit the same problem a simpler, more practical way to locate it.

Recently we noticed that a business endpoint was responding slowly and that Feign's fallback was being triggered. The logs showed retries had fired. What triggered the retries? The exception stack revealed:

Caused by: feign.RetryableException: connect timed out executing GET http://test-service/test-api
        at feign.FeignException.errorExecuting(FeignException.java:65) ~[feign-core-9.7.0.jar!/:?]
        at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:105) ~[feign-core-9.7.0.jar!/:?]
        at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:77) ~[feign-core-9.7.0.jar!/:?]
        at feign.hystrix.HystrixInvocationHandler$1.run(HystrixInvocationHandler.java:107) ~[feign-hystrix-9.7.0.jar!/:?]
        at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:302) ~[hystrix-core-1.5.18.jar!/:1.5.18]
        at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:298) ~[hystrix-core-1.5.18.jar!/:1.5.18]
        at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46) ~[rxjava-1.3.8.jar!/:1.3.8]
        ... 27 more
Caused by: java.net.SocketTimeoutException: connect timed out
        at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399) ~[?:?]
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242) ~[?:?]
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224) ~[?:?]
        at java.net.Socket.connect(Socket.java:591) ~[?:?]
        at sun.net.NetworkClient.doConnect(NetworkClient.java:177) ~[?:?]
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:474) ~[?:?]
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:569) ~[?:?]
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:242) ~[?:?]
        at sun.net.www.http.HttpClient.New(HttpClient.java:341) ~[?:?]
        at sun.net.www.http.HttpClient.New(HttpClient.java:362) ~[?:?]
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1248) ~[?:?]
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187) ~[?:?]
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081) ~[?:?]
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015) ~[?:?]
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587) ~[?:?]
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1515) ~[?:?]
        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527) ~[?:?]
        at feign.Client$Default.convertResponse(Client.java:150) ~[feign-core-9.7.0.jar!/:?]
        at feign.Client$Default.execute(Client.java:72) ~[feign-core-9.7.0.jar!/:?]
        at org.springframework.cloud.sleuth.instrument.web.client.feign.TracingFeignClient.execute(TracingFeignClient.java:91) ~[spring-cloud-sleuth-core-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
        at org.springframework.cloud.sleuth.instrument.web.client.feign.LazyTracingFeignClient.execute(LazyTracingFeignClient.java:55) ~[spring-cloud-sleuth-core-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
        at org.springframework.cloud.openfeign.ribbon.RetryableFeignLoadBalancer$1.doWithRetry(RetryableFeignLoadBalancer.java:103) ~[spring-cloud-openfeign-core-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
        at org.springframework.cloud.openfeign.ribbon.RetryableFeignLoadBalancer$1.doWithRetry(RetryableFeignLoadBalancer.java:88) ~[spring-cloud-openfeign-core-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
        at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:287) ~[spring-retry-1.2.4.RELEASE.jar!/:?]
        at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:180) ~[spring-retry-1.2.4.RELEASE.jar!/:?]
        at org.springframework.cloud.openfeign.ribbon.RetryableFeignLoadBalancer.execute(RetryableFeignLoadBalancer.java:88) ~[spring-cloud-openfeign-core-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
        at org.springframework.cloud.openfeign.ribbon.RetryableFeignLoadBalancer.execute(RetryableFeignLoadBalancer.java:54) ~[spring-cloud-openfeign-core-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
        at com.netflix.client.AbstractLoadBalancerAwareClient$1.call(AbstractLoadBalancerAwareClient.java:104) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5]
        at com.netflix.loadbalancer.reactive.LoadBalancerCommand$3$1.call(LoadBalancerCommand.java:303) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5]
        at com.netflix.loadbalancer.reactive.LoadBalancerCommand$3$1.call(LoadBalancerCommand.java:287) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5]
        at rx.internal.util.ScalarSynchronousObservable$3.call(ScalarSynchronousObservable.java:231) ~[rxjava-1.3.8.jar!/:1.3.8]
        at rx.internal.util.ScalarSynchronousObservable$3.call(ScalarSynchronousObservable.java:228) ~[rxjava-1.3.8.jar!/:1.3.8]
        at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8]

The call to the microservice test-service timed out while connecting. Note that this is a connect timeout, not a read timeout. The first step was to check the Ribbon connect-timeout configuration to see whether it was set too short:

# Ribbon connect timeout (default)
ribbon.ConnectTimeout=500

500 ms is plenty of time to establish a TCP connection, so the configuration was fine. Next, from the current instance, we manually called the microservice instance's endpoint to check connectivity:

curl http://<test-service ip>:<test-service port>

The call succeeded without blocking, which proved the network path itself was fine. So we examined the connection states with netstat:

Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 test-5c665cc496-6jkx5:54644 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54666 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54680 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54088 ip-10-238-8-68.eu-central-1.compute.internal:8211 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54674 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54682 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54652 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp       32      0 test-5c665cc496-6jkx5:47092 244-213-201-91.test.com:https CLOSE_WAIT  
tcp        0      0 test-5c665cc496-6jkx5:54662 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54624 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp       32      0 test-5c665cc496-6jkx5:47098 244-213-201-91.test.com:https CLOSE_WAIT  
tcp        0      0 test-5c665cc496-6jkx5:54672 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54630 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:58758 ip-10-238-9-71.eu-central-1.compute.internal:mysql ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:8251 10-238-8-117.api-gateway.test1.svc.cluster.local:43132 FIN_WAIT2   
tcp        0      0 test-5c665cc496-6jkx5:54648 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:58778 ip-10-238-9-71.eu-central-1.compute.internal:mysql ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54646 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54628 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54650 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54632 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:54638 ip-10-238-8-119.eu-central-1.compute.internal:6379 ESTABLISHED 
tcp        0      0 test-5c665cc496-6jkx5:59434 ip-10-238-9-71.eu-central-1.compute.internal:mysql ESTABLISHED 
tcp       32      0 test-5c665cc496-6jkx5:47104 244-213-201-91.test.com:https CLOSE_WAIT  
tcp        1      0 test-5c665cc496-6jkx5:54146 ip-10-238-8-68.eu-central-1.compute.internal:8211 CLOSE_WAIT  
tcp       32      0 test-5c665cc496-6jkx5:47100 244-213-201-91.test.com:https CLOSE_WAIT  
tcp       32      0 test-5c665cc496-6jkx5:47106 244-213-201-91.test.com:https CLOSE_WAIT

Two observations:

  1. There was no connection to any current instance IP of the test-service microservice.

  2. There were not many connections overall, and not many in TIME_WAIT or CLOSE_WAIT.

Our guess: the IP being called for test-service was stale.

Time for Arthas. We started Arthas to see the real IP that Feign was calling. Since we use Sleuth, we watched Sleuth's instrumented Feign client to capture the target URL of each call:

watch org.springframework.cloud.sleuth.instrument.web.client.feign.TracingFeignClient execute params[0].url()

The corresponding code is:

package org.springframework.cloud.sleuth.instrument.web.client.feign;

final class TracingFeignClient implements Client {


	@Override public Response execute(Request request, Request.Options options)
			throws IOException {
			    //.....
			}
}

Watching the calls confirmed it: the old IP was still being used. Why wasn't it updated? We looked at the EurekaClient-related code (see my other article: Spring Cloud Eureka 全解 (4) - 核心流程-服務與實例列表獲取詳解). The class responsible for refreshing the server list is PollingServerListUpdater, and its update thread pool is created like this:

int coreSize = poolSizeProp.get();
ThreadFactory factory = (new ThreadFactoryBuilder())
        .setNameFormat("PollingServerListUpdater-%d")
        .setDaemon(true)
        .build();
_serverListRefreshExecutor = new ScheduledThreadPoolExecutor(coreSize, factory);
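For context, the refresh task submitted to a pool like this is typically scheduled along the lines of the sketch below (the wrapper pattern and all names here are illustrative, not the actual Ribbon source). One detail worth knowing: an uncaught exception in a scheduled task silently cancels all of its future executions, which is yet another way a server list can quietly stop refreshing.

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PollingUpdaterSketch {
    // Schedule a guarded refresh task on the given pool. Exceptions are
    // caught and logged because an exception that escapes a scheduled
    // task cancels all of its subsequent runs.
    public static ScheduledFuture<?> start(ScheduledExecutorService executor,
                                           Runnable refreshServerList,
                                           long initialDelayMs,
                                           long refreshIntervalMs) {
        Runnable guarded = () -> {
            try {
                refreshServerList.run();
            } catch (Exception e) {
                System.err.println("server list refresh failed: " + e);
            }
        };
        return executor.scheduleWithFixedDelay(
                guarded, initialDelayMs, refreshIntervalMs, TimeUnit.MILLISECONDS);
    }
}
```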

Thankfully, these threads have a recognizable name. Let's list the threads with Arthas:

[arthas@24]$ thread
thread
Threads Total: 736, NEW: 0, RUNNABLE: 81, BLOCKED: 0, WAITING: 634, TIMED_WAITING: 21, TERMINATED: 0
ID     NAME                GROUP        PRIORI STATE %CPU   TIME   INTER DAEMON 
11     Log4j2-TF-2-AsyncLo main         5      TIMED 39     138:46 false true   
17     Log4j2-TF-5-AsyncLo main         5      TIMED 37     137:27 false true   
1045   as-command-execute- system       10     RUNNA 19     0:0    false true   
1027   AsyncAppender-Worke system       9      WAITI 0      0:0    false true   
68     AsyncResolver-boots main         5      TIMED 0      0:0    false true   
264    AsyncResolver-boots main         5      WAITI 0      0:0    false true   
1025   Attach Listener     system       9      RUNNA 0      0:0    false true   
9      Common-Cleaner      InnocuousThr 8      TIMED 0      0:0    false true   
151    DataPublisher       main         5      TIMED 0      0:2    false true   
107    DestroyJavaVM       main         5      RUNNA 0      0:37   false false  
69     DiscoveryClient-0   main         5      TIMED 0      0:1    false true   
70     DiscoveryClient-1   main         5      WAITI 0      0:1    false true   
748    DiscoveryClient-Cac main         5      WAITI 0      0:9    false true   
751    DiscoveryClient-Hea main         5      WAITI 0      0:7    false true   
71     DiscoveryClient-Ins main         5      TIMED 0      0:0    false true   
24     Druid-ConnectionPoo main         5      WAITI 0      0:0    false true   
21     Druid-ConnectionPoo main         5      WAITI 0      0:0    false true   
25     Druid-ConnectionPoo main         5      TIMED 0      0:0    false true   
22     Druid-ConnectionPoo main         5      TIMED 0      0:0    false true   
20     Druid-ConnectionPoo main         5      TIMED 0      0:0    false true   
67     Eureka-JerseyClient main         5      WAITI 0      0:0    false true   
3      Finalizer           system       8      WAITI 0      0:0    false true   
119    ForkJoinPool.common main         5      WAITI 0      0:9    false true   
30     I/O dispatcher 1    main         5      RUNNA 0      0:2    false false  
155    NFLoadBalancer-Ping main         5      TIMED 0      0:0    false true   
150    PollingServerListUp main         5      WAITI 0      0:0    false true   
157    PollingServerListUp main         5      WAITI 0      0:0    false true   
2      Reference Handler   system       10     RUNNA 0      0:0    false true   
146    RibbonApacheHttpCli main         5      TIMED 0      0:0    false true   
135    RxComputationSchedu main         5      TIMED 0      1:44   false true   
132    RxIoScheduler-1 (Ev main         5      TIMED 0      0:0    false true   
4      Signal Dispatcher   system       9      RUNNA 0      0:0    false true   
114    SimplePauseDetector main         5      TIMED 0      2:10   false true   
115    SimplePauseDetector main         5      TIMED 0      2:10   false true   
116    SimplePauseDetector main         5      TIMED 0      2:12   false true

There are two threads matching PollingServerListUp, IDs 150 and 157. Let's see what each one is doing:

[arthas@24]$ thread 150
thread 150
"PollingServerListUpdater-0" Id=150 WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@778af81b
    at java.base@11.0.4/jdk.internal.misc.Unsafe.park(Native Method)
    -  waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@778af81b
    at java.base@11.0.4/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
    at java.base@11.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
    at java.base@11.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1170)
    at java.base@11.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
    at java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
    at java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
    at java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base@11.0.4/java.lang.Thread.run(Thread.java:834)

Affect(row-cnt:0) cost in 322 ms.
[arthas@24]$ thread 157
thread 157
"PollingServerListUpdater-1" Id=157 WAITING on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@3adcda2
    at java.base@11.0.4/jdk.internal.misc.Unsafe.park(Native Method)
    -  waiting on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@3adcda2
    at java.base@11.0.4/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
    at java.base@11.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
    at java.base@11.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:917)
    at java.base@11.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1240)
    at java.base@11.0.4/java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:959)
    at com.netflix.loadbalancer.BaseLoadBalancer.setServersList(BaseLoadBalancer.java:475)
    at com.netflix.loadbalancer.DynamicServerListLoadBalancer.setServersList(DynamicServerListLoadBalancer.java:156)
    at com.netflix.loadbalancer.DynamicServerListLoadBalancer.updateAllServerList(DynamicServerListLoadBalancer.java:267)
    at com.netflix.loadbalancer.DynamicServerListLoadBalancer.updateListOfServers(DynamicServerListLoadBalancer.java:250)
    at com.netflix.loadbalancer.DynamicServerListLoadBalancer$1.doUpdate(DynamicServerListLoadBalancer.java:62)
    at com.netflix.loadbalancer.PollingServerListUpdater$1.run(PollingServerListUpdater.java:116)
    at java.base@11.0.4/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base@11.0.4/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base@11.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base@11.0.4/java.lang.Thread.run(Thread.java:834)

    Number of locked synchronizers = 1
    - java.util.concurrent.ThreadPoolExecutor$Worker@6ab1c5f9

PollingServerListUpdater-1 was stuck waiting to acquire a lock, so the problem had to be there. Reading the corresponding Ribbon code showed that:

  1. PollingServerListUpdater-1 needs to acquire the write lock of allServerLock.

  2. The read lock of allServerLock is acquired only by runPinger (pinging each instance requires reading the instance list).

So was the ping at fault? Let's look at the lock-acquisition code in BaseLoadBalancer:

public void runPinger() throws Exception {
    // irrelevant code omitted
    allLock = allServerLock.readLock();
    allLock.lock();
    allServers = allServerList.toArray(new Server[allServerList.size()]);
    allLock.unlock();
    // ...
}

Notice that this code does not follow the try { lock } finally { unlock } idiom: if the code between lock() and unlock() throws, the lock is never released. It is also a reentrant lock, so it must be unlocked exactly as many times as it was locked; miss even one unlock and the write lock can never be acquired by the waiting updater thread.
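A minimal sketch of the safe pattern (class and method names here are our own, not the actual Ribbon code):

```java
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SafeSnapshot {
    private final ReentrantReadWriteLock allServerLock = new ReentrantReadWriteLock();

    // Take a snapshot of the server list under the read lock. The finally
    // block guarantees unlock() runs even if toArray() throws -- including
    // an OutOfMemoryError thrown while allocating the target array.
    public String[] snapshot(List<String> allServerList) {
        Lock readLock = allServerLock.readLock();
        readLock.lock();
        try {
            return allServerList.toArray(new String[0]);
        } finally {
            readLock.unlock();
        }
    }
}
```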

The line allServers = allServerList.toArray(new Server[allServerList.size()]) does no I/O, only a data copy, and cannot throw a NullPointerException here. Our guess was that an OOM had occurred, the array allocation failed, and unlock() was never reached. Checking the logs confirmed it: an OutOfMemoryError had indeed been thrown.

The lesson: always use try { lock } finally { unlock }. Even if the code between lock and unlock cannot throw any exception of its own, an OOM can still prevent the lock from being released.
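To make this concrete, here is a small self-contained demo (all names our own) that simulates a Throwable between lock() and unlock() and shows the write lock becoming permanently unavailable. A ReentrantReadWriteLock cannot be upgraded from read to write, so the same-thread tryLock below fails just as the blocked PollingServerListUpdater-1 writer did:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockLeakDemo {
    // Acquire the read lock without try/finally, fail before unlock(),
    // then report whether the write lock can still be acquired.
    public static boolean writerCanAcquire() throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        try {
            lock.readLock().lock();
            // Simulate a failure between lock() and unlock(); a real
            // OutOfMemoryError during array allocation behaves the same.
            if (true) throw new OutOfMemoryError("simulated");
            lock.readLock().unlock(); // never reached
        } catch (OutOfMemoryError ignored) {
            // The caller survives, but the read lock is still held.
        }
        // Read-to-write upgrade is not supported, so this times out.
        return lock.writeLock().tryLock(100, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("writer acquired lock: " + writerCanAcquire()); // prints false
    }
}
```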

That concludes the walkthrough of locating a lock that was never released because of an OOM. Hopefully the content above is of some help.
