
As for how to clear the Linux page, dentry, and inode caches, from man 5 proc:

/proc/sys/vm/drop_caches (since Linux 2.6.16)

Writing to this file causes the kernel to drop clean caches, dentries, and inodes from memory, causing that memory to become free.
This can be useful for memory management testing and performing reproducible filesystem benchmarks.
Because writing to this file causes the benefits of caching to be lost, it can degrade overall system performance.

To free pagecache, use: echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes, use: echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes, use: echo 3 > /proc/sys/vm/drop_caches
Because writing to this file is a nondestructive operation and dirty objects are not freeable, the user should run sync(8) first.

You generally don't want to flush the cache, as its entire purpose is to improve performance, but for debugging purposes you can do so by using drop_caches like so (note: you must be root to write to drop_caches, but sync can be run as any user):

# sync && echo 3 > /proc/sys/vm/drop_caches
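
A minimal before/after check (not part of the original note; the numbers will differ per system) makes the effect of the drop visible by comparing the free output around it:

# free -g output before the drop: note the buff/cache and available columns
free -g
# flush dirty pages first so they become clean and droppable
sync
# drop pagecache, dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
# buff/cache should shrink and available should grow by roughly the same amount
free -g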

Applies to

Oracle Database - Enterprise Edition - Version 12.2.0.1 and later
Linux OS - Version Oracle Linux 7.0 and later
Linux x86-64

Symptoms

The node crashes regularly due to low available memory and no free swap space.

Cause

The available memory keeps running out even though a large amount of memory is allocated to cache. Normally the kernel should be able to reclaim memory allocated to cache, because it is typically I/O buffer cache.
However, issuing "echo 1 > /proc/sys/vm/drop_caches" to free the I/O buffer cache freed only about 50 GB of memory, while the amount of memory reported as cache still showed over 300 GB.

Before issuing "echo 1 > /proc/sys/vm/drop_caches", 24 GB of available memory and 354 GB of buff/cache are reported:

[root@node04 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:            755         375          25         321         354          24
Swap:            23          23           0

However, issuing "echo 1 > /proc/sys/vm/drop_caches" freed only a small amount of memory from the buff/cache area and moved it to available memory.

Even after issuing "echo 1 > /proc/sys/vm/drop_caches", available memory increased by only 27 GB, because only 27 GB out of the 354 GB of buff/cache was released:

[root@node04 ~]# echo 1 > /proc/sys/vm/drop_caches;free -g
              total        used        free      shared  buff/cache   available
Mem:            755         372          55         321         327          51
Swap:            23          23           0

The reason for this is that most of the memory accounted as cache is shared memory allocated by the database (not the SGA, which is covered by hugepages, but shared memory used in /dev/shm).
The amount of memory in /dev/shm grows daily, eventually leading to the available memory running out, because the kernel cannot free the memory in /dev/shm unless the application/database frees it.
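
A quick way to confirm this (a supplementary check, not from the original note) is to look at the Shmem counter in /proc/meminfo: tmpfs/shared-memory pages are included in the Cached figure that free reports as buff/cache, but drop_caches cannot reclaim them while the files in /dev/shm still exist.

# Shmem is counted inside Cached but is not reclaimable by drop_caches
grep -E '^(Cached|Shmem|MemAvailable)' /proc/meminfo
# tmpfs usage backed by that shared memory
df -h /dev/shm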

[root@node04 ~]# du -s /dev/shm
37460176 /dev/shm

"ls -l /dev/shm" output shows most of the memory allocated to /dev/shm is used by instance2 database for java client connection (i.e./dev/shm/xxxxSHM)

[root@node04 ~]# date; ls -ltrh /dev/shm/xxxxSHM_instance2* |tail -10
Thu Dec 8 17:21:59 GMT 2022
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:12 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704348541
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:13 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704354558
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:14 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704360581
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:15 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704366608
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:16 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704372634
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:17 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704378648
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:18 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704384672
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:19 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704390700
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:20 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704396715
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:21 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704402743

More and more /dev/shm/xxxxSHM_instance2* segments are created daily.
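
To quantify that growth, a small check like the following (illustrative only; the xxxxSHM_instance2 name pattern is taken from the listing above) can be run periodically, for example from cron, to record how many segments exist and how much space they consume:

date
# number of shared memory segments created by instance2
ls /dev/shm/xxxxSHM_instance2* 2>/dev/null | wc -l
# total size of those segments (last line is the grand total)
du -shc /dev/shm/xxxxSHM_instance2* 2>/dev/null | tail -1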


Solution

Recycle the problem instance (instance2) to free the memory already allocated in /dev/shm.

Find out the cause of the increased usage of the shared memory in /dev/shm.


The above case is caused by the following known bug:
bug 33423397 - ORA-07445: EXCEPTION ENCOUNTERED: CORE DUMP [xxxx_SHM_MAP_CALLBACK()+65]
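
After recycling the instance, a check such as the following (an assumed verification step, not part of the original note) confirms that the shared memory has actually been released and is visible again as available memory:

# /dev/shm usage should drop sharply once instance2 is restarted
du -sh /dev/shm
# the shared and buff/cache columns should shrink and available should grow
free -g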
