When I ran the command jmap -histo:live, I noticed that the JVM performed a Full GC.
Out of curiosity, let's run an experiment to see whether this is a coincidence or guaranteed behavior.
Experiment

Check the Full GC status:

jstat -gcutil 15767 1000

Output:

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0.00 100.00  71.51  94.46  95.32  94.01    311    2.283     4    0.445    2.728
  0.00 100.00  71.51  94.46  95.32  94.01    311    2.283     4    0.445    2.728
  0.00 100.00  71.51  94.46  95.32  94.01    311    2.283     4    0.445    2.728

Print the object memory histogram:
jmap -histo 15767
Output:
 num     #instances         #bytes  class name
----------------------------------------------
   1:        154109       15734384  [C
   2:        380355        6085680  java.lang.ref.ReferenceQueue$Lock
   3:         38445        4844040  [B
   4:         49598        4364624  java.lang.reflect.Method
   5:         59114        4242864  [Ljava.lang.Object;
   6:         39519        3183800  [Ljava.util.WeakHashMap$Entry;
   7:        131945        3166680  java.lang.String
   8:         90420        2893440  java.util.concurrent.ConcurrentHashMap$Node
   9:         70118        2243776  java.util.concurrent.locks.AbstractQueuedSynchronizer$Node
  10:         11323        1800808  [I
  11:         15254        1704280  java.lang.Class
  12:         40695        1302240  java.lang.ref.ReferenceQueue

Check the Full GC status again:
jstat -gcutil 15767 1000
Sample output:
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0.00 100.00  73.70  94.46  95.32  94.01    311    2.283     4    0.445    2.728
  0.00 100.00  73.70  94.46  95.32  94.01    311    2.283     4    0.445    2.728
  0.00 100.00  73.70  94.46  95.32  94.01    311    2.283     4    0.445    2.728

Eden has grown from 71.51% to 73.70%, but FGC is still 4: jmap -histo did not trigger a Full GC.

Print the live-object memory histogram:
jmap -histo:live 15767
 num     #instances         #bytes  class name
----------------------------------------------
   1:         93472       10021048  [C
   2:         16056        3987032  [I
   3:         36038        3171344  java.lang.reflect.Method
   4:         90117        2883744  java.util.concurrent.ConcurrentHashMap$Node
   5:         88345        2120280  java.lang.String
   6:         15122        1690552  java.lang.Class
   7:         23932        1494600  [Ljava.lang.Object;
   8:          5828        1303016  [B
   9:         23966        1150368  org.aspectj.weaver.reflect.ShadowMatchImpl
  10:         35202        1126464  java.util.HashMap$Node
  11:          9818         922264  [Ljava.util.HashMap$Node;
  12:         21378         855120  java.util.LinkedHashMap$Entry

Check the Full GC status again:
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0.00   0.00   2.07  60.00  94.79  93.30    311    2.283     5    0.670    2.954
  0.00   0.00   2.07  60.00  94.79  93.30    311    2.283     5    0.670    2.954
  0.00   0.00   2.07  60.00  94.79  93.30    311    2.283     5    0.670    2.954

This time FGC jumped from 4 to 5 (and FGCT from 0.445 to 0.670): jmap -histo:live triggered a Full GC.
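The before/after comparison above is easy to automate. Here is a minimal sketch (Python; the function names and the whitespace-splitting assumption are mine, the column order comes from the jstat -gcutil samples above) that extracts the FGC counter from a jstat -gcutil data line and reports whether a Full GC occurred between two samples:

```python
# Column order as printed by `jstat -gcutil` in the samples above:
JSTAT_COLUMNS = ["S0", "S1", "E", "O", "M", "CCS",
                 "YGC", "YGCT", "FGC", "FGCT", "GCT"]

def parse_gcutil_line(line: str) -> dict:
    """Split one jstat -gcutil data line into a column -> value mapping."""
    values = [float(v) for v in line.split()]
    return dict(zip(JSTAT_COLUMNS, values))

def full_gc_happened(before: str, after: str) -> bool:
    """True if the FGC (Full GC count) counter increased between two samples."""
    return parse_gcutil_line(after)["FGC"] > parse_gcutil_line(before)["FGC"]

# Samples taken from the experiment above (before and after jmap -histo:live):
before = "0.00 100.00  73.70  94.46  95.32  94.01    311    2.283     4    0.445    2.728"
after  = "0.00   0.00   2.07  60.00  94.79  93.30    311    2.283     5    0.670    2.954"

print(full_gc_happened(before, after))  # → True: FGC went from 4 to 5
```

In practice you would feed it two lines captured around the jmap call; a constant FGC value (as after plain jmap -histo) yields False.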
The official documentation says nothing about this command triggering a Full GC, and the various sources I consulted did not agree on it either.
So let's dig into the source. The relevant file is (OpenJDK) jdk/src/hotspot/share/services/attachListener.cpp:

https://github.com/openjdk/jdk/blob/master/src/hotspot/share/services/attachListener.cpp

Focus on the following two functions.
dump_heap

// Implementation of "dumpheap" command.
// See also: HeapDumpDCmd class
//
// Input arguments :-
//   arg0: Name of the dump file
//   arg1: "-live" or "-all"
//   arg2: Compress level
static jint dump_heap(AttachOperation* op, outputStream* out) {
  const char* path = op->arg(0);
  if (path == NULL || path[0] == '\0') {
    out->print_cr("No dump file specified");
  } else {
    bool live_objects_only = true;   // default is true to retain the behavior before this change is made
    const char* arg1 = op->arg(1);
    if (arg1 != NULL && (strlen(arg1) > 0)) {
      if (strcmp(arg1, "-all") != 0 && strcmp(arg1, "-live") != 0) {
        out->print_cr("Invalid argument to dumpheap operation: %s", arg1);
        return JNI_ERR;
      }
      live_objects_only = strcmp(arg1, "-live") == 0;
    }
    const char* num_str = op->arg(2);
    uintx level = 0;
    if (num_str != NULL && num_str[0] != '\0') {
      if (!Arguments::parse_uintx(num_str, &level, 0)) {
        out->print_cr("Invalid compress level: [%s]", num_str);
        return JNI_ERR;
      } else if (level < 1 || level > 9) {
        out->print_cr("Compression level out of range (1-9): " UINTX_FORMAT, level);
        return JNI_ERR;
      }
    }
    // Parallel thread number for heap dump, initialize based on active processor count.
    // Note the real number of threads used is also determined by active workers and compression
    // backend thread number. See heapDumper.cpp.
    uint parallel_thread_num = MAX2(1, (uint)os::initial_active_processor_count() * 3 / 8);
    // Request a full GC before heap dump if live_objects_only = true
    // This helps reduces the amount of unreachable objects in the dump
    // and makes it easier to browse.
    HeapDumper dumper(live_objects_only /* request GC if _live is true */);
    dumper.dump(path, out, (int)level, false, (uint)parallel_thread_num);
  }
  return JNI_OK;
}
heap_inspection

// Implementation of "inspectheap" command
// See also: ClassHistogramDCmd class
//
// Input arguments :-
//   arg0: "-live" or "-all"
//   arg1: Name of the dump file or NULL
//   arg2: parallel thread number
static jint heap_inspection(AttachOperation* op, outputStream* out) {
  bool live_objects_only = true;   // default is true to retain the behavior before this change is made
  outputStream* os = out;          // if path not specified or path is NULL, use out
  fileStream* fs = NULL;
  const char* arg0 = op->arg(0);
  uint parallel_thread_num = MAX2(1, (uint)os::initial_active_processor_count() * 3 / 8);
  if (arg0 != NULL && (strlen(arg0) > 0)) {
    if (strcmp(arg0, "-all") != 0 && strcmp(arg0, "-live") != 0) {
      out->print_cr("Invalid argument to inspectheap operation: %s", arg0);
      return JNI_ERR;
    }
    live_objects_only = strcmp(arg0, "-live") == 0;
  }
  const char* path = op->arg(1);
  if (path != NULL && path[0] != '\0') {
    // create file
    fs = new (ResourceObj::C_HEAP, mtInternal) fileStream(path);
    if (fs == NULL) {
      out->print_cr("Failed to allocate space for file: %s", path);
    }
    os = fs;
  }
  const char* num_str = op->arg(2);
  if (num_str != NULL && num_str[0] != '\0') {
    uintx num;
    if (!Arguments::parse_uintx(num_str, &num, 0)) {
      out->print_cr("Invalid parallel thread number: [%s]", num_str);
      return JNI_ERR;
    }
    parallel_thread_num = num == 0 ? parallel_thread_num : (uint)num;
  }
  VM_GC_HeapInspection heapop(os, live_objects_only /* request_full_gc */, parallel_thread_num);
  VMThread::execute(&heapop);
  if (os != NULL && os != out) {
    out->print_cr("Heap inspection file created: %s", path);
    delete fs;
  }
  return JNI_OK;
}
Key points

Pay particular attention to these two lines:

HeapDumper dumper(live_objects_only /* request GC if _live is true */);
VM_GC_HeapInspection heapop(os, live_objects_only /* request_full_gc */, parallel_thread_num);

live_objects_only is true exactly when the -live option is passed, and in that case both operations request a full GC before walking the heap.

Conclusion

jmap -histo: does not trigger a Full GC.
jmap -histo:live: does trigger a Full GC.
Common commands for memory analysis

List Java processes:

jps

Sample output:

9984
2772 Jps
7956 Launcher
2700 RemoteMavenServer

Show the overall JVM heap status:
jmap -heap 15767
Sample output:

Attaching to process ID 15767, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.221-b11

using thread-local object allocation.
Mark Sweep Compact GC

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 260046848 (248.0MB)
   NewSize                  = 5570560 (5.3125MB)
   MaxNewSize               = 86638592 (82.625MB)
   OldSize                  = 11206656 (10.6875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 27918336 (26.625MB)
   used     = 20837720 (19.872398376464844MB)
   free     = 7080616 (6.752601623535156MB)
   74.63811596794308% used
Eden Space:
   capacity = 24838144 (23.6875MB)
   used     = 17757536 (16.934906005859375MB)
   free     = 7080608 (6.752593994140625MB)
   71.49300688489446% used
From Space:
   capacity = 3080192 (2.9375MB)
   used     = 3080184 (2.9374923706054688MB)
   free     = 8 (7.62939453125E-6MB)
   99.99974027593085% used
To Space:
   capacity = 3080192 (2.9375MB)
   used     = 0 (0.0MB)
   free     = 3080192 (2.9375MB)
   0.0% used
tenured generation:
   capacity = 61616128 (58.76171875MB)
   used     = 58201952 (55.505706787109375MB)
   free     = 3414176 (3.256011962890625MB)
   94.4589572392475% used

38450 interned Strings occupying 3710912 bytes.

Dump the entire heap to a file:
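The percentages in the jmap -heap output are simply used / capacity. A quick sketch (Python; the helper name is mine) that reproduces the tenured-generation figure from the sample above:

```python
def usage_percent(capacity: int, used: int) -> float:
    """Heap-region usage the way jmap -heap reports it: used / capacity * 100."""
    return used / capacity * 100

# Tenured-generation figures from the sample output above:
capacity = 61616128
used     = 58201952

print(usage_percent(capacity, used))  # ~94.459%, matching the reported value
```

The same arithmetic applied to the New Generation numbers (20837720 of 27918336) yields the reported ~74.638% used.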
jmap -dump:format=b,file=/var/log/jvm/15767.dump 15767
Sample output:

Dumping heap to /var/log/jvm/15767.dump ...
Heap dump file created

Start a web server to browse the heap dump exported by jmap:
jhat -J-Xmx1024M /var/log/jvm/15767.dump
Sample output:

Reading from /var/log/jvm/15767.dump...
Dump file created Mon Oct 11 14:48:22 CST 2021
Snapshot read, resolving...
Resolving 844127 objects...
Chasing references, expect 168 dots..........................
Eliminating duplicate references..........................
Snapshot resolved.
Started HTTP server on port 7000
Server is ready.
jstack 15767: prints the stack traces of all threads in process 15767.



