#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#
config USER_STACKTRACE_SUPPORT
	bool
config HAVE_FUNCTION_TRACER
	bool
config HAVE_FUNCTION_RET_TRACER
	bool
config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool
config HAVE_FTRACE_MCOUNT_RECORD
	bool
config TRACER_MAX_TRACE
	bool
config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
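	  For example, assuming debugfs is mounted at /debugfs (the mount
	  point may vary), the function tracer can be selected at runtime
	  with:

	      echo function > /debugfs/tracing/current_tracer
	      cat /debugfs/tracing/trace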
config FUNCTION_RET_TRACER
	bool "Kernel Function return Tracer"
	depends on HAVE_FUNCTION_RET_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at its return.
	  Its first purpose is to trace the duration of functions.
	  This is done by setting the current return address on the thread
	  info structure of the current task.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
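	  To select this tracer at runtime (debugfs mount point may vary):

	      echo irqsoff > /debugfs/tracing/current_tracer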
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
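	  To select this tracer at runtime (it registers under the name
	  "wakeup"; debugfs mount point may vary):

	      echo wakeup > /debugfs/tracing/current_tracer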
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.
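	  To select this tracer at runtime (it registers under the name
	  "sched_switch"; debugfs mount point may vary):

	      echo sched_switch > /debugfs/tracing/current_tracer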
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  ( Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace. )
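	  A typical invocation, assuming a kernel source tree and debugfs
	  mounted at /debug (paths may vary):

	      cat /debug/tracing/trace | perl scripts/bootgraph.pl > boot.svg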
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed in much detail.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
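	  To select this tracer at runtime (it registers under the name
	  "branch"; debugfs mount point may vary):

	      echo branch > /debugfs/tracing/current_tracer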
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.
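	  To inspect the recorded worst-case stack trace (debugfs mount
	  point may vary):

	      cat /debugfs/tracing/stack_trace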
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
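	  With this option, tracing can also be restricted to selected
	  functions via the set_ftrace_filter file, e.g. (debugfs mount
	  point may vary):

	      echo sys_open > /debugfs/tracing/set_ftrace_filter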
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.