New ways to find
latency in Linux using
tracing
Steven Rostedt
Open Source Engineer at VMware
©2021 VMware, Inc.
srostedt@vmware.com
twitter: @srostedt
What is ftrace?
● Official tracer of the Linux Kernel
  – Introduced in 2008
● Came from the PREEMPT_RT patch set
  – Initially focused on finding causes of latency
● Has grown significantly since 2008
  – Added new ways to find latency
  – Does much more than find latency
The ftrace interface (the tracefs file system)
# mount -t tracefs nodev /sys/kernel/tracing
# ls /sys/kernel/tracing
available_events max_graph_depth snapshot
available_filter_functions options stack_max_size
available_tracers osnoise stack_trace
buffer_percent per_cpu stack_trace_filter
buffer_size_kb printk_formats synthetic_events
buffer_total_size_kb README timestamp_mode
current_tracer recursed_functions trace
dynamic_events saved_cmdlines trace_clock
dyn_ftrace_total_info saved_cmdlines_size trace_marker
enabled_functions saved_tgids trace_marker_raw
error_log set_event trace_options
eval_map set_event_notrace_pid trace_pipe
events set_event_pid trace_stat
free_buffer set_ftrace_filter tracing_cpumask
function_profile_enabled set_ftrace_notrace tracing_max_latency
hwlat_detector set_ftrace_notrace_pid tracing_on
instances set_ftrace_pid tracing_thresh
kprobe_events set_graph_function uprobe_events
kprobe_profile set_graph_notrace uprobe_profile
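These control files can be driven directly from the shell. A minimal sketch (run as root; the file names are all from the listing above):

```shell
cd /sys/kernel/tracing
echo function > current_tracer   # select the function tracer
echo 1 > tracing_on              # enable recording into the ring buffer
sleep 1                          # let it collect some events
echo 0 > tracing_on              # stop recording
head -20 trace                   # view the captured trace
echo nop > current_tracer        # disable tracing again
```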
Introducing trace-cmd
Luckily, today we do not need to know all those files
● trace-cmd is a front end interface to ftrace
● It interfaces with the tracefs directory for you
https://www.trace-cmd.org
git clone git://git.kernel.org/pub/scm/utils/trace-cmd/trace-cmd.git
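As a sketch of the front end in action (assuming trace-cmd is installed and run as root), the raw file manipulation collapses into a few commands:

```shell
trace-cmd record -p function_graph -F ls   # trace only the forked 'ls' process
trace-cmd report | head                    # read back the recorded trace.dat
trace-cmd reset                            # return ftrace to its default state
```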
The Old Tracers
● Wake up tracers
  – All tasks
  – RT tasks
  – Deadline tasks
● Preemption off tracers
  – irqsoff tracer
  – preemptoff tracer
  – preemptirqsoff tracer
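Which of these tracers a given kernel provides can be checked in tracefs; this sketch simply reads the listing (the names vary with the kernel configuration):

```shell
cat /sys/kernel/tracing/available_tracers
# typically includes: wakeup_dl wakeup_rt wakeup preemptirqsoff
# preemptoff irqsoff function_graph function nop
```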
Wake up tracer
[Diagram: while Task 1 runs, an interrupt triggers a wake up event for Task 2; the latency is the time from the wake up event until the sched switch event that puts Task 2 on the CPU]
Running wakeup_rt tracer
# trace-cmd start -p wakeup_rt
# trace-cmd show
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 5.10.52-test-rt47
# latency: 94 us, #211/211, CPU#4 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:8)
# -----------------
# | task: rcuc/4-42 (uid:0 nice:0 policy:1 rt_prio:1)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
#  / ||||||||  | /
bash-1872 4dN.h5.. 1us+: 1872:120:R + [004] 42: 98:R rcuc/4
bash-1872 4dN.h5.. 30us : <stack trace>
=> __ftrace_trace_stack
=> probe_wakeup
=> ttwu_do_wakeup
=> try_to_wake_up
Running wakeup_rt tracer
=> invoke_rcu_core
=> rcu_sched_clock_irq
=> update_process_times
=> tick_sched_handle
=> tick_sched_timer
=> __hrtimer_run_queues
=> hrtimer_interrupt
=> __sysvec_apic_timer_interrupt
=> asm_call_irq_on_stack
=> sysvec_apic_timer_interrupt
=> asm_sysvec_apic_timer_interrupt
=> lock_acquire
=> _raw_spin_lock
=> shmem_get_inode
=> shmem_mknod
=> lookup_open.isra.0
=> path_openat
=> do_filp_open
=> do_sys_openat2
=> __x64_sys_openat
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe
bash-1872 4dN.h5.. 31us : 0
bash-1872 4dN.h4.. 32us : task_woken_rt <-ttwu_do_wakeup
bash-1872 4dN..2.. 87us : put_prev_entity <-put_prev_task_fair
bash-1872 4dN..2.. 87us : update_curr <-put_prev_entity
bash-1872 4dN..2.. 87us : __update_load_avg_se <-update_load_avg
bash-1872 4dN..2.. 87us : __update_load_avg_cfs_rq <-update_load_avg
Running wakeup_rt tracer
bash-1872 4dN..2.. 87us : pick_next_task_stop <-__schedule
bash-1872 4dN..2.. 87us : pick_next_task_dl <-__schedule
bash-1872 4dN..2.. 87us : pick_next_task_rt <-__schedule
bash-1872 4dN..2.. 88us : update_rt_rq_load_avg <-pick_next_task_rt
bash-1872 4d...3.. 88us : __schedule
bash-1872 4d...3.. 88us : 1872:120:R ==> [004] 42: 98:R rcuc/4
bash-1872 4d...3.. 94us : <stack trace>
=> __ftrace_trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule_common
=> preempt_schedule_thunk
=> _raw_spin_unlock
=> shmem_get_inode
=> shmem_mknod
=> lookup_open.isra.0
=> path_openat
=> do_filp_open
=> do_sys_openat2
=> __x64_sys_openat
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe
Interrupt off tracer
[Diagram: a task or interrupt disables IRQs; the latency is the time from when IRQs are disabled until they are enabled again]
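The latency tracers keep track of the longest section seen so far and report it in tracing_max_latency. A sketch, assuming the irqsoff tracer is compiled into the kernel:

```shell
cd /sys/kernel/tracing
echo 0 > tracing_max_latency   # reset the recorded maximum
echo irqsoff > current_tracer  # trace the longest irqs-off section
sleep 10                       # let the system run for a while
cat tracing_max_latency        # longest irqs-off latency seen, in microseconds
head -30 trace                 # the trace of that worst-case section
```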
Running preemptirqsoff tracer
# trace-cmd start -p preemptirqsoff
# trace-cmd show
# tracer: preemptirqsoff
#
# preemptirqsoff latency trace v1.1.5 on 5.14.0-rc4-test+
# --------------------------------------------------------------------
# latency: 2325 us, #3005/3005, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)
# -----------------
# | task: bash-48651 (uid:0 nice:0 policy:0 rt_prio:0)
# -----------------
# => started at: _raw_spin_lock
# => ended at: _raw_spin_unlock
#
#
# _------=> CPU#
# / _-----=> irqs-off
# | / _----=> need-resched
# || / _---=> hardirq/softirq
# ||| / _--=> preempt-depth
# |||| / delay
# cmd pid ||||| time | caller
#  / |||||  | /
trace-cm-48651 1...1 0us : _raw_spin_lock
trace-cm-48651 1...1 1us : do_raw_spin_trylock <-_raw_spin_lock
trace-cm-48651 1...1 1us : flush_tlb_batched_pending <-unmap_page_range
trace-cm-48651 1...1 2us : vm_normal_page <-unmap_page_range
trace-cm-48651 1...1 2us : page_remove_rmap <-unmap_page_range
Running preemptirqsoff tracer
[...]
trace-cm-48651 1...1 2313us : unlock_page_memcg <-unmap_page_range
trace-cm-48651 1...1 2314us : __rcu_read_unlock <-unlock_page_memcg
trace-cm-48651 1...1 2315us : __tlb_remove_page_size <-unmap_page_range
trace-cm-48651 1...1 2316us : vm_normal_page <-unmap_page_range
trace-cm-48651 1...1 2317us : page_remove_rmap <-unmap_page_range
trace-cm-48651 1...1 2318us : lock_page_memcg <-page_remove_rmap
trace-cm-48651 1...1 2318us : __rcu_read_lock <-lock_page_memcg
trace-cm-48651 1...1 2319us : unlock_page_memcg <-unmap_page_range
trace-cm-48651 1...1 2320us : __rcu_read_unlock <-unlock_page_memcg
trace-cm-48651 1...1 2321us : __tlb_remove_page_size <-unmap_page_range
trace-cm-48651 1...1 2322us : _raw_spin_unlock <-unmap_page_range
trace-cm-48651 1...1 2323us : do_raw_spin_unlock <-_raw_spin_unlock
trace-cm-48651 1...1 2324us : preempt_count_sub <-_raw_spin_unlock
trace-cm-48651 1...1 2325us : _raw_spin_unlock
trace-cm-48651 1...1 2327us+: tracer_preempt_on <-_raw_spin_unlock
trace-cm-48651 1...1 2343us : <stack trace>
=> unmap_page_range
=> unmap_vmas
=> exit_mmap
=> mmput
=> begin_new_exec
=> load_elf_binary
=> bprm_execve
=> do_execveat_common
=> __x64_sys_execve
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe
Running preemptirqsoff tracer
# trace-cmd start -p preemptirqsoff -d -O sym-offset
# trace-cmd show
# tracer: preemptirqsoff
[..]
# latency: 248 us, #4/4, CPU#6 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)
# -----------------
# | task: ksoftirqd/6-47 (uid:0 nice:0 policy:0 rt_prio:0)
# -----------------
# => started at: run_ksoftirqd
# => ended at: run_ksoftirqd
[..]
# cmd pid ||||| time | caller
#  / |||||  | /
<idle>-0 4d..1 1us!: cpuidle_enter_state+0xcf/0x410
kworker/-48096 4...1 242us : schedule+0x4d/0xe0 <-schedule+0x4d/0xe0
kworker/-48096 4...1 243us : tracer_preempt_on+0xf4/0x110 <-schedule+0x4d/0xe0
kworker/-48096 4...1 248us : <stack trace>
=> worker_thread+0xd4/0x3c0
=> kthread+0x155/0x180
=> ret_from_fork+0x22/0x30
Measuring Interrupt Latency
[Diagram: an interrupt preempts a running task; the latency is the time the interrupt handler takes to run]
Measuring latency from interrupts
● You can easily trace the latency from interrupts
  – For x86:
# trace-cmd record -p function_graph -l handle_irq_event \
    -l '*sysvec_*' -e irq_handler_entry \
    -e 'irq_vectors:*entry*'
Tracing Latency from Interrupts
# trace-cmd report -l --cpu 2
sleep-2513 [002] 2973.197184: funcgraph_entry: | __sysvec_apic_timer_interrupt() {
sleep-2513 [002] 2973.197186: local_timer_entry: vector=236
sleep-2513 [002] 2973.197196: funcgraph_exit: + 12.233 us | }
sleep-2513 [002] 2973.197206: funcgraph_entry: | __sysvec_irq_work() {
sleep-2513 [002] 2973.197207: irq_work_entry: vector=246
sleep-2513 [002] 2973.197207: funcgraph_exit: 1.405 us | }
<idle>-0 [002] 2973.198186: funcgraph_entry: | __sysvec_apic_timer_interrupt() {
<idle>-0 [002] 2973.198187: local_timer_entry: vector=236
<idle>-0 [002] 2973.198193: funcgraph_exit: 7.992 us | }
<idle>-0 [002] 2974.991721: funcgraph_entry: | handle_irq_event() {
<idle>-0 [002] 2974.991723: irq_handler_entry: irq=24 name=ahci[0000:00:1f.2]
<idle>-0 [002] 2974.991733: funcgraph_exit: + 13.158 us | }
<idle>-0 [002] 2977.039843: funcgraph_entry: | handle_irq_event() {
<idle>-0 [002] 2977.039845: irq_handler_entry: irq=24 name=ahci[0000:00:1f.2]
<idle>-0 [002] 2977.039855: funcgraph_exit: + 12.869 us | }
Problems with the latency tracers
● No control over what tasks they trace
  – They trace the highest priority process
  – May not be the process you are interested in
● Not flexible
  – Has one use case
  – General for the entire system
Tracing interrupt latency with function graph tracer
● Does not have a “max latency”
  – You see all the latency in a trace
  – No way to record the max latency found
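One partial workaround (an aside, not from the slides, though the file appears in the tracefs listing earlier): tracing_thresh makes the function graph tracer record only functions that ran longer than the given number of microseconds, narrowing a full trace down to the slow paths:

```shell
cd /sys/kernel/tracing
echo 100 > tracing_thresh            # only record functions taking > 100 us
echo function_graph > current_tracer
sleep 1
head -20 trace
echo 0 > tracing_thresh              # back to recording everything
```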
Introducing Synthetic Events
● Can map two events into a single event
  – sched_waking + sched_switch → wakeup_latency
  – irqs_disabled + irqs_enabled → irqs_off_latency
  – irq_enter_handler + irq_exit_handler → irq_latency
● Have all the functionality of a normal event
  – Can be filtered on
  – Can have triggers attached (like histograms)
Synthetic events
# echo 'wakeup_lat s32 pid; u64 delta;' > /sys/kernel/tracing/synthetic_events
# echo 'hist:keys=pid:__arg__1=pid,__arg__2=common_timestamp.usecs' > \
    /sys/kernel/tracing/events/sched/sched_waking/trigger
# echo 'hist:keys=next_pid:pid=$__arg__1,'\
'delta=common_timestamp.usecs-$__arg__2:onmatch(sched.sched_waking)'\
'.trace(wakeup_lat,$pid,$delta) if next_comm == "cyclictest"' > \
    /sys/kernel/tracing/events/sched/sched_switch/trigger
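Once defined this way, wakeup_lat shows up like any other event; a sketch of enabling it and watching it stream:

```shell
cat /sys/kernel/tracing/synthetic_events   # confirm the definition took
echo 1 > /sys/kernel/tracing/events/synthetic/wakeup_lat/enable
cat /sys/kernel/tracing/trace_pipe         # wakeup_lat events appear here
```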
Too Complex!
Introducing libtracefs
A library to interact with the tracefs file system
All functions have man pages
The tracefs_sql man page has the sqlhist program in it
https://git.kernel.org/pub/scm/libs/libtrace/libtracefs.git/
git clone git://git.kernel.org/pub/scm/libs/libtrace/libtracefs.git
cd libtracefs
make sqlhist # builds from the man page!
Synthetic events
# sqlhist -e -n wakeup_lat \
    'SELECT start.pid, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS delta FROM ' \
    'sched_waking AS start JOIN sched_switch AS end ON start.pid = end.next_pid ' \
    'WHERE end.next_pid = "cyclictest"'
cyclictest
A tool used by real-time developers to test wake up latency in the system
https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git/
git clone git://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
cd rt-tests
make cyclictest
Customize latency tracing
# sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, ' \
    '(end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat ' \
    'FROM sched_waking AS start JOIN sched_switch AS end ' \
    'ON start.pid = end.next_pid ' \
    'WHERE end.next_prio < 100 && end.next_comm == "cyclictest"'
# trace-cmd start -e all -e wakeup_lat -R stacktrace
# cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark
# trace-cmd show -s | tail -16
<idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1
<idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120
prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19
<idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17
<idle>-0 [002] d..5 23454.902258: <stack trace>
=> trace_event_raw_event_synth
=> action_trace
=> event_hist_trigger
=> event_triggers_call
=> trace_event_buffer_commit
=> trace_event_raw_event_sched_switch
=> __traceiter_sched_switch
=> __schedule
=> schedule_idle
=> do_idle
=> cpu_startup_entry
=> secondary_startup_64_no_verify
Customize latency tracing
# trace-cmd extract -s
# trace-cmd report --cpu 2 | tail -30
<idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1
<idle>-0 [002] 23454.902239: sched_wakeup: cyclictest:12275 [19] CPU:002
<idle>-0 [002] 23454.902241: hrtimer_expire_exit: hrtimer=0xffffbbd68286fe60
<idle>-0 [002] 23454.902241: hrtimer_cancel: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902242: hrtimer_expire_entry: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902243: sched_waking: comm=cyclictest pid=12272
prio=120 target_cpu=002
<idle>-0 [002] 23454.902244: prandom_u32: ret=1102749734
<idle>-0 [002] 23454.902246: sched_wakeup: cyclictest:12272 [120] CPU:002
<idle>-0 [002] 23454.902247: hrtimer_expire_exit: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902248: write_msr: 6e0, value 4866ce957272
<idle>-0 [002] 23454.902249: local_timer_exit: vector=236
<idle>-0 [002] 23454.902250: cpu_idle: state=4294967295 cpu_id=2
<idle>-0 [002] 23454.902251: rcu_utilization: Start context switch
<idle>-0 [002] 23454.902252: rcu_utilization: End context switch
<idle>-0 [002] 23454.902253: prandom_u32: ret=3692516021
<idle>-0 [002] 23454.902254: sched_switch: swapper/2:0 [120] R ==>
cyclictest:12275 [19]
<idle>-0 [002] 23454.902256: wakeup_lat: next_comm=cyclictest lat=17
<idle>-0 [002] 23454.902258: kernel_stack: <stack trace >
=> trace_event_raw_event_synth (ffffffff8121a0db)
=> action_trace (ffffffff8121e9fb)
=> event_hist_trigger (ffffffff8121ca8d)
=> event_triggers_call (ffffffff81216c72)
[..]
Customize latency tracing
# trace-cmd extract -s
# trace-cmd report --cpu 2 | tail -30
<idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1
<idle>-0 [002] 23454.902239: sched_wakeup: cyclictest:12275 [19] CPU:002
<idle>-0 [002] 23454.902241: hrtimer_expire_exit: hrtimer=0xffffbbd68286fe60
<idle>-0 [002] 23454.902241: hrtimer_cancel: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902242: hrtimer_expire_entry: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902243: sched_waking: comm=cyclictest pid=12272
prio=120 target_cpu=002
<idle>-0 [002] 23454.902244: prandom_u32: ret=1102749734
<idle>-0 [002] 23454.902246: sched_wakeup: cyclictest:12272 [120] CPU:002
<idle>-0 [002] 23454.902247: hrtimer_expire_exit: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902248: write_msr: 6e0, value 4866ce957272
<idle>-0 [002] 23454.902249: local_timer_exit: vector=236
<idle>-0 [002] 23454.902250: cpu_idle: state=4294967295 cpu_id=2
<idle>-0 [002] 23454.902251: rcu_utilization: Start context switch
<idle>-0 [002] 23454.902252: rcu_utilization: End context switch
<idle>-0 [002] 23454.902253: prandom_u32: ret=3692516021
<idle>-0 [002] 23454.902254: sched_switch: swapper/2:0 [120] R ==>
cyclictest:12275 [19]
<idle>-0 [002] 23454.902256: wakeup_lat: next_comm=cyclictest lat=17
<idle>-0 [002] 23454.902258: kernel_stack: <stack trace >
=> trace_event_raw_event_synth (ffffffff8121a0db)
=> action_trace (ffffffff8121e9fb)
=> event_hist_trigger (ffffffff8121ca8d)
=> event_triggers_call (ffffffff81216c72)
[..]
Customize latency tracing
# trace-cmd extract -s
# trace-cmd report --cpu 2 | tail -30
<idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1
<idle>-0 [002] 23454.902239: sched_wakeup: cyclictest:12275 [19] CPU:002
<idle>-0 [002] 23454.902241: hrtimer_expire_exit: hrtimer=0xffffbbd68286fe60
<idle>-0 [002] 23454.902241: hrtimer_cancel: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902242: hrtimer_expire_entry: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902243: sched_waking: comm=cyclictest pid=12272
prio=120 target_cpu=002
<idle>-0 [002] 23454.902244: prandom_u32: ret=1102749734
<idle>-0 [002] 23454.902246: sched_wakeup: cyclictest:12272 [120] CPU:002
<idle>-0 [002] 23454.902247: hrtimer_expire_exit: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902248: write_msr: 6e0, value 4866ce957272
<idle>-0 [002] 23454.902249: local_timer_exit: vector=236
<idle>-0 [002] 23454.902250: cpu_idle: state=4294967295 cpu_id=2
<idle>-0 [002] 23454.902251: rcu_utilization: Start context switch
<idle>-0 [002] 23454.902252: rcu_utilization: End context switch
<idle>-0 [002] 23454.902253: prandom_u32: ret=3692516021
<idle>-0 [002] 23454.902254: sched_switch: swapper/2:0 [120] R ==>
cyclictest:12275 [19]
<idle>-0 [002] 23454.902256: wakeup_lat: next_comm=cyclictest lat=17
<idle>-0 [002] 23454.902258: kernel_stack: <stack trace >
=> trace_event_raw_event_synth (ffffffff8121a0db)
=> action_trace (ffffffff8121e9fb)
=> event_hist_trigger (ffffffff8121ca8d)
=> event_triggers_call (ffffffff81216c72)
[..]
Customize latency tracing
Steven Rostedt
rostedt@goodmis.org
@srostedt

Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
Abida Shariff
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Inflectra
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 

Recently uploaded (20)

FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
 
Search and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical FuturesSearch and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical Futures
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 

New Ways to Find Latency in Linux Using Tracing

  • 1. Brought to you by New ways to find latency in Linux using tracing Steven Rostedt Open Source Engineer at VMware
  • 2. ©2021 VMware, Inc. New ways to find latency in Linux using tracing Steven Rostedt Open Source Engineer srostedt@vmware.com twitter: @srostedt
  • 3. What is ftrace? ● Official tracer of the Linux Kernel – Introduced in 2008 ● Came from the PREEMPT_RT patch set – Initially focused on finding causes of latency ● Has grown significantly since 2008 – Added new ways to find latency – Does much more than find latency
  • 4. The ftrace interface (the tracefs file system) # mount -t tracefs nodev /sys/kernel/tracing # ls /sys/kernel/tracing available_events max_graph_depth snapshot available_filter_functions options stack_max_size available_tracers osnoise stack_trace buffer_percent per_cpu stack_trace_filter buffer_size_kb printk_formats synthetic_events buffer_total_size_kb README timestamp_mode current_tracer recursed_functions trace dynamic_events saved_cmdlines trace_clock dyn_ftrace_total_info saved_cmdlines_size trace_marker enabled_functions saved_tgids trace_marker_raw error_log set_event trace_options eval_map set_event_notrace_pid trace_pipe events set_event_pid trace_stat free_buffer set_ftrace_filter tracing_cpumask function_profile_enabled set_ftrace_notrace tracing_max_latency hwlat_detector set_ftrace_notrace_pid tracing_on instances set_ftrace_pid tracing_thresh kprobe_events set_graph_function uprobe_events kprobe_profile set_graph_notrace uprobe_profile
  • 5. Introducing trace-cmd Luckily today, we do not need to know all those files ● trace-cmd is a front end interface to ftrace ● It interfaces with the tracefs directory for you https://www.trace-cmd.org git clone git://git.kernel.org/pub/scm/utils/trace-cmd/trace-cmd.git
  • 6. The Old Tracers ● Wake up tracers – All tasks – RT tasks – Deadline tasks ● Preemption off tracers – irqsoff tracer – preemptoff tracer – preemptirqsoff tracer
  • 7. Wake up tracer Task 1 Interrupt Wake up Task 2 event Task 2 Sched switch event latency
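The diagram above boils down to simple arithmetic: wake-up latency is the time from the wake-up event until the sched_switch event that actually puts the woken task on the CPU. A minimal shell sketch of that computation is below; the two sample lines are invented for illustration (they only mimic ftrace's "comm-pid [cpu] timestamp: event: ..." layout, with numbers chosen to give the 94 us figure seen in the wakeup_rt trace later in the deck):

```shell
# Wake-up latency = sched_switch timestamp - sched_waking timestamp.
# Sample lines are invented; they only mimic ftrace's text layout.
trace='
bash-1872   [004]  100.000010: sched_waking: comm=rcuc/4 pid=42
bash-1872   [004]  100.000104: sched_switch: prev_comm=bash ==> next_comm=rcuc/4
'
echo "$trace" | awk '
  /sched_waking/ { wake = $3 + 0 }            # "+ 0" strips the trailing ":"
  /sched_switch/ { printf "%.0f us\n", ($3 - wake) * 1000000 }
'
```

The kernel's wake-up tracers do this same subtraction in kernel space and only keep the trace when the result beats the previous maximum.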
  • 8. Running wakeup_rt tracer # trace-cmd start -p wakeup_rt # trace-cmd show # tracer: wakeup_rt # # wakeup_rt latency trace v1.1.5 on 5.10.52-test-rt47 # latency: 94 us, #211/211, CPU#4 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:8) # ----------------- # | task: rcuc/4-42 (uid:0 nice:0 policy:1 rt_prio:1) # ----------------- # # _--------=> CPU# # / _-------=> irqs-off # | / _------=> need-resched # || / _-----=> need-resched-lazy # ||| / _----=> hardirq/softirq # |||| / _---=> preempt-depth # ||||| / _--=> preempt-lazy-depth # |||||| / _-=> migrate-disable # ||||||| / delay # cmd pid |||||||| time | caller # / |||||||| | / bash-1872 4dN.h5.. 1us+: 1872:120:R + [004] 42: 98:R rcuc/4 bash-1872 4dN.h5.. 30us : <stack trace> => __ftrace_trace_stack => probe_wakeup => ttwu_do_wakeup => try_to_wake_up
  • 9. Running wakeup_rt tracer => invoke_rcu_core => rcu_sched_clock_irq => update_process_times => tick_sched_handle => tick_sched_timer => __hrtimer_run_queues => hrtimer_interrupt => __sysvec_apic_timer_interrupt => asm_call_irq_on_stack => sysvec_apic_timer_interrupt => asm_sysvec_apic_timer_interrupt => lock_acquire => _raw_spin_lock => shmem_get_inode => shmem_mknod => lookup_open.isra.0 => path_openat => do_filp_open => do_sys_openat2 => __x64_sys_openat => do_syscall_64 => entry_SYSCALL_64_after_hwframe bash-1872 4dN.h5.. 31us : 0 bash-1872 4dN.h4.. 32us : task_woken_rt <-ttwu_do_wakeup bash-1872 4dN..2.. 87us : put_prev_entity <-put_prev_task_fair bash-1872 4dN..2.. 87us : update_curr <-put_prev_entity bash-1872 4dN..2.. 87us : __update_load_avg_se <-update_load_avg bash-1872 4dN..2.. 87us : __update_load_avg_cfs_rq <-update_load_avg
  • 10. Running wakeup_rt tracer bash-1872 4dN..2.. 87us : pick_next_task_stop <-__schedule bash-1872 4dN..2.. 87us : pick_next_task_dl <-__schedule bash-1872 4dN..2.. 87us : pick_next_task_rt <-__schedule bash-1872 4dN..2.. 88us : update_rt_rq_load_avg <-pick_next_task_rt bash-1872 4d...3.. 88us : __schedule bash-1872 4d...3.. 88us : 1872:120:R ==> [004] 42: 98:R rcuc/4 bash-1872 4d...3.. 94us : <stack trace> => __ftrace_trace_stack => probe_wakeup_sched_switch => __schedule => preempt_schedule_common => preempt_schedule_thunk => _raw_spin_unlock => shmem_get_inode => shmem_mknod => lookup_open.isra.0 => path_openat => do_filp_open => do_sys_openat2 => __x64_sys_openat => do_syscall_64 => entry_SYSCALL_64_after_hwframe
  • 11. Interrupt off tracer Task Interrupt Irqs disabled latency Irqs enabled
  • 12. Running preemptirqsoff tracer # trace-cmd start -p preemptirqsoff # trace-cmd show # tracer: preemptirqsoff # # preemptirqsoff latency trace v1.1.5 on 5.14.0-rc4-test+ # -------------------------------------------------------------------- # latency: 2325 us, #3005/3005, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8) # ----------------- # | task: bash-48651 (uid:0 nice:0 policy:0 rt_prio:0) # ----------------- # => started at: _raw_spin_lock # => ended at: _raw_spin_unlock # # # _------=> CPU# # / _-----=> irqs-off # | / _----=> need-resched # || / _---=> hardirq/softirq # ||| / _--=> preempt-depth # |||| / delay # cmd pid ||||| time | caller # / ||||| | / trace-cm-48651 1...1 0us : _raw_spin_lock trace-cm-48651 1...1 1us : do_raw_spin_trylock <-_raw_spin_lock trace-cm-48651 1...1 1us : flush_tlb_batched_pending <-unmap_page_range trace-cm-48651 1...1 2us : vm_normal_page <-unmap_page_range trace-cm-48651 1...1 2us : page_remove_rmap <-unmap_page_range
  • 13. Running preemptirqsoff tracer [...] trace-cm-48651 1...1 2313us : unlock_page_memcg <-unmap_page_range trace-cm-48651 1...1 2314us : __rcu_read_unlock <-unlock_page_memcg trace-cm-48651 1...1 2315us : __tlb_remove_page_size <-unmap_page_range trace-cm-48651 1...1 2316us : vm_normal_page <-unmap_page_range trace-cm-48651 1...1 2317us : page_remove_rmap <-unmap_page_range trace-cm-48651 1...1 2318us : lock_page_memcg <-page_remove_rmap trace-cm-48651 1...1 2318us : __rcu_read_lock <-lock_page_memcg trace-cm-48651 1...1 2319us : unlock_page_memcg <-unmap_page_range trace-cm-48651 1...1 2320us : __rcu_read_unlock <-unlock_page_memcg trace-cm-48651 1...1 2321us : __tlb_remove_page_size <-unmap_page_range trace-cm-48651 1...1 2322us : _raw_spin_unlock <-unmap_page_range trace-cm-48651 1...1 2323us : do_raw_spin_unlock <-_raw_spin_unlock trace-cm-48651 1...1 2324us : preempt_count_sub <-_raw_spin_unlock trace-cm-48651 1...1 2325us : _raw_spin_unlock trace-cm-48651 1...1 2327us+: tracer_preempt_on <-_raw_spin_unlock trace-cm-48651 1...1 2343us : <stack trace> => unmap_page_range => unmap_vmas => exit_mmap => mmput => begin_new_exec => load_elf_binary => bprm_execve => do_execveat_common => __x64_sys_execve => do_syscall_64 => entry_SYSCALL_64_after_hwframe
  • 14. Running preemptirqsoff tracer # trace-cmd start -p preemptirqsoff -d -O sym-offset # trace-cmd show # tracer: preemptirqsoff [..] # latency: 248 us, #4/4, CPU#6 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8) # ----------------- # | task: ksoftirqd/6-47 (uid:0 nice:0 policy:0 rt_prio:0) # ----------------- # => started at: run_ksoftirqd # => ended at: run_ksoftirqd [..] # cmd pid ||||| time | caller # / ||||| | / <idle>-0 4d..1 1us!: cpuidle_enter_state+0xcf/0x410 kworker/-48096 4...1 242us : schedule+0x4d/0xe0 <-schedule+0x4d/0xe0 kworker/-48096 4...1 243us : tracer_preempt_on+0xf4/0x110 <-schedule+0x4d/0xe0 kworker/-48096 4...1 248us : <stack trace> => worker_thread+0xd4/0x3c0 => kthread+0x155/0x180 => ret_from_fork+0x22/0x30
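The maximum latency a latency tracer has seen is recorded in the trace header (and is also exposed via the tracing_max_latency file). A small sketch of pulling that number out of a header line in a script; the sample string mirrors the 2325 us preemptirqsoff header shown above:

```shell
# Extract the recorded maximum latency (in microseconds) from a latency
# tracer's header; the sample mirrors the preemptirqsoff output above.
header='# latency: 2325 us, #3005/3005, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)'
max_us=$(printf '%s\n' "$header" | sed -n 's/.*latency: \([0-9]*\) us.*/\1/p')
echo "$max_us us"
```

The same pattern works across the wakeup, irqsoff, preemptoff and preemptirqsoff tracers, since they share this header format.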
  • 16. Measuring latency from interrupts ● You can easily trace the latency from interrupts – For x86: # trace-cmd record -p function_graph -l handle_irq_event -l '*sysvec_*' -e irq_handler_entry -e 'irq_vectors:*entry*'
  • 17. Tracing Latency from Interrupts # trace-cmd report -l --cpu 2 sleep-2513 [002] 2973.197184: funcgraph_entry: | __sysvec_apic_timer_interrupt() { sleep-2513 [002] 2973.197186: local_timer_entry: vector=236 sleep-2513 [002] 2973.197196: funcgraph_exit: + 12.233 us | } sleep-2513 [002] 2973.197206: funcgraph_entry: | __sysvec_irq_work() { sleep-2513 [002] 2973.197207: irq_work_entry: vector=246 sleep-2513 [002] 2973.197207: funcgraph_exit: 1.405 us | } <idle>-0 [002] 2973.198186: funcgraph_entry: | __sysvec_apic_timer_interrupt() { <idle>-0 [002] 2973.198187: local_timer_entry: vector=236 <idle>-0 [002] 2973.198193: funcgraph_exit: 7.992 us | } <idle>-0 [002] 2974.991721: funcgraph_entry: | handle_irq_event() { <idle>-0 [002] 2974.991723: irq_handler_entry: irq=24 name=ahci[0000:00:1f.2] <idle>-0 [002] 2974.991733: funcgraph_exit: + 13.158 us | } <idle>-0 [002] 2977.039843: funcgraph_entry: | handle_irq_event() { <idle>-0 [002] 2977.039845: irq_handler_entry: irq=24 name=ahci[0000:00:1f.2] <idle>-0 [002] 2977.039855: funcgraph_exit: + 12.869 us | }
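Because function_graph prints each handler's run time at funcgraph_exit, a post-processing pass over `trace-cmd report` output can pick out the worst interrupt. A sketch over three abridged lines from the report above (the input is trimmed down to just the fields the script needs):

```shell
# Find the longest handler duration in function_graph output.
# Lines are abridged from the trace-cmd report above; funcgraph_exit
# prints the handler's run time ("+" marks durations over 10 us).
report='
funcgraph_exit: + 12.233 us | }
funcgraph_exit:   1.405 us | }
funcgraph_exit: + 13.158 us | }
'
echo "$report" | awk '
  /funcgraph_exit/ {
    for (i = 1; i < NF; i++)                 # duration is the field before "us"
      if ($(i+1) == "us" && $i + 0 > max) max = $i + 0
  }
  END { printf "max: %.3f us\n", max }
'
```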
  • 18. Problems with the latency tracers ● No control over what tasks they trace – They trace the highest priority process – May not be the process you are interested in ● Not flexible – Has one use case – General for the entire system
  • 19. Tracing interrupt latency with function graph tracer ● Does not have a “max latency” – You see all the latency in a trace – No way to record the max latency found
  • 24. Introducing Synthetic Events ● Can map two events into a single event – sched_waking + sched_switch wakeup_latency → – irqs_disabled + irqs_enabled irqs_off_latency → – irq_enter_handler + irq_exit_handler irq_latency → ● Have all the functionality as a normal event – Can be filtered on – Can have triggers attached (like histograms)
  • 26. Synthetic events # echo 'wakeup_lat s32 pid; u64 delta;' > /sys/kernel/tracing/synthetic_events # echo 'hist:keys=pid:__arg__1=pid,__arg__2=common_timestamp.usecs' > /sys/kernel/tracing/events/sched/sched_waking/trigger # echo 'hist:keys=next_pid:pid=$__arg__1,delta=common_timestamp.usecs-$__arg__2:onmatch(sched.sched_waking).trace(wakeup_lat,$pid,$delta) if next_comm == "cyclictest"' > /sys/kernel/tracing/events/sched/sched_switch/trigger Too Complex!
  • 27. Introducing libtracefs A library to interact with the tracefs file system All functions have man pages The tracefs_sql man page has the sqlhist program in it https://git.kernel.org/pub/scm/libs/libtrace/libtracefs.git/ git clone git://git.kernel.org/pub/scm/libs/libtrace/libtracefs.git cd libtracefs make sqlhist # builds from the man page!
  • 28. Synthetic events # sqlhist -e -n wakeup_lat 'SELECT start.pid, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS delta FROM sched_waking AS start JOIN sched_switch AS end ON start.pid = end.next_pid WHERE end.next_comm == "cyclictest"'
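Under the hood, the histogram triggers that sqlhist generates perform a pid-keyed join: the sched_waking trigger stores a timestamp keyed by pid, and the sched_switch trigger looks it up by next_pid and emits the delta. The same join can be sketched in user space over simplified event lines (the sample lines and their fields are invented, reduced from the real sched events):

```shell
# User-space sketch of the wakeup_lat join the kernel performs:
# remember the sched_waking timestamp per pid, then emit the delta
# (in microseconds) when sched_switch schedules that pid in.
# Sample lines are invented, reduced from the real event fields.
trace='
23454.902239 sched_waking pid=12275
23454.902256 sched_switch next_pid=12275
'
echo "$trace" | awk '
  /sched_waking/ { split($3, kv, "="); wake[kv[2]] = $1 }
  /sched_switch/ {
    split($3, kv, "=")
    if (kv[2] in wake)
      printf "wakeup_lat: pid=%s delta=%.0f\n", kv[2], ($1 - wake[kv[2]]) * 1000000
  }
'
```

Doing the join in the kernel, as the synthetic event does, avoids recording every sched_waking and sched_switch event just to throw most of them away in post-processing.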
  • 37. cyclictest A tool used by real-time developers to test wake up latency in the system https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git/ git clone git://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git cd rt-tests make cyclictest
  • 38. Customize latency tracing # sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat FROM sched_waking AS start JOIN sched_switch AS end ON start.pid = end.next_pid WHERE end.next_prio < 100 && end.next_comm == "cyclictest"' # trace-cmd start -e all -e wakeup_lat -R stacktrace # cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark # trace-cmd show -s | tail -16 <idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1 <idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19 <idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17 <idle>-0 [002] d..5 23454.902258: <stack trace> => trace_event_raw_event_synth => action_trace => event_hist_trigger => event_triggers_call => trace_event_buffer_commit => trace_event_raw_event_sched_switch => __traceiter_sched_switch => __schedule => schedule_idle => do_idle => cpu_startup_entry => secondary_startup_64_no_verify
  • 39. Customize latency tracing # sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, ’ ‘(end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat ’ ‘FROM sched_waking AS start JOIN sched_switch AS end ’ ‘ON start.pid = end.next_pid’ ‘WHERE end.next_prio < 100 && end.next_comm == "cyclictest"' # trace-cmd start -e all -e wakeup_lat -R stacktrace # cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark # trace-cmd show -s | tail -16 <idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1 <idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19 <idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17 <idle>-0 [002] d..5 23454.902258: <stack trace> => trace_event_raw_event_synth => action_trace => event_hist_trigger => event_triggers_call => trace_event_buffer_commit => trace_event_raw_event_sched_switch => __traceiter_sched_switch => __schedule => schedule_idle => do_idle => cpu_startup_entry => secondary_startup_64_no_verify
  • 40. Customize latency tracing # sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, ’ ‘(end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat ’ ‘FROM sched_waking AS start JOIN sched_switch AS end ’ ‘ON start.pid = end.next_pid’ ‘WHERE end.next_prio < 100 && end.next_comm == "cyclictest"' # trace-cmd start -e all -e wakeup_lat -R stacktrace # cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark # trace-cmd show -s | tail -16 <idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1 <idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19 <idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17 <idle>-0 [002] d..5 23454.902258: <stack trace> => trace_event_raw_event_synth => action_trace => event_hist_trigger => event_triggers_call => trace_event_buffer_commit => trace_event_raw_event_sched_switch => __traceiter_sched_switch => __schedule => schedule_idle => do_idle => cpu_startup_entry => secondary_startup_64_no_verify
  • 41. Customize latency tracing # sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, ’ ‘(end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat ’ ‘FROM sched_waking AS start JOIN sched_switch AS end ’ ‘ON start.pid = end.next_pid’ ‘WHERE end.next_prio < 100 && end.next_comm == "cyclictest"' # trace-cmd start -e all -e wakeup_lat -R stacktrace # cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark # trace-cmd show -s | tail -16 <idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1 <idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19 <idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17 <idle>-0 [002] d..5 23454.902258: <stack trace> => trace_event_raw_event_synth => action_trace => event_hist_trigger => event_triggers_call => trace_event_buffer_commit => trace_event_raw_event_sched_switch => __traceiter_sched_switch => __schedule => schedule_idle => do_idle => cpu_startup_entry => secondary_startup_64_no_verify
  • 42. Customize latency tracing # sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, ’ ‘(end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat ’ ‘FROM sched_waking AS start JOIN sched_switch AS end ’ ‘ON start.pid = end.next_pid’ ‘WHERE end.next_prio < 100 && end.next_comm == "cyclictest"' # trace-cmd start -e all -e wakeup_lat -R stacktrace # cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark # trace-cmd show -s | tail -16 <idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1 <idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19 <idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17 <idle>-0 [002] d..5 23454.902258: <stack trace> => trace_event_raw_event_synth => action_trace => event_hist_trigger => event_triggers_call => trace_event_buffer_commit => trace_event_raw_event_sched_switch => __traceiter_sched_switch => __schedule => schedule_idle => do_idle => cpu_startup_entry => secondary_startup_64_no_verify
  • 43. Customize latency tracing # sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, ’ ‘(end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat ’ ‘FROM sched_waking AS start JOIN sched_switch AS end ’ ‘ON start.pid = end.next_pid’ ‘WHERE end.next_prio < 100 && end.next_comm == "cyclictest"' # trace-cmd start -e all -e wakeup_lat -R stacktrace # cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark # trace-cmd show -s | tail -16 <idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1 <idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19 <idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17 <idle>-0 [002] d..5 23454.902258: <stack trace> => trace_event_raw_event_synth => action_trace => event_hist_trigger => event_triggers_call => trace_event_buffer_commit => trace_event_raw_event_sched_switch => __traceiter_sched_switch => __schedule => schedule_idle => do_idle => cpu_startup_entry => secondary_startup_64_no_verify
  • 44. Customize latency tracing # sqlhist -e -n wakeup_lat -T -m lat 'SELECT end.next_comm AS comm, ’ ‘(end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) AS lat ’ ‘FROM sched_waking AS start JOIN sched_switch AS end ’ ‘ON start.pid = end.next_pid’ ‘WHERE end.next_prio < 100 && end.next_comm == "cyclictest"' # trace-cmd start -e all -e wakeup_lat -R stacktrace # cyclictest -l 1000 -p80 -i250 -a -t -q -m -d 0 -b 1000 --tracemark # trace-cmd show -s | tail -16 <idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1 <idle>-0 [002] d..2 23454.902254: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=12275 next_prio=19 <idle>-0 [002] d..4 23454.902256: wakeup_lat: next_comm=cyclictest lat=17 <idle>-0 [002] d..5 23454.902258: <stack trace> => trace_event_raw_event_synth => action_trace => event_hist_trigger => event_triggers_call => trace_event_buffer_commit => trace_event_raw_event_sched_switch => __traceiter_sched_switch => __schedule => schedule_idle => do_idle => cpu_startup_entry => secondary_startup_64_no_verify
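The sqlhist command above generates synthetic-event and histogram-trigger definitions and writes them into tracefs. A rough sketch of the equivalent raw tracefs commands, following the pattern documented in the kernel's Documentation/trace/histogram.rst (the exact strings sqlhist emits may differ; this assumes tracefs is mounted at /sys/kernel/tracing, root privileges, and a kernel with CONFIG_HIST_TRIGGERS):

```shell
#!/bin/sh
# Sketch only: hand-written histogram triggers approximating what
# sqlhist programs for the wakeup_lat synthetic event.
TRACEFS=/sys/kernel/tracing

if [ -w "$TRACEFS/synthetic_events" ]; then
    # Declare the synthetic event and its fields (comm, lat).
    echo 'wakeup_lat char comm[16]; u64 lat' >> "$TRACEFS/synthetic_events"

    # On sched_waking, save a per-pid timestamp in variable ts0.
    echo 'hist:keys=pid:ts0=common_timestamp.usecs' \
        >> "$TRACEFS/events/sched/sched_waking/trigger"

    # On a matching sched_switch, compute the delta and fire wakeup_lat,
    # filtered to RT priorities and the cyclictest comm as in the SQL.
    echo 'hist:keys=next_pid:lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).trace(wakeup_lat,next_comm,$lat) if next_prio<100 && next_comm=="cyclictest"' \
        >> "$TRACEFS/events/sched/sched_switch/trigger"
else
    # Not root, or tracefs/HIST_TRIGGERS unavailable on this system.
    echo "tracefs not writable; skipping"
fi
```

The JOIN ... ON clause of the SQL maps to the onmatch() handler keyed on pid/next_pid, and the WHERE clause maps to the trigger's if filter.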
  • 50. Customize latency tracing
# trace-cmd extract -s
# trace-cmd report --cpu 2 | tail -30
<idle>-0 [001] d..1 23454.902254: cpu_idle: state=0 cpu_id=1
<idle>-0 [002] 23454.902239: sched_wakeup: cyclictest:12275 [19] CPU:002
<idle>-0 [002] 23454.902241: hrtimer_expire_exit: hrtimer=0xffffbbd68286fe60
<idle>-0 [002] 23454.902241: hrtimer_cancel: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902242: hrtimer_expire_entry: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902243: sched_waking: comm=cyclictest pid=12272 prio=120 target_cpu=002
<idle>-0 [002] 23454.902244: prandom_u32: ret=1102749734
<idle>-0 [002] 23454.902246: sched_wakeup: cyclictest:12272 [120] CPU:002
<idle>-0 [002] 23454.902247: hrtimer_expire_exit: hrtimer=0xffffbbd6826efe70
<idle>-0 [002] 23454.902248: write_msr: 6e0, value 4866ce957272
<idle>-0 [002] 23454.902249: local_timer_exit: vector=236
<idle>-0 [002] 23454.902250: cpu_idle: state=4294967295 cpu_id=2
<idle>-0 [002] 23454.902251: rcu_utilization: Start context switch
<idle>-0 [002] 23454.902252: rcu_utilization: End context switch
<idle>-0 [002] 23454.902253: prandom_u32: ret=3692516021
<idle>-0 [002] 23454.902254: sched_switch: swapper/2:0 [120] R ==> cyclictest:12275 [19]
<idle>-0 [002] 23454.902256: wakeup_lat: next_comm=cyclictest lat=17
<idle>-0 [002] 23454.902258: kernel_stack: <stack trace>
=> trace_event_raw_event_synth (ffffffff8121a0db)
=> action_trace (ffffffff8121e9fb)
=> event_hist_trigger (ffffffff8121ca8d)
=> event_triggers_call (ffffffff81216c72)
[..]
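Both trace-cmd show -s and trace-cmd extract -s read the snapshot buffer, which the -R stacktrace trigger filled at the moment the wakeup_lat event fired, rather than the live ring buffer. A minimal sketch of inspecting the same buffers directly through tracefs (paths assume tracefs is mounted at /sys/kernel/tracing and the script is run as root; it skips cleanly elsewhere):

```shell
#!/bin/sh
# Sketch only: read the snapshot buffer that trace-cmd show -s displays.
TRACEFS=/sys/kernel/tracing

if [ -r "$TRACEFS/snapshot" ]; then
    # Whole-system snapshot, last few events.
    tail -16 "$TRACEFS/snapshot"

    # Per-CPU view of the same snapshot buffer (CPU 2 here,
    # matching the --cpu 2 filter used with trace-cmd report).
    tail -30 "$TRACEFS/per_cpu/cpu2/snapshot"
else
    echo "tracefs snapshot not available; skipping"
fi
```

The advantage of trace-cmd extract -s is that it writes the snapshot into a trace.dat file, so the capture can be post-processed repeatedly with trace-cmd report filters after tracing has moved on.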
  • 57. Brought to you by Steven Rostedt rostedt@goodmis.org @srostedt